Alan Baker (mathematician)

Alan Baker FRS[1] (19 August 1939 – 4 February 2018[2]) was an English mathematician, known for his work on effective methods in number theory, in particular those arising from transcendental number theory.

Born: 19 August 1939, London, England
Died: 4 February 2018 (aged 78), Cambridge, England
Alma mater: University College London; University of Cambridge
Known for: Number theory; Diophantine equations; Baker's theorem; Baker–Heegner–Stark theorem
Awards: Fields Medal (1970); Adams Prize (1972)
Fields: Mathematics
Institutions: University of Cambridge
Thesis: Some Aspects of Diophantine Approximation (1964)
Doctoral advisor: Harold Davenport
Doctoral students: John Coates, Yuval Flicker, Roger Heath-Brown, David Masser, Cameron Stewart

Life

Alan Baker was born in London on 19 August 1939. He attended Stratford Grammar School, East London, and began his academic career as a student of Harold Davenport, at University College London and later at Trinity College, Cambridge, where he received his PhD.[3] He was a visiting scholar at the Institute for Advanced Study in 1970 when he was awarded the Fields Medal at the age of 31.[4] In 1974 he was appointed Professor of Pure Mathematics at Cambridge University, a position he held until he became emeritus in 2006. He was a fellow of Trinity College from 1964 until his death.[3] His interests were in number theory, transcendence, linear forms in logarithms, effective methods, Diophantine geometry and Diophantine analysis.
In 2012 he became a fellow of the American Mathematical Society.[5] He was also made a foreign fellow of the National Academy of Sciences, India.[6]

Research

Baker generalised the Gelfond–Schneider theorem, itself a solution to Hilbert's seventh problem.[7] Specifically, Baker showed that if $\alpha _{1},\dots,\alpha _{n}$ are algebraic numbers (other than 0 and 1), and if $\beta _{1},\dots,\beta _{n}$ are irrational algebraic numbers such that the set $\{1,\beta _{1},\dots,\beta _{n}\}$ is linearly independent over the rational numbers, then the number $\alpha _{1}^{\beta _{1}}\alpha _{2}^{\beta _{2}}\cdots \alpha _{n}^{\beta _{n}}$ is transcendental. Baker made significant contributions to several areas of number theory, such as the Gauss class number problem,[8] Diophantine approximation, and Diophantine equations such as the Mordell curve.[9][10]

Selected publications

• Baker, Alan (1966), "Linear forms in the logarithms of algebraic numbers. I", Mathematika, 13 (2): 204–216, doi:10.1112/S0025579300003971, ISSN 0025-5793, MR 0220680
• Baker, Alan (1967a), "Linear forms in the logarithms of algebraic numbers. II", Mathematika, 14: 102–107, doi:10.1112/S0025579300008068, ISSN 0025-5793, MR 0220680
• Baker, Alan (1967b), "Linear forms in the logarithms of algebraic numbers. III", Mathematika, 14 (2): 220–228, doi:10.1112/S0025579300003843, ISSN 0025-5793, MR 0220680
• Baker, Alan (1990), Transcendental number theory, Cambridge Mathematical Library (2nd ed.), Cambridge University Press, ISBN 978-0-521-39791-9, MR 0422171; 1st edition, 1975.[11]
• Baker, Alan; Wüstholz, G. (2007), Logarithmic forms and Diophantine geometry, New Mathematical Monographs, vol. 9, Cambridge University Press, ISBN 978-0-521-88268-2, MR 2382891

Honours and awards

• 1970: Fields Medal
• 1972: Adams Prize
• 1973: Fellowship of the Royal Society

References

1. Masser, David (2023). "Alan Baker. 19 August 1939 – 4 February 2018". Biographical Memoirs of Fellows of the Royal Society. 74.
2. Trinity College website, retrieved 5 February 2018.
3. "BAKER, Prof. Alan". Who's Who & Who Was Who. Vol. 2019 (online ed.). A & C Black. (Subscription or UK public library membership required.)
4. Institute for Advanced Study: A Community of Scholars. Archived 6 January 2013 at the Wayback Machine.
5. List of Fellows of the American Mathematical Society, retrieved 3 November 2012.
6. "National Academy of Sciences, India: Foreign Fellows". Archived from the original on 18 February 2017. Retrieved 2 June 2018.
7. Biography in Encyclopædia Britannica. http://www.britannica.com/eb/article-9084909/Alan-Baker
8. Goldfeld, Dorian (1985). "Gauss' class number problem for imaginary quadratic fields". Bulletin of the American Mathematical Society. 13 (1): 23–37. doi:10.1090/s0273-0979-1985-15352-2.
9. Masser, David (2021). "Alan Baker, FRS, 1939–2018". Bulletin of the London Mathematical Society. 53 (6): 1916–1949. doi:10.1112/blms.12553.
10. Wüstholz, Gisbert (2019). "Obituary of Alan Baker FRS". Acta Arithmetica. 189 (4): 309–345. doi:10.4064/aa181211-14-12.
11. Stolarsky, Kenneth B. (1978). "Review: Transcendental number theory by Alan Baker; Lectures on transcendental numbers by Kurt Mahler; Nombres transcendants by Michel Waldschmidt" (PDF). Bull. Amer. Math. Soc. 84 (8): 1370–1378. doi:10.1090/S0002-9904-1978-14584-4.

External links

• O'Connor, John J.; Robertson, Edmund F., "Alan Baker", MacTutor History of Mathematics Archive, University of St Andrews
• Alan Baker at the Mathematics Genealogy Project
• Masser, David (January 2019). "Alan Baker 1939–2018" (PDF). Notices of the American Mathematical Society. 66 (1): 32–35. doi:10.1090/noti1753.
\begin{document} \title{\textit{\textbf{\boldmath Complex uniformly resolvable decompositions of $K_v$ }}} \author[1]{\textsf{\textbf{Csilla Bujt\' as}} \thanks{Supported by the Slovenian Research Agency under the project N1-0108}$^,\!\! $ } \author[2]{\textsf{\textbf{Mario Gionfriddo}} } \author[2]{\textsf{\textbf{Elena Guardo}} } \author[$\!\! $ ]{\textsf{\textbf{\framebox{Lorenzo Milazzo}}} } \author[2]{\\\textsf{\textbf{Salvatore Milici}} \thanks{Supported by MIUR and I.N.D.A.M. (G.N.S.A.G.A.), Italy and by Universit\`a degli Studi di Catania, ``Piano della Ricerca 2016/2018 Linea di intervento 2'' }$^,\!\! $ } \author[4]{\textsf{\textbf{Zsolt Tuza}} \thanks{Supported by the National Research, Development and Innovation Office -- NKFIH under the grant SNN 116095.}$^, $\thanks{corresponding author}$^{,3,} $} \affil[1]{ \small Faculty of Mathematics and Physics, University of Ljubljana, Jadranska 19, Ljubljana 1000, Slovenia; \ e-mail: [email protected] } \affil[2]{ \small Dipartimento di Matematica e Informatica, Universit\`a di Catania, Viale A. Doria, 6, \break 95125 - Catania, Italia; \ e-mail: \{gionfriddo,guardo,milici\}@dmi.unict.it } \affil[3]{ \small Alfr\'ed R\'enyi Institute of Mathematics, 1053 Buda\-pest, Re\'altanoda u.~13--15, Hungary \break $^4$ \small Department of Computer Science and Systems Technology, University of Pannonia, \break 8200 Veszpr\'em, Egyetem u.~10, Hungary; \ e-mail: [email protected] } \date{} \maketitle \begin{center} \large { \em We dedicate this paper to the good friend and colleague Lorenzo Milazzo\\ who passed away in March 2019.} \end{center} \begin{abstract} In this paper we consider the complex uniformly resolvable decompositions of the complete graph $K_v$ into subgraphs such that each resolution class contains only blocks isomorphic to the same graph from a given set $\cH$. We completely determine the spectrum for the cases $\cH =\{K_2, P_3, K_3\}$, $\cH =\{P_4, C_4\}$, and $\cH =\{K_2, P_4, C_4\}$. 
\end{abstract} \noindent\textbf{Keywords:} Resolvable decomposition; complex uniformly resolvable decomposition; path; cycle. \noindent\textbf{AMS classification}: {05C51, 05C38, 05C70}.\\ \section{Introduction and definitions}\label{introduzione} Given a set $\cH$ of pairwise non-isomorphic graphs, an \emph{$\cH$-decomposition} (or \emph{$\cH$-design}) of a graph $G$ is a decomposition of the edge set of $G$ into subgraphs (called \emph{blocks}) isomorphic to some element of $\cH$. An $\cH$-\emph{factor} of $G$ is a spanning subgraph of $G$ which is a vertex-disjoint union of some copies of graphs belonging to $\cH$. If $\cH= \{H\}$, we will briefly speak of an $H$-factor. An $\cH$-decomposition of $G$ is \emph{resolvable} if its blocks can be partitioned into $\cH$-factors ($\cH$-\emph{factorization} or resolution of $G$). An $\cH$-factor in an $\cH$-factorization is referred to as a \emph{parallel class}. Note that the parallel classes are mutually edge-disjoint, by definition. An $\cH$-factorization $\cal F$ of $G$ is called \emph{uniform} if each factor of ${\cal F}$ is an $H$-factor for some graph $H \in {\cal H}$. A $K_2$-factorization of $G$ is known as a 1-\emph{factorization} and its factors are called 1-{\em factors}; it is well known that a 1-factorization of $K_v$ exists if and only if $v$ is even (\cite{Lu}). If $\cH=\{F_1,\dots,F_k\}$ and $r_{i}\geq 0$ for $i=1,\dots,k$, we denote by $(F_1,\dots,F_k)$-URD$(v; r_{1},\dots,r_{k})$ a uniformly resolvable decomposition of the complete graph $K_v$ having exactly $r_i$ $F_{i}$-factors. A {\em complex} $(F_1,\dots,F_k)$-URD$(v; r_{1},\dots,r_{k})$ is a uniformly resolvable decomposition of the complete graph $K_v$ into $r_{1}+\dots+r_{k}$ parallel classes with the requirement that at least one parallel class is present for each $F_{i}\in\cH$, i.e., $r_{i}>0$ for $i=1,\dots, k$. 
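To make the definitions above concrete, here is a small self-contained sketch (the data layout and function names are ours, purely for illustration): a block is given by its edge list, a parallel class is a list of vertex-disjoint blocks spanning all of $V(K_v)$, and a resolution is a family of classes whose blocks together partition $E(K_v)$.

```python
from itertools import combinations

def block_vertices(block):
    """Vertex set of a block given as a list of edges (2-tuples)."""
    return {v for e in block for v in e}

def is_parallel_class(blocks, vertices):
    """Blocks are vertex-disjoint and together cover every vertex once."""
    seen = []
    for b in blocks:
        seen.extend(block_vertices(b))
    return sorted(seen) == sorted(vertices)

def is_resolution(classes, v):
    """Each class is a parallel class and the blocks partition E(K_v)."""
    vertices = range(v)
    if not all(is_parallel_class(c, vertices) for c in classes):
        return False
    all_edges = [frozenset(e) for c in classes for b in c for e in b]
    return (len(all_edges) == len(set(all_edges)) == v * (v - 1) // 2
            and set(all_edges) == {frozenset(p) for p in combinations(vertices, 2)})

# A 1-factorization of K_4: three classes, each a perfect matching of K_2-blocks.
one_factorization = [
    [[(0, 1)], [(2, 3)]],
    [[(0, 2)], [(1, 3)]],
    [[(0, 3)], [(1, 2)]],
]
assert is_resolution(one_factorization, 4)
```

Dropping one of the three classes leaves edges of $K_4$ uncovered, so the checker rejects it.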
Recently, the existence problem for $\cH$-factorizations of $K_v$ has been studied and a lot of results have been obtained, especially on the following types of uniformly resolvable ${\cal H}$-decompositions: for a set $\cH$ consisting of two complete graphs of orders at most five in \cite{DLD, R, SG, WG}; for a set $\cH$ of two or three paths on two, three, or four vertices in \cite{GM1,GM2, LMT}; for $\cH =\{P_3, K_3+e\}$ in \cite{GM}; for $\cH =\{K_3, K_{1,3}\}$ in \cite{KMT}; for $\cH =\{C_4, P_{3}\}$ in \cite{M}; for $\cH =\{K_3, P_{3}\}$ in \cite{MT}; for $1$-factors and $n$-stars in \cite{KKMT}; and for $\cH =\{P_2, P_{3}, P_4\}$ in \cite{LMT}. In connection with our current studies the following cases are most relevant: \begin{itemize} \item perfect matchings and parallel classes of triangles or 4-cycles ($\{K_2,K_3\}$ or $\{K_2,C_4\}$, Rees \cite{R}); \item perfect matchings and parallel classes of 3-paths ($\{K_2,P_3\}$, Bermond et al.\ \cite{BHY}, Gionfriddo and Milici~\cite{GM1}); \item parallel classes of 3-paths and triangles ($\{K_3,P_3\}$, Milici and Tuza \cite{MT}). \end{itemize} In this paper we give a complete characterization of the spectrum (the set of all admissible combinations of the parameters) for the following two triplets of graphs and for the pair contained in one of them which is not covered by the cases known so far: \begin{itemize} \item complex $\{K_2, P_3, K_3\}$-decompositions of order $v$ (Section \ref{sec:MPT}, Theorem \ref{theorem11}); \item complex $\{K_2, P_4, C_4\}$-decompositions of order $v$ (Section \ref{sec:MPC}, Theorem \ref{theorem12}); \item complex $\{P_4, C_4\}$-decompositions of order $v$ (Section \ref{sec:PC}, Theorem \ref{theorem13}). \end{itemize} We summarize the formulation of those results in the concluding section, where a conjecture related to the method of ``metamorphosis'' of parallel classes is also raised. We provide the basis for this approach by applying linear algebra in Section \ref{sec:2}. 
\section{Local metamorphosis} \label{sec:2} In this section we prove three relations between uniform parallel classes of 4-cycles and 4-paths that will be used in the proofs of our main theorems. Before presenting the new statements, let us recall the Milici--Tuza--Wilson Lemma from \cite{MT}. \begin{theorem} \label{theorem1}\cite{MT} The union of two parallel classes of\/ $3$-cycles of\/ $K_v$ can be decomposed into three parallel classes of\/ $P_3$. \end{theorem} The next two results, Theorems~\ref{theorem2} and \ref{theorem3}, will directly imply Theorem~\ref{theorem4} which states a pure metamorphosis from $4$-cycles to $4$-paths. \begin{theorem} \label{theorem2} The union of two parallel classes of\/ $C_4$ is decomposable into two parallel classes of\/ $P_4$ and one perfect matching. \end{theorem} \begin{proof} Let the vertices be $v_1,\dots,v_n$ where $n$ is a multiple of 4. The union of two parallel classes of $C_4$ forms a 4-regular graph $G$ with $2n$ edges, say $e_1,\dots,e_{2n}$. We associate a Boolean variable $x_i$ with each edge $e_i$ ($1\le i\le 2n$) and construct a system of linear equations over $GF(2)$, which has $\frac{3n}{2} -1$ equations over the $2n$ variables. Let us set $$ x_{i_1} + x_{i_2} + x_{i_3} + x_{i_4} = 1 \qquad (\mbox{\rm mod } 2) $$ for each 4-tuple of indices such that $e_{i_1},e_{i_2},e_{i_3},e_{i_4}$ are either the edges of a $C_4$ in a parallel class (call this a $C$-equation) or are the four edges incident with a vertex $v_i$ (a $V$-equation). This gives $\frac 32 n$ equations, but the $V$-equation for $v_n$ can be omitted since the $n$ $V$-equations sum up to 0 (as each edge is counted twice in the total sum) and therefore the one for $v_n$ follows from the others. We claim that this system of equations is contradiction-free over $GF(2)$. 
To show this, we need to prove that if the left sides of a subcollection $\mathcal{E}$ of the equations sum up to 0, then also the right sides have zero sum; that is, the number $|\mathcal{E}|$ of its equations is even. Observe that each variable is present in precisely three equations: in one $C$-equation and two $V$-equations. Hence, to have zero sum on the left side, any $x_i$ should either not appear in any equations of $\mathcal{E}$ or be present in precisely two. This means one of the following two situations. \begin{description} \item[$(T1)$] If $(e_{i_1},e_{i_2},e_{i_3},e_{i_4})$ is a 4-cycle (in this cyclic order of edges) and its $C$-equation belongs to $\mathcal{E}$, then precisely two related $V$-equations must be present in $\mathcal{E}$, namely either those for the vertices $e_{i_1}\cap e_{i_2}$ and $e_{i_3}\cap e_{i_4}$ or those for $e_{i_2}\cap e_{i_3}$ and $e_{i_1}\cap e_{i_4}$. \item[$(T2)$] If $(e_{i_1},e_{i_2},e_{i_3},e_{i_4})$ is a 4-cycle such that its $C$-equation does not belong to $\mathcal{E}$ but some $x_{i_j}$ ($1\le j\le 4$) is involved in $\mathcal{E}$, then all the four $V$-equations for $v_{i_1},v_{i_2},v_{i_3},v_{i_4}$ must be present in $\mathcal{E}$. \end{description} In the first and second parallel class of 4-cycles, respectively, let us denote the number of cycles of type $(T1)$ by $a_1$ and $b_1$, and that of type $(T2)$ by $a_2$ and $b_2$. Then the number of $V$-equations in $\mathcal{E}$ is equal to both $2a_1+4a_2$ and $2b_1+4b_2$, which is the same as the average of these two numbers. Thus, the number $|\mathcal{E}|$ of equations is equal to $$ a_1 + b_1 + \frac 12 ((2a_1+4a_2) + (2b_1+4b_2)) = 2 ( a_1+a_2 + b_1+b_2 ) $$ that is even, as needed. Since the system of equations is non-contradictory, it has a solution $\xi\in\{0,1\}^{2n}$ over $GF(2)$. 
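The linear-algebra argument lends itself to a direct computational check. The following sketch (a hypothetical instance on $n=8$ vertices; all naming is ours) builds the $C$- and $V$-equations for two edge-disjoint parallel classes of $C_4$, solves the system by Gaussian elimination over $GF(2)$, applies the switching to a basic solution described next in the proof, and confirms that the chosen edges form a perfect matching.

```python
# Hypothetical instance: two edge-disjoint parallel classes of C_4 on 8 vertices.
class_A = [(0, 1, 2, 3), (4, 5, 6, 7)]
class_B = [(0, 2, 4, 6), (1, 3, 5, 7)]

def cycle_edges(c):
    """The four edges of a 4-cycle given by its vertices in cyclic order."""
    return [frozenset((c[i], c[(i + 1) % 4])) for i in range(4)]

cycles = class_A + class_B
edges = [e for c in cycles for e in cycle_edges(c)]
idx = {e: i for i, e in enumerate(edges)}
n, m = 8, len(edges)              # 2n = 16 GF(2) variables, one per edge

rows = []                         # each row: m coefficients + right-hand side
for c in cycles:                  # C-equations: the edges of a cycle sum to 1
    row = [0] * (m + 1)
    for e in cycle_edges(c):
        row[idx[e]] = 1
    row[m] = 1
    rows.append(row)
for v in range(n - 1):            # V-equations (the last one is redundant)
    row = [0] * (m + 1)
    for e, i in idx.items():
        if v in e:
            row[i] = 1
    row[m] = 1
    rows.append(row)

# Gaussian elimination over GF(2); the proof guarantees consistency.
pivots, r = [], 0
for col in range(m):
    piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
    if piv is None:
        continue
    rows[r], rows[piv] = rows[piv], rows[r]
    for i in range(len(rows)):
        if i != r and rows[i][col]:
            rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
    pivots.append(col)
    r += 1
assert all(any(row[:m]) or row[m] == 0 for row in rows)   # no 0 = 1 row

x = [0] * m                       # one solution: free variables set to 0
for i, col in enumerate(pivots):
    x[col] = rows[i][m]

# Switching: flip all four values on any cycle carrying three 1s, so that
# every cycle ends up with exactly one chosen edge.
for c in cycles:
    if sum(x[idx[e]] for e in cycle_edges(c)) == 3:
        for e in cycle_edges(c):
            x[idx[e]] ^= 1

matching = [e for e in edges if x[idx[e]]]
assert len(matching) == n // 2                      # a perfect matching...
assert {v for e in matching for v in e} == set(range(n))   # ...covering all vertices
```

Removing the matching edges from the two classes then leaves each 4-cycle as a $P_4$, as the proof concludes.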
We observe further that in any $C$-equation $x_{i_1} + x_{i_2} + x_{i_3} + x_{i_4} = 1$ we may switch the values from $\xi(x_{i_j})$ to $1-\xi(x_{i_j})$ simultaneously for all $1\le j\le 4$, and doing so the modified values remain a solution because the parities of sums in the $V$-equations do not change either. In this way, we can transform $\xi$ to a basic solution $\xi_0$ in which every $C$-equation contains precisely one 1 and three 0s. Since each $V$-equation contains precisely two or zero variables from each $C$-equation, it follows that in the basic solution $\xi_0$ each $V$-equation, too, contains precisely one $1$ and three $0$s. As a consequence, the variables which have $x_i=1$ in the basic solution define a perfect matching in $G$ (since at most two 1s may occur at each vertex, and then the corresponding $V$-equation implies that there is precisely one). Moreover, removing those edges from $G$, each cycle of each parallel class becomes a $P_4$. In this way we obtain two parallel classes of $P_4$, and one further class which is a perfect matching. \end{proof} \begin{theorem} \label{theorem3}The edge-disjoint union of a perfect matching and a parallel class of\/ $C_4$ is decomposable into two parallel classes of\/ $P_4$. \end{theorem} \begin{proof} We apply several ideas from the previous proof, but in a somewhat different way. We now introduce Boolean variables $x_1,\dots,x_n$ for the edges $e_1,\dots,e_n$ of the 4-cycles only; but still there will be two kinds of linear equations, namely $n/4$ of them for 4-cycles (called $C$-equations) and $n/2$ of them for the edges of the perfect matching ($M$-equations). They are of the same form as before: $$ x_{i_1} + x_{i_2} + x_{i_3} + x_{i_4} = 1 \qquad (\mbox{\rm mod } 2) $$ The $C$-equations require $e_{i_1},e_{i_2},e_{i_3},e_{i_4}$ to be the edges of a $C_4$ in the parallel class. The $M$-equations take $e_{i_1},e_{i_2},e_{i_3},e_{i_4}$ as the four edges incident with a matching edge. 
If a matching edge is the diagonal of a 4-cycle, then their equations coincide; and if a matching edge shares just one vertex with a 4-cycle then the $M$-equation and the $C$-equation share two variables which correspond to consecutive edges on the cycle. Further, we recall that the matching is edge-disjoint from the cycles, therefore each variable associated with a cycle-edge occurs in precisely two distinct $M$-equations. These facts imply that only two types of $C$-equations can occur in a subcollection $\mathcal{E}$ of equations whose left sides sum up to 0 over $GF(2)$. \begin{description} \item[$(T1)$] If $(e_{i_1},e_{i_2},e_{i_3},e_{i_4})$ is a 4-cycle and its $C$-equation belongs to $\mathcal{E}$, then the corresponding $C_4$ has precisely two (antipodal) vertices for which the $M$-equations of the incident matching edges are present in $\mathcal{E}$. (At the moment it is unimportant whether those two vertices form a matching edge or not.) \item[$(T2)$] If $(e_{i_1},e_{i_2},e_{i_3},e_{i_4})$ is a 4-cycle whose $C$-equation does not belong to $\mathcal{E}$ but some $x_{i_j}$ ($1\le j\le 4$) is involved in $\mathcal{E}$, then each $M$-equation belonging to a matching edge incident with some of the four vertices $v_{i_1},v_{i_2},v_{i_3},v_{i_4}$ is present in $\mathcal{E}$. (It is again unimportant whether one or both or none of the diagonals of the $C_4$ in question is a matching edge.) \end{description} Let now $a_1$ and $a_2$ denote the number of cycles with type $(T1)$ and type $(T2)$, respectively. By what has been said, the number of vertices requiring an $M$-equation is equal to $2a_1+4a_2$. Since each of those equations is now counted at both ends of the corresponding matching edge, we obtain that $\mathcal{E}$ contains exactly $a_1+2a_2$ $M$-equations; moreover it has $a_1$ $C$-equations, by definition. Thus, the number $|\mathcal{E}|$ of equations is equal to $a_1 + (a_1+2a_2) = 2 ( a_1+a_2 )$ which is even. 
Thus, if the left sides in $\mathcal{E}$ sum up to zero, then also the right sides have sum 0 in $GF(2)$. It proves that the system of the $3n/4$ equations is contradiction-free and has a solution $\xi\in\{0,1\}^n$ over $GF(2)$. Now, we observe that in any $C$-equation $x_{i_1} + x_{i_2} + x_{i_3} + x_{i_4} = 1$ we may switch the values from $\xi(x_{i_j})$ to $1-\xi(x_{i_j})$ simultaneously for all $1\le j\le 4$. Doing so, the modified values remain a solution as the parities of sums in the $M$-equations do not change either. In this way we can transform $\xi$ to a basic solution $\xi_0$ in which every $C$-equation contains precisely one $1$ and three $0$s. Since each $M$-equation has precisely two or four or zero variables from any $C$-equation, it follows that in the basic solution $\xi_0$ each $M$-equation, too, contains precisely one $1$ and three $0$s. As a consequence, the variables (cycle-edges) which have $x_i=1$ in the basic solution establish a pairing between the edges of the original matching. Hence the set $I=\{e_i : \xi_0(x_i)=1\}$ together with the edges of the given matching factor determines a $P_4$-factor. Moreover, removing the edges of $I$ from the $4$-cycles, we obtain another parallel class of paths $P_4$. \end{proof} These two types of metamorphosis can be combined to obtain the following third one. \begin{theorem} \label{theorem4} The union of three parallel classes of\/ $C_4$ is decomposable into four parallel classes of\/ $P_4$. \end{theorem} \begin{proof} Applying Theorem \ref{theorem2} we transform the union of the first and the second $C_4$-class into two $P_4$-classes and a perfect matching. After that we combine the third $C_4$-class with the perfect matching just obtained into two further $P_4$-classes, by Theorem \ref{theorem3}. 
\end{proof} \section{The spectrum for $\cH=\{K_2,P_3,K_3\}$} \label{sec:MPT} In this section we consider complex uniformly resolvable decompositions of the complete graph $K_v$ into $m$ classes containing only copies of 1-factors (perfect matchings), $p$ classes containing only copies of paths $P_3$ and $t$ classes containing only copies of triangles $K_3$. The current problem is to determine the set of feasible triples $(m,p,t)$ such that $m\cdot p\cdot t \ne 0$, for which there exists a complex $(K_2,P_3,K_3)$-URD$(v;m,p,t)$. In addition, for $v=6$ and $v=12$ we also list those feasible $(m,p,t)$ in which $m$ or $p$ or $t$ is zero. \begin{theorem} \label{theorem11} The necessary and sufficient conditions for the existence of a complex\/ $(K_2,P_3,K_3)$-URD\/$(v;m,p,t)$ are: \begin{description} \item[$(i)$] $v\ge 12$ and\/ $v$ is a multiple of\/ $6$; \item[$(ii)$] $3m + 4p + 6t = 3v-3$. \end{description} Moreover, the parameters\/ $m,p,t$ are in the following ranges: \begin{description} \item[$(iii)$] $1\le m\le v-7$ and\/ $m$ is odd, \qquad \ $3\le p\le 3\cdot\lfloor \frac v4 -1 \rfloor$, \qquad \ $1\le t\le \lfloor \frac v2 -3 \rfloor$. \end{description} \end{theorem} \begin{proof} We first prove that the conditions are necessary. Divisibility of $v$ by 6 is immediately seen, due to the presence of $K_3$-classes and $1$-factors. We observe further that the number of edges in a parallel class is $v$ for a triangle-class, $2v/3$ for a $P_3$-class and $v/2$ for a matching. Thus, in any $(K_2,P_3,K_3)$-URD$(v;m,p,t)$ we must have $$ \frac{mv}{2} + \frac{2pv}{3} + tv = {v\choose 2} . $$ Dividing it by $v/6$, the assertion of $(ii)$ follows. As $(ii)$ implies, $p$ is a multiple of 3, say $p=3x$. Then we obtain $$ m+4x+2t=v-1$$ and also conclude that $m$ is odd (since $v-1$ is odd). 
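The admissible parameter triples determined by the equation just derived can be enumerated mechanically; a sketch (the function name is ours), checked here against $v=12$:

```python
def complex_triples(v):
    """All (m, p, t) with m, p, t >= 1 and 3m + 4p + 6t = 3(v - 1)."""
    out = []
    for t in range(1, v // 2):          # generous upper bounds suffice
        for p in range(1, v):
            rem = 3 * (v - 1) - 4 * p - 6 * t
            if rem >= 3 and rem % 3 == 0:
                out.append((rem // 3, p, t))
    return sorted(out)

triples = complex_triples(12)
# The ranges of (iii): m odd and at most v-7, p a multiple of 3, t <= v/2 - 3.
assert all(m % 2 == 1 and m <= 12 - 7 for m, p, t in triples)
assert all(p % 3 == 0 for m, p, t in triples)
assert all(t <= 12 // 2 - 3 for m, p, t in triples)
```

For $v=12$ this yields exactly the four triples with $m\cdot p\cdot t\neq 0$ that appear in Proposition 3.3 below.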
Since $m\ge 1$, $t\ge 1$, and $x\ge 1$, this equation yields $$ m \leq v-7 , \qquad x \leq v/4 - 1 , \qquad t \leq v/2 - 3, $$ implying the conditions listed in $(iii)$, and the first one also excludes $v=6$. This completes the proof that the conditions $(i)$--$(iii)$ are necessary. To prove the sufficiency of $(i)$--$(ii)$, we consider $v\ge 18$ first. Since $v$ is a multiple of 6 according to $(i)$, there exists a Nearly Kirkman Triple System of order $v$, which means $m=1$ perfect matching and $t=\frac v2 -1$ parallel classes of triangles. More generally, for every odd $m$ in the range $1\le m\le v-7$, there exists a collection of $m$ perfect matchings and $t=\frac{v-1-m}2$ parallel classes of triangles, which together decompose $K_v$; this was proved in \cite{R}. From such a system, for every $0<x<\frac{v-1-m}{4}$, we can take $2x$ parallel classes of triangles. Applying Theorem \ref{theorem1} \cite{MT}, also proved independently by Wilson (unpublished), we obtain $3x$ parallel classes of paths $P_3$. This gives a complex $(K_2,P_3,K_3)$-URD$(v;m,3x,\frac{v-1-m}{2}-2x)$. For $v=12$, the statement follows by Proposition \ref{lemmaF2} below. \end{proof} \subsection{Small cases} Obviously, the proofs of the necessary conditions that $v$ is a multiple of $6$, and that the equality $3m+4p+6t=3v-3$ must be satisfied by every $(K_2, P_3, K_{3})$-URD$(v;m,p,t)$, do not use the assumption $m\cdot p\cdot t\neq 0$. \begin{proposition} \label{lemmaF1} There exists a\/ $(K_2, P_3, K_{3})$-URD\/$(6;m,p,t)$ if and only if\/ $(m,p,t)\in \{(5,0,0), (3,0,1),(1,3,0)\}$. \end{proposition} \begin{proof} Putting $v=6$, the equation $3m+4p+6t=15$ has exactly four solutions $(m,p,t)$ over the nonnegative integers. The case $(1,0,2)$ would correspond to an NKTS(6) which is known not to exist \cite{R}. The case $(5,0,0)$ corresponds to a $1$-factorization of the complete graph $K_{6}$ which is known to exist \cite{Lu}. 
The case of $(1,3,0)$ is just the same as a $(K_2, P_3)$-URD$(6;1,3)$ that is known to exist \cite{GM2}. To see the existence for $(3,0,1)$, consider $V(K_{6})=\{1,\dots,6\}$ and the following classes: $\{\{1,4\}, \{2,5\}, \{3,6\}\}$, $\{\{1,5\}, \{2,6\}, \{3,4\}\}$, $\{\{1,6\}, \{2,4\}, \{3,5\}\}$, $\{(1,2,3), (4,5,6)\}$. \end{proof} \begin{proposition} \label{lemmaF2} There exists a\/ $(K_2, P_3, K_{3})$-URD\/$(12;m,p,t)$ if and only if\/ $(m,p,t)\in \{(11,0,0), (9,0,1), (7,3,0), (7,0,2), (5,0,3), (5,3,1), (3,6,0), (3,3,2), (3,0,4), (1,6,1), (1,3,3)\}$. \end{proposition} \begin{proof} Checking the nonnegative integer solutions of $3m+4p+6t=33$, the case of $(1,0,5)$ would correspond to an NKTS(12) which is known not to exist \cite{R}. The case of $(11,0,0)$ corresponds to a 1-factorization of the complete graph $K_{12}$ that is known to exist \cite{CD}. The result for the cases $(9,0,1)$, $(7,0,2)$, $(5,0,3)$, $(3,0,4)$ follows by \cite{R}. Applying Theorem \ref{theorem1} to $(7,0,2)$, $(5,0,3)$, $(3,0,4)$, we obtain the existence for $(7,3,0)$, $(5,3,1)$, $(3,3,2)$, and $(3,6,0)$. The existence for the case $(1,3,3)$ is shown by the following construction. Let $V(K_{12})=\mathbb{Z}_{12}$, and consider the following parallel classes: \begin{itemize} \item matching: $\{\{1,6\}, \{2,4\}, \{3,0\},\{5,11\}, \{7,9\}, \{8,10\}\}$; \item paths: $\{\{5,0,11\}, \{8,1,7\}, \{9,2,3\},\{6,4,10\}\}$, $\{\{1,3,8\},\{7,5,10\},\{4,9,0\}, \{2,11,6\}\},\\\{ \{ 0,6,5\},\{2,7,4\},\{9,8,11\},\{3,10,1\}\}$; \item triangles: $\{\{0,1,2\}, \{3,4,5\}, \{6,7,8\},\{9,10,11\} \}$, $\{\{0,4,8\}, \{3,7,11\}, \{2,6,10\},\{1,5,9\} \}$,\\ $\{\{0,7,10\}, \{3,6,9\}, \{2,5,8\}, \{1,4,11\} \}$. \end{itemize} Finally, we apply Theorem \ref{theorem1} to the case $(1,3,3)$ and infer that a $(K_2, P_3, K_{3})$-URD$(12;1,6,1)$ exists, too. 
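The explicit classes listed above can be verified mechanically. The following sketch (data transcribed from the construction, naming ours) checks that the seven classes are spanning with vertex-disjoint blocks and that their 66 edges partition $E(K_{12})$.

```python
from itertools import combinations

# The classes of the (K2, P3, K3)-URD(12; 1, 3, 3) constructed above.
matching = [(1, 6), (2, 4), (3, 0), (5, 11), (7, 9), (8, 10)]
path_classes = [   # a path (a, b, c) contributes the edges {a,b} and {b,c}
    [(5, 0, 11), (8, 1, 7), (9, 2, 3), (6, 4, 10)],
    [(1, 3, 8), (7, 5, 10), (4, 9, 0), (2, 11, 6)],
    [(0, 6, 5), (2, 7, 4), (9, 8, 11), (3, 10, 1)],
]
triangle_classes = [
    [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, 10, 11)],
    [(0, 4, 8), (3, 7, 11), (2, 6, 10), (1, 5, 9)],
    [(0, 7, 10), (3, 6, 9), (2, 5, 8), (1, 4, 11)],
]

edges = [frozenset(e) for e in matching]
for cls in path_classes:
    for a, b, c in cls:
        edges += [frozenset((a, b)), frozenset((b, c))]
for cls in triangle_classes:
    for tri in cls:
        edges += [frozenset(p) for p in combinations(tri, 2)]

# 6 + 3*8 + 3*12 = 66 pairwise distinct edges = E(K_12)
assert len(edges) == len(set(edges)) == 66
assert set(edges) == {frozenset(p) for p in combinations(range(12), 2)}

# every class spans all 12 vertices with vertex-disjoint blocks
for cls in [matching] + path_classes + triangle_classes:
    assert sorted(v for blk in cls for v in blk) == list(range(12))
```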
\end{proof} \section{The spectrum for $\cH=\{K_2,P_4,C_4\}$} \label{sec:MPC} In this section we consider complex uniformly resolvable decompositions of the complete graph $K_v$ into $m$ parallel $1$-factors, $p$ parallel classes of $4$-paths, and $c$ parallel classes of $4$-cycles. The current problem is to determine the set of feasible triples $(m,p,c)$ such that $m\cdot p\cdot c \ne 0$, for which there exists a complex $(K_2,P_4,C_4)$-URD$(v;m,p,c)$. The case of $\{P_4, C_4\}$, that is $m=0$, will be discussed in Section \ref{sec:PC}. \begin{theorem} \label{theorem12} The necessary and sufficient conditions for the existence of a complex\/ $(K_2,P_4,C_4)$-URD\/$(v;m,p,c)$ are: \begin{description} \item[$(i)$] $v\ge 8$ and\/ $v$ is a multiple of\/ $4$; \item[$(ii)$] $2m + 3p + 4c = 2v-2$. \end{description} Moreover, the parameters\/ $m,p,c$ are in the following ranges: \begin{description} \item[$(iii)$] $1\le c\le \frac v2 - 3$, \qquad \ $1\le m\le v-6$, \qquad \ $2\le p\le 2\, \lfloor \frac {v-4}3 \rfloor$; \item[$(iv)$] $p$ is even; and if\/ $p\equiv 2 ~ (\mbox{\rm mod } 4)$, then also\/ $m$ is even. \end{description} \end{theorem} \begin{proof} We first show that the conditions are necessary. Since $K_v$ has a $C_4$-factor, $v$ must be a multiple of~$4$. Further, as a $C_4$-, $K_2$-, and $P_4$-factor respectively cover exactly $v$, $v/2$, and $3v/4$ edges, in a $(K_2,P_4,C_4)$-URD$(v;m,p,c)$ we have $$ \frac{mv}2 + \frac {3pv}4 + {cv} = {v\choose 2} . $$ This equality directly implies $(ii)$ and we may also conclude that $p$ is even and, further, if $p\equiv 2 ~ (\mbox{\rm mod } 4)$, then $m$ must be even as well. Putting $p=2x$ we obtain $$m+3x+2c=v-1.$$ By our condition, all the three types of parallel classes are present in the decomposition, i.e.\ we have $c\ge 1$, $m \ge 1$, and $x \ge 1$. These, together with the equality above, imply the necessity of $(iii)$. Next we prove the sufficiency of $(i)$--$(ii)$. 
We first take a 1-factorization of $K_{v/2}$ into $v/2-1$ perfect matchings, which exists because $v$ is a multiple of 4. Now, replace each vertex of $K_{v/2}$ with two non-adjacent vertices. This blow-up results in $v/2-1$ parallel classes of $C_4$ inside $K_v$, and the missing edges can be taken as a perfect matching. Let $C$ be the set of the parallel classes of $C_4$ and $x$ be a positive integer such that $0<x\leq \lfloor\frac{v-4}{3}\rfloor$. The construction splits into two cases depending on the parity of $x$. If $x$ is even, take $\frac{3x}{2}$ parallel classes from $C$. Applying Theorem \ref{theorem4}, we transform the $\frac{3x}{2}$ parallel classes of $C_4$ into $2x=p$ parallel classes of paths $P_4$. For any given $y=c$ in the range $0<y \leq \lfloor\frac{v-2}{2}-\frac{3x}{2}\rfloor$, keep $y$ classes of $C_4$ and transform the remaining $\frac{v-2}{2}-\frac{3x}{2}-y$ classes of $C_4$ into $2(\frac{v-2}{2}-\frac{3x}{2}-y)=m-1$ classes of 1-factors. In this way we obtain a complex $(K_2,P_4,C_4)$-URD$(v; v-3x-2y-1,2x,y)$. If $x$ is odd, take $\frac{3(x-1)}{2}+2$ parallel classes from $C$. By Theorems~\ref{theorem2} and \ref{theorem4}, we can transform the $\frac{3(x-1)}{2}+2$ parallel classes of $C_4$ into $2x=p$ parallel classes of paths $P_4$ and a 1-factor. For any given $y=c$ in the range $0<y \leq \frac{v-2}{2}-\frac{3(x-1)}{2}-2$, keep $y$ classes of $C_4$ and transform the remaining $\frac{v-2}{2}-\frac{3(x-1)}{2}-2-y$ classes of $C_4$ into $2(\frac{v-2}{2}-\frac{3(x-1)}{2}-2-y)=m-2$ classes of 1-factors. In this way, we obtain a complex $(K_2,P_4,C_4)$-URD$(v; v-3x-2y-1,2x,y)$. The result, for every $v\equiv 0\pmod{4}$, $0< x\leq \lfloor\frac{v-4}{3}\rfloor$, and $0< y\leq \lfloor\frac{v-2}{2}-\frac{3x}{2}\rfloor$, is a uniformly resolvable decomposition of $K_v$ into $v-1-3x-2y=m$ classes containing only copies of 1-factors, $2x=p$ classes containing only copies of paths $P_4$, and $y=c$ classes containing only copies of 4-cycles $C_4$. 
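The initial blow-up step of this construction can be sketched as follows (naming ours; the $1$-factorization of $K_{v/2}$ is produced by the standard circle method, which we assume here for concreteness):

```python
from itertools import combinations

def round_robin_one_factorization(n):
    """Circle-method 1-factorization of K_n (n even): fix 0, rotate the rest."""
    assert n % 2 == 0
    classes = []
    others = list(range(1, n))
    for r in range(n - 1):
        rot = others[r:] + others[:r]
        cls = [(0, rot[0])]
        cls += [(rot[1 + i], rot[n - 2 - i]) for i in range((n - 2) // 2)]
        classes.append(cls)
    return classes

def blow_up(v):
    """Replace vertex u of K_{v/2} by the pair {2u, 2u+1} of K_v."""
    c4_classes = []
    for cls in round_robin_one_factorization(v // 2):
        # the matching edge (a, b) becomes the 4-cycle 2a, 2b, 2a+1, 2b+1
        c4_classes.append([(2 * a, 2 * b, 2 * a + 1, 2 * b + 1) for a, b in cls])
    leftover = [(2 * u, 2 * u + 1) for u in range(v // 2)]  # within-pair edges
    return c4_classes, leftover

c4_classes, leftover = blow_up(8)
assert len(c4_classes) == 3 and len(leftover) == 4

# the v/2 - 1 C_4-classes plus the leftover matching cover E(K_v) exactly once
all_edges = {frozenset(e) for e in leftover}
for cls in c4_classes:
    for cyc in cls:
        all_edges |= {frozenset((cyc[i], cyc[(i + 1) % 4])) for i in range(4)}
assert all_edges == {frozenset(p) for p in combinations(range(8), 2)}
```

For $v=8$ this produces the $v/2-1=3$ parallel classes of $C_4$ and the leftover perfect matching used as the starting point of the proof.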
This finishes the proof of the theorem. \end{proof} \section{The spectrum for $\cH=\{P_4,C_4\}$} \label{sec:PC} Finally, we consider complex uniformly resolvable decompositions of the complete graph $K_v$ into $p$ classes containing only copies of paths $P_4$ and $c$ classes containing only copies of 4-cycles $C_4$. \begin{theorem} \label{theorem13} The necessary and sufficient conditions for the existence of a complex\/ $(P_4,C_4)$-URD\/$(v;p,c)$ are: \begin{description} \item[$(i)$] $v\ge 8$ and\/ $v$ is a multiple of\/ $4$; \item[$(ii)$] $3p+4c = 2v-2$. \end{description} \end{theorem} \begin{proof} Necessity is a consequence of Theorem \ref{theorem12}, since we did not need to assume $m>0$ in that part of its proof. Turning to sufficiency, the condition $3p+4c=2v-2$ implies that $3p\equiv 2v-2\pmod{4}$. This gives $ p=2+4x$ and $c=\frac{v-4}{2}-3x$. For a construction, we start with a decomposition of $K_v$ into a perfect matching $F$ and $\frac{v-2}{2}$ parallel classes of $C_4$ as in the proof of Theorem \ref{theorem12}. By Theorem \ref{theorem3}, we can transform one class of $C_4$ and $F$ into two classes of paths $P_4$. Then, by Theorem \ref{theorem4}, we transform $3x$ parallel classes of $C_4$ into $4x$ parallel classes of $P_4$. The result, for every $x$ such that $0\leq x\leq \lfloor\frac{v-6}{6}\rfloor$, is a uniformly resolvable decomposition of $K_v$ into $2+4x$ classes containing only copies of paths $P_4$ and $\frac{v-4}{2}-3x$ classes containing only copies of 4-cycles $C_4$. This completes the proof. \end{proof} \section{Conclusion} Combining Theorems \ref{theorem11}, \ref{theorem12}, and \ref{theorem13}, we obtain the main result of this paper. \begin{theorem} \label{theorem14} ~~~ \begin{description} \item[$(i)$] A complex\/ $(K_2, P_3, K_3)$-URD\/$(v; m,p,t)$ exists if and only if\/ $v\ge 12$,\/ $v$ is a multiple of\/ $6$, and\/ $3m + 4p + 6t = 3v-3$. 
\item[$(ii)$] A complex\/ $(K_2, P_4, C_4)$-URD\/$(v; m,p,t)$ exists if and only if\/ $v\geq 8$,\/ $v$ is a multiple of\/ $4$, and\/ $2m + 3p + 4t = 2v-2$. \item[$(iii)$] A complex\/ $(P_4, C_4)$-URD\/$(v; p,t)$ exists if and only if\/ $v\geq 8$,\/ $v$ is a multiple of\/ $4$, and\/ $3p + 4t = 2v-2$. \end{description} \end{theorem} Concerning the local metamorphosis studied in Section~\ref{sec:2}, we pose the following conjecture as a common generalization of Theorems~\ref{theorem1} and \ref{theorem4}. \begin{conjecture} \label{conjecture} The union of\/ $k-1$ parallel classes of\/ $C_k$ is decomposable into\/ $k$ parallel classes of\/ $P_k$. \end{conjecture} \end{document}
# Creating and calling SQL functions in PostgreSQL

To create a SQL function in PostgreSQL, you use the `CREATE FUNCTION` statement. This statement defines the function's name, input parameters, return type, and the body of the function. The body can be a single SQL expression or, as in the example below, a PL/pgSQL block. Here's an example of creating a simple function that adds two numbers:

```sql
CREATE FUNCTION add_numbers(a integer, b integer) RETURNS integer AS $$
BEGIN
    RETURN a + b;
END;
$$ LANGUAGE plpgsql;
```

To call this function, you use the `SELECT` statement:

```sql
SELECT add_numbers(5, 10);
```

This will return the result `15`.

## Exercise

Create a SQL function that calculates the square of a given number. Test the function by calling it with different input values.

In addition to the basic syntax, there are several other features and options you can use when creating SQL functions in PostgreSQL. For example, you can specify default values for input parameters, use the `OUT` keyword to declare output parameters, and handle exceptions in PL/pgSQL using the `RAISE EXCEPTION` statement.

# Data processing with PL/Python

To use PL/Python in PostgreSQL, you first need to install the PL/Python extension. After that, you can create a function using the `LANGUAGE plpythonu` or `LANGUAGE plpython3u` keyword. Here's an example of a simple PL/Python function that adds two numbers:

```sql
CREATE FUNCTION add_numbers_plpython(a integer, b integer) RETURNS integer AS $$
return a + b
$$ LANGUAGE plpythonu;
```

To call this function, you use the `SELECT` statement:

```sql
SELECT add_numbers_plpython(5, 10);
```

This will return the result `15`.

## Exercise

Create a PL/Python function that calculates the square of a given number. Test the function by calling it with different input values.

# Advanced data manipulation techniques

One common technique is to use the `WITH` clause to create temporary tables or views that can be used in subsequent queries.
For example, you can create a temporary result set with a subset of data:

```sql
WITH filtered_data AS (
    SELECT * FROM data WHERE condition
)
SELECT * FROM filtered_data;
```

Another technique is to use the `LATERAL` keyword to join a table with a function that returns a set of rows, which allows you to perform complex calculations on each row in the table. For example:

```sql
SELECT d.a, d.b, f.result
FROM data d,
     LATERAL my_function(d.a, d.b) AS f(result);
```

## Exercise

Create a PL/Python function that calculates the average of a group of numbers. Use this function in a query to calculate the average of each group in a table.

# Working with complex data structures

PostgreSQL provides array variants of its data types, such as `INT[]` and `TEXT[]`. You can declare an array column in a table by appending `[]` to the element type:

```sql
CREATE TABLE data (
    id SERIAL PRIMARY KEY,
    numbers INT[]
);
```

You can also use the `json` and `jsonb` data types to store JSON data. These data types provide powerful functions and operators for querying and manipulating JSON data. For example:

```sql
SELECT json_data->>'field_name' FROM data;
```

## Exercise

Create a table with a column of type `jsonb` and insert some sample data. Write a PL/Python function that extracts a specific field from the JSON data and returns it. Test the function by calling it with different input values.
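As a starting point for the exercise, the extraction performed by `json_data->>'field_name'` can be mirrored in plain Python with the standard `json` module — roughly what a PL/Python function body would do with a JSON value passed as text (the sample payload below is made up):

```python
import json

json_data = '{"field_name": "value", "count": 3}'  # hypothetical jsonb payload, as text

# Equivalent of SELECT json_data->>'field_name': parse the text, then index the field.
parsed = json.loads(json_data)
field = parsed["field_name"]
print(field)  # value
```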
# Using PL/Python for data analysis and visualization

To use PL/Python for data analysis, you can use the `pandas` library, which provides powerful data manipulation and analysis functions. To use PL/Python for visualization, you can use the `matplotlib` library, which provides a wide range of plotting functions. Both libraries must be installed in the Python environment used by the PostgreSQL server.

Here's an example of a PL/Python function that calculates the mean and standard deviation of a group of numbers:

```sql
CREATE FUNCTION calculate_stats_plpython(numbers NUMERIC[]) RETURNS TABLE(mean NUMERIC, std_dev NUMERIC) AS $$
import pandas as pd
data = pd.Series(numbers)
mean = data.mean()
std_dev = data.std()
return [(mean, std_dev)]
$$ LANGUAGE plpythonu;
```

You can call this function in a query:

```sql
SELECT * FROM calculate_stats_plpython(ARRAY[1, 2, 3, 4, 5]);
```

## Exercise

Create a PL/Python function that calculates the correlation between two columns in a table. Use this function in a query to calculate the correlation between two columns in a table.

# Integrating with external data sources

To integrate with external data sources, you can use the `requests` library for making HTTP requests, or the `psycopg2` library for accessing other PostgreSQL databases. For example, you can create a PL/Python function that fetches data from an API:

```sql
CREATE FUNCTION fetch_data_plpython(url TEXT) RETURNS TABLE(data TEXT) AS $$
import requests
response = requests.get(url)
return [(response.text,)]
$$ LANGUAGE plpythonu;
```

You can call this function in a query:

```sql
SELECT * FROM fetch_data_plpython('https://api.example.com/data');
```

## Exercise

Create a PL/Python function that reads data from a CSV file and returns it as a table. Use this function in a query to load data from a CSV file into a table.

# Performance optimization for PL/Python functions

To optimize PL/Python functions, you can use techniques such as:

- Using `plpy.prepare()` and `plpy.execute()` to prepare a query plan once and reuse it, which avoids re-parsing the SQL on every call.
- Pushing element-wise work into SQL itself (for example, with built-in functions such as `GREATEST` and `LEAST`) instead of looping over values in Python.
- Batching work into a single function call where possible, since each call into PL/Python carries fixed overhead.
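When optimizing, it helps to verify a function's results outside the database first. For instance, the statistics returned by `calculate_stats_plpython` above can be reproduced with Python's standard library (`statistics.stdev`, like pandas' `.std()`, computes the sample standard deviation):

```python
import statistics

def calculate_stats(numbers):
    """Plain-Python mirror of calculate_stats_plpython: (mean, sample std dev)."""
    return statistics.mean(numbers), statistics.stdev(numbers)

mean, std_dev = calculate_stats([1, 2, 3, 4, 5])
print(mean, std_dev)  # mean is 3; std_dev is sqrt(2.5), about 1.5811
```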
## Exercise

Create a PL/Python function that calculates the sum of all numbers in a table. Optimize this function for better performance by using the techniques mentioned above. Test the function by calling it with a large dataset.

# Advanced error handling and debugging

To signal errors from PL/Python functions, you raise ordinary Python exceptions (or call `plpy.error()`), which PostgreSQL reports to the caller as errors. For example:

```sql
CREATE FUNCTION divide_plpython(a NUMERIC, b NUMERIC) RETURNS NUMERIC AS $$
if b == 0:
    raise Exception('Division by zero')
return a / b
$$ LANGUAGE plpythonu;
```

To debug your functions, you can use `plpy.notice()` or `plpy.warning()` to emit debug information, such as variable values or error messages. For example:

```sql
CREATE FUNCTION debug_plpython(a NUMERIC, b NUMERIC) RETURNS NUMERIC AS $$
try:
    result = a / b
except ZeroDivisionError:
    raise Exception('Division by zero')
return result
$$ LANGUAGE plpythonu;
```

## Exercise

Create a PL/Python function that calculates the factorial of a number. Add error handling and debugging statements to the function to handle potential errors and debug it effectively.

# Real-world use cases and case studies

## Exercise

Discuss a real-world use case where PL/Python was used to solve a complex data processing problem in PostgreSQL. Describe the problem, the solution implemented using PL/Python, and the benefits of using PL/Python in this case.

# Conclusion and future developments

In conclusion, PL/Python is a powerful tool for extending the capabilities of PostgreSQL and enabling more complex and flexible data processing tasks. As PostgreSQL continues to evolve, we can expect to see further advancements in the integration of PL/Python and other programming languages, as well as new features and capabilities for data processing.

## Exercise

Summarize the key takeaways from this textbook and discuss how they can be applied in real-world scenarios to leverage PL/Python in PostgreSQL for advanced data processing.
The dartboard below has a radius of 6 inches. Each of the concentric circles has a radius two inches less than the next larger circle. If nine darts land randomly on the target, how many darts would we expect to land in a non-shaded region?

[asy]
import graph;
fill(Circle((0,0),15),gray(0.7));
fill(Circle((0,0),10),white);
draw(Circle((0,0),20));
draw(Circle((0,0),15));
draw(Circle((0,0),10));
[/asy]

The probability for a single dart to land in the non-shaded region is the ratio of the area of the non-shaded region to the area of the entire dartboard. The area of the entire dartboard is $\pi \cdot 6^2 = 36\pi$. The area of the shaded region is the area of the second largest circle minus the area of the smallest circle, or $\pi \cdot 4^2 - \pi \cdot 2^2 = 12 \pi$, so the area of the non-shaded region is $36\pi - 12\pi = 24\pi$. Thus, our ratio is $\frac{24\pi}{36\pi}=\frac{2}{3}$. If each dart has a $\frac{2}{3}$ chance of landing in a non-shaded region and there are 9 darts, then the expected number of darts that land in a non-shaded region is $9 \cdot \frac{2}{3} = \boxed{6}$.
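As a check of the arithmetic: the common factor of $\pi$ cancels from the ratio, so the whole computation can be done with exact fractions of the squared radii:

```python
from fractions import Fraction

board = Fraction(6) ** 2                       # 36 (all areas divided by pi)
shaded = Fraction(4) ** 2 - Fraction(2) ** 2   # 16 - 4 = 12
p_non_shaded = (board - shaded) / board        # 24/36 = 2/3
expected = 9 * p_non_shaded                    # expected darts in non-shaded region
print(p_non_shaded, expected)                  # 2/3 6
```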
\begin{document}
\title{Firefly Algorithms for Multimodal Optimization}
\author{Xin-She Yang \\ Department of Engineering, University of Cambridge, \\ Trumpington Street, Cambridge CB2 1PZ, UK }
\date{}
\maketitle

\abstract{ Nature-inspired algorithms are among the most powerful algorithms for optimization. This paper intends to provide a detailed description of a new Firefly Algorithm (FA) for multimodal optimization applications. We will compare the proposed firefly algorithm with other metaheuristic algorithms such as particle swarm optimization (PSO). Simulations and results indicate that the proposed firefly algorithm is superior to existing metaheuristic algorithms. Finally we will discuss its applications and implications for further research. \\ }

\noindent {\bf Citation detail:} X.-S. Yang, ``Firefly algorithms for multimodal optimization", in: {\it Stochastic Algorithms: Foundations and Applications}, SAGA 2009, Lecture Notes in Computer Sciences, Vol. {\bf 5792}, pp. 169-178 (2009).

\section{Introduction}

Biologically inspired algorithms are becoming powerful in modern numerical optimization \cite{Bod,Deb,Gold,Ken2,Yang,Yang2}, especially for NP-hard problems such as the travelling salesman problem. Among these biology-derived algorithms, multi-agent metaheuristic algorithms such as particle swarm optimization form hot research topics in state-of-the-art algorithm development in optimization and other applications \cite{Bod,Deb,Yang}.

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 \cite{Ken}, based on swarm behaviour such as fish and bird schooling in nature, the so-called swarm intelligence. Though particle swarm optimization has many similarities with genetic algorithms, it is much simpler because it does not use mutation/crossover operators. Instead, it uses real-number randomness and global communication among the swarming particles.
In this sense, it is also easier to implement as it uses mainly real numbers. This paper aims to introduce the new Firefly Algorithm and to provide a comparative study of the FA with PSO and other relevant algorithms. We will first outline the particle swarm optimization, then formulate the firefly algorithms, and finally give a comparison of the performance of these algorithms. The FA optimization seems more promising than particle swarm optimization in the sense that FA can deal with multimodal functions more naturally and efficiently. In addition, particle swarm optimization is just a special class of the firefly algorithms, as we will demonstrate in this paper.

\section{Particle Swarm Optimization}

\subsection{Standard PSO}

The PSO algorithm searches the space of the objective functions by adjusting the trajectories of individual agents, called particles, as the piecewise paths formed by positional vectors in a quasi-stochastic manner \cite{Ken,Ken2}. There are now as many as about 20 different variants of PSO. Here we only describe the simplest and yet popular standard PSO.

The particle movement has two major components: a stochastic component and a deterministic component. A particle is attracted toward the position of the current global best $\ff{g}^*$ and its own best location $\ff{x}_i^*$ in history, while at the same time it has a tendency to move randomly. When a particle finds a location that is better than any previously found location, it updates that location as the new current best for particle $i$. There is a current global best for all $n$ particles. The aim is to find the global best among all the current best solutions until the objective no longer improves or after a certain number of iterations. For the particle movement, we use $\ff{x}^*_i$ to denote the current best for particle $i$, and $\ff{g}^* \approx\min$ or $\max \{f(\ff{x}_i)\} (i=1,2,...,n)$ to denote the current global best.
Let $\ff{x}_i$ and $\ff{v}_i$ be the position vector and velocity for particle $i$, respectively. The new velocity vector is determined by the following formula
\begin{equation}
\ff{v}_i^{t+1}= \ff{v}_i^t + \alpha \ff{\epsilon}_1 \odot (\ff{g}^*-\ff{x}_i^t) + \beta \ff{\epsilon}_2 \odot (\ff{x}_i^*-\ff{x}_i^t), \label{pso-speed-100}
\end{equation}
where $\ff{\epsilon}_1$ and $\ff{\epsilon}_2$ are two random vectors, with each entry taking values between 0 and 1. The Hadamard product of two matrices $\ff{u \odot v}$ is defined as the entrywise product, that is $[\ff{u \odot v}]_{ij}=u_{ij} v_{ij}$. The parameters $\alpha$ and $\beta$ are the learning parameters or acceleration constants, which can typically be taken as, say, $\alpha \approx \beta \approx 2$. The initial values of $\ff{x}_i^{t=0}$ can be taken as the bounds or limits $a=\min (x_j)$, $b=\max(x_j)$ and $\ff{v}_i^{t=0}\!\!=0$. The new position can then be updated by
\begin{equation}
\ff{x}_i^{t+1}=\ff{x}_i^t+\ff{v}_i^{t+1}.
\end{equation}
Although $\ff{v}_i$ can take any values, it is usually bounded in some range $[0, \ff{v}_{max}]$.

There are many variants which extend the standard PSO algorithm, and the most noticeable improvement is probably to use an inertia function $\theta (t)$ so that $\ff{v}_i^t$ is replaced by $\theta(t) \ff{v}_i^t$, where $\theta$ takes values between 0 and 1. In the simplest case, the inertia function can be taken as a constant, typically $\theta \approx 0.5 \sim 0.9$. This is equivalent to introducing a virtual mass to stabilize the motion of the particles, and thus the algorithm is expected to converge more quickly.

\section{Firefly Algorithm}

\subsection{Behaviour of Fireflies}

The flashing light of fireflies is an amazing sight in the summer sky in the tropical and temperate regions. There are about two thousand firefly species, and most fireflies produce short and rhythmic flashes. The pattern of flashes is often unique for a particular species.
The flashing light is produced by a process of bioluminescence, and the true functions of such signaling systems are still being debated. However, two fundamental functions of such flashes are to attract mating partners (communication), and to attract potential prey. In addition, flashing may also serve as a protective warning mechanism. The rhythmic flash, the rate of flashing and the amount of time form part of the signal system that brings both sexes together. Females respond to a male's unique pattern of flashing in the same species, while in some species such as {\it Photuris}, female fireflies can mimic the mating flashing pattern of other species so as to lure and eat the male fireflies who may mistake the flashes for those of a potential suitable mate.

We know that the light intensity at a particular distance $r$ from the light source obeys the inverse square law. That is to say, the light intensity $I$ decreases as the distance $r$ increases in terms of $I \propto 1/r^2$. Furthermore, the air absorbs light, which becomes weaker and weaker as the distance increases. These two combined factors make most fireflies visible only to a limited distance, usually several hundred meters at night, which is usually good enough for fireflies to communicate.

The flashing light can be formulated in such a way that it is associated with the objective function to be optimized, which makes it possible to formulate new optimization algorithms. In the rest of this paper, we will first outline the basic formulation of the Firefly Algorithm (FA) and then discuss the implementation as well as its analysis in detail.

\subsection{Firefly Algorithm}

Now we can idealize some of the flashing characteristics of fireflies so as to develop firefly-inspired algorithms.
For simplicity in describing our new Firefly Algorithm (FA), we now use the following three idealized rules: 1) all fireflies are unisex, so that one firefly will be attracted to other fireflies regardless of their sex; 2) attractiveness is proportional to their brightness, thus for any two flashing fireflies, the less bright one will move towards the brighter one. The attractiveness is proportional to the brightness and they both decrease as their distance increases. If there is no firefly brighter than a particular firefly, it will move randomly; 3) the brightness of a firefly is affected or determined by the landscape of the objective function. For a maximization problem, the brightness can simply be proportional to the value of the objective function. Other forms of brightness can be defined in a similar way to the fitness function in genetic algorithms.

Based on these three rules, the basic steps of the firefly algorithm (FA) can be summarized as the pseudo code shown in Fig. \ref{fa-fig-100}.
\vcode{0.9}{{\sf Firefly Algorithm}} {
\indent \quad Objective function $f(\ff{x}), \qquad \ff{x}=(x_1, ..., x_d)^T$ \\
\indent \quad Generate initial population of fireflies $\ff{x}_i \; (i=1,2,...,n)$ \\
\indent \quad Light intensity $I_i$ at $\ff{x}_i$ is determined by $f(\ff{x}_i)$ \\
\indent \quad Define light absorption coefficient $\gamma$ \\
\indent \quad {\bf while} ($t<$MaxGeneration) \\
\indent \quad {\bf for} $i=1:n$ all $n$ fireflies \\
\indent \qquad {\bf for} $j=1:i$ all $n$ fireflies \\
\indent \qquad \qquad {\bf if} ($I_j>I_i$), Move firefly $i$ towards $j$ in d-dimension; {\bf end if} \\
\indent \qquad \qquad Attractiveness varies with distance $r$ via $\exp[-\gamma r]$ \\
\indent \qquad \qquad Evaluate new solutions and update light intensity \\
\indent \qquad {\bf end for }$j$ \\
\indent \quad {\bf end for }$i$ \\
\indent \quad Rank the fireflies and find the current best \\
\indent \quad {\bf end while} \\
\indent \quad Postprocess results and visualization }{Pseudo code of the firefly algorithm (FA). \label{fa-fig-100} }

In a certain sense, there is some conceptual similarity between the firefly algorithms and the bacterial foraging algorithm (BFA) \cite{Gazi,Passino}. In BFA, the attraction among bacteria is based partly on their fitness and partly on their distance, while in FA, the attractiveness is linked to their objective function and the monotonic decay of the attractiveness with distance. However, the agents in FA have adjustable visibility and are more versatile in attractiveness variations, which usually leads to higher mobility, and thus the search space is explored more efficiently.

\subsection{Attractiveness}

In the firefly algorithm, there are two important issues: the variation of light intensity and the formulation of the attractiveness. For simplicity, we can always assume that the attractiveness of a firefly is determined by its brightness, which in turn is associated with the encoded objective function.
In the simplest case for maximization problems, the brightness $I$ of a firefly at a particular location $\ff{x}$ can be chosen as $I(\ff{x}) \propto f(\ff{x})$. However, the attractiveness $\beta$ is relative; it should be seen in the eyes of the beholder or judged by the other fireflies. Thus, it will vary with the distance $r_{ij}$ between firefly $i$ and firefly $j$. In addition, light intensity decreases with the distance from its source, and light is also absorbed in the media, so we should allow the attractiveness to vary with the degree of absorption.

In the simplest form, the light intensity $I(r)$ varies according to the inverse square law $I(r)=I_s/r^2$ where $I_s$ is the intensity at the source. For a given medium with a fixed light absorption coefficient $\gamma$, the light intensity $I$ varies with the distance $r$. That is $ I=I_0 e^{-\gamma r}, $ where $I_0$ is the original light intensity. In order to avoid the singularity at $r=0$ in the expression $I_s/r^2$, the combined effect of both the inverse square law and absorption can be approximated using the following Gaussian form
\begin{equation}
I(r)=I_0 e^{-\gamma r^2}.
\end{equation}
Sometimes, we may need a function which decreases monotonically at a slower rate. In this case, we can use the following approximation
\begin{equation}
I(r)=\kk{I_0}{1+\gamma r^2}.
\end{equation}
At a shorter distance, the above two forms are essentially the same. This is because the series expansions about $r=0$
\begin{equation}
e^{-\gamma r^2} \approx 1- \gamma r^2 + \kk{1}{2} \gamma^2 r^4 + ..., \qquad \kk{1}{1+\gamma r^2} \approx 1-\gamma r^2 + \gamma^2 r^4 + ...,
\end{equation}
are equivalent to each other up to the order of $O(r^3)$.
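This near-equality for small $r$ is easy to confirm numerically; a quick check (with $\gamma=1$ and $I_0=1$, our choices) that the Gaussian and rational profiles differ only at order $r^4$:

```python
import math

gamma = 1.0
for r in (0.0, 0.05, 0.1, 0.2):
    gauss = math.exp(-gamma * r * r)        # I_0 * exp(-gamma r^2)
    rational = 1.0 / (1.0 + gamma * r * r)  # I_0 / (1 + gamma r^2)
    # Both expand as 1 - gamma r^2 + O(r^4), so the gap shrinks like r^4.
    assert abs(gauss - rational) <= r ** 4 + 1e-12
```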
As a firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, we can now define the attractiveness $\beta$ of a firefly by
\begin{equation}
\beta (r) = \beta_0 e^{-\gamma r^2}, \label{att-equ-100}
\end{equation}
where $\beta_0$ is the attractiveness at $r=0$. As it is often faster to calculate $1/(1+r^2)$ than an exponential function, the above function, if necessary, can conveniently be replaced by $ \beta = \kk{\beta_0}{1+\gamma r^2}$. Equation (\ref{att-equ-100}) defines a characteristic distance $\Gamma=1/\sqrt{\gamma}$ over which the attractiveness changes significantly from $\beta_0$ to $\beta_0 e^{-1}$.

In the implementation, the actual form of the attractiveness function $\beta(r)$ can be any monotonically decreasing function such as the following generalized form
\begin{equation}
\beta(r) =\beta_0 e^{-\gamma r^m}, \qquad (m \ge 1).
\end{equation}
For a fixed $\gamma$, the characteristic length becomes $\Gamma=\gamma^{-1/m} \rightarrow 1$ as $m \rightarrow \infty$. Conversely, for a given length scale $\Gamma$ in an optimization problem, the parameter $\gamma$ can be used as a typical initial value. That is $\gamma =\kk{1}{\Gamma^m}$.

\subsection{Distance and Movement}

The distance between any two fireflies $i$ and $j$ at $\ff{x}_i$ and $\ff{x}_j$, respectively, is the Cartesian distance
\begin{equation}
r_{ij}=||\ff{x}_i-\ff{x}_j|| =\sqrt{\sum_{k=1}^d (x_{i,k} - x_{j,k})^2},
\end{equation}
where $x_{i,k}$ is the $k$th component of the spatial coordinate $\ff{x}_i$ of the $i$th firefly. In the 2-D case, we have $r_{ij}=\sqrt{(x_i-x_j)^2+(y_i-y_j)^2}.$

The movement of a firefly $i$ that is attracted to another more attractive (brighter) firefly $j$ is determined by
\begin{equation}
\ff{x}_i =\ff{x}_i + \beta_0 e^{-\gamma r^2_{ij}} (\ff{x}_j-\ff{x}_i) + \alpha \; ({\rm rand}-\kk{1}{2}),
\end{equation}
where the second term is due to the attraction while the third term is randomization, with $\alpha$ being the randomization parameter.
${\sf rand}$ is a random number generator uniformly distributed in $[0,1]$. For most cases in our implementation, we can take $\beta_0=1$ and $\alpha \in [0,1]$. Furthermore, the randomization term can easily be extended to a normal distribution $N(0,1)$ or other distributions. In addition, if the scales vary significantly in different dimensions such as $-10^5$ to $10^5$ in one dimension while, say, $-0.001$ to $0.01$ along the other, it is a good idea to replace $\alpha$ by $\alpha S_k$ where the scaling parameters $S_k (k=1,...,d)$ in the $d$ dimensions should be determined by the actual scales of the problem of interest. The parameter $\gamma$ now characterizes the variation of the attractiveness, and its value is crucially important in determining the speed of the convergence and how the FA algorithm behaves. In theory, $\gamma \in [0,\infty)$, but in practice, $\gamma=O(1)$ is determined by the characteristic length $\Gamma$ of the system to be optimized. Thus, in most applications, it typically varies from $0.01$ to $100$. \subsection{Scaling and Asymptotic Cases} It is worth pointing out that the distance $r$ defined above is {\it not} limited to the Euclidean distance. We can define many other forms of distance $r$ in the $n$-dimensional hyperspace, depending on the type of problem of our interest. For example, for job scheduling problems, $r$ can be defined as the time lag or time interval. For complicated networks such as the Internet and social networks, the distance $r$ can be defined as the combination of the degree of local clustering and the average proximity of vertices. In fact, any measure that can effectively characterize the quantities of interest in the optimization problem can be used as the `distance' $r$. The typical scale $\Gamma$ should be associated with the scale in the optimization problem of interest. 
If $\Gamma$ is the typical scale for a given optimization problem, then for a very large number of fireflies $n \gg m$, where $m$ is the number of local optima, the initial locations of these $n$ fireflies should be distributed relatively uniformly over the entire search space, in a similar manner as the initialization of quasi-Monte Carlo simulations. As the iterations proceed, the fireflies converge into all the local optima (including the global ones) in a stochastic manner. By comparing the best solutions among all these optima, the global optima can easily be achieved. At the moment, we are trying to formally prove that the firefly algorithm will approach global optima when $n \rightarrow \infty$ and $t \gg 1$. In reality, it converges very quickly, typically within 50 to 100 generations, and this will be demonstrated using various standard test functions later in this paper.

There are two important limiting cases when $\gamma \rightarrow 0$ and $\gamma \rightarrow \infty$. For $\gamma \rightarrow 0$, the attractiveness is constant $\beta=\beta_0$ and $\Gamma \rightarrow \infty$; this is equivalent to saying that the light intensity does not decrease in an idealized sky. Thus, a flashing firefly can be seen anywhere in the domain, and a single (usually global) optimum can easily be reached. This corresponds to a special case of particle swarm optimization (PSO) discussed earlier, and consequently the efficiency of this special case is the same as that of PSO.

On the other hand, the limiting case $\gamma \rightarrow \infty$ leads to $\Gamma \rightarrow 0$ and $\beta(r) \rightarrow \delta(r)$ (the Dirac delta function), which means that the attractiveness is almost zero in the sight of other fireflies, or the fireflies are short-sighted. This is equivalent to the case where the fireflies fly in a very foggy region randomly. No other fireflies can be seen, and each firefly roams in a completely random way.
Therefore, this corresponds to the completely random search method. As the firefly algorithm usually operates somewhere between these two extremes, it is possible to adjust the parameters $\gamma$ and $\alpha$ so that it can outperform both the random search and PSO. In fact, FA can find the global optima as well as all the local optima simultaneously in a very effective manner. This advantage will be demonstrated in detail later in the implementation. A further advantage of FA is that different fireflies work almost independently; it is thus particularly suitable for parallel implementation. It is even better than genetic algorithms and PSO because fireflies aggregate more closely around each optimum (without jumping around as in the case of genetic algorithms). The interactions between different subregions are minimal in parallel implementation.

\begin{figure}
\caption{Michalewicz's function for two independent variables with a global minimum $f_* \approx -1.801$ at $(2.20319,1.57049)$. }
\label{yangfig-100}
\end{figure}

\section{Multimodal Optimization with Multiple Optima}

\subsection{Validation}

In order to demonstrate how the firefly algorithm works, we have implemented it in Matlab. We will use various test functions to validate the new algorithm. As an example, we now use the FA to find the global optimum of the Michalewicz function
\begin{equation}
f(\ff{x})=-\sum_{i=1}^d \sin (x_i) [\sin (\kk{i x_i^2}{\pi})]^{2m},
\end{equation}
where $m=10$ and $d=1,2,...$. The global minimum $f_* \approx -1.801$ in 2-D occurs at $(2.20319,1.57049)$, which can be found after about 400 evaluations for 40 fireflies after 10 iterations (see Fig. \ref{yangfig-100} and Fig. \ref{yangfig-200}). This is much more efficient than most existing metaheuristic algorithms. In the above simulations, the values of the parameters are $\alpha=0.2$, $\gamma=1$ and $\beta_0=1$.
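Before turning to tougher functions, the validation run above can be reproduced in outline. The following is a minimal Python sketch of the FA movement rule applied to the 2-D Michalewicz function (not the paper's Matlab implementation; the search box $[0,\pi]^2$, the seed, and the helper names are our own choices, while $\alpha=0.2$, $\gamma=1$, $\beta_0=1$, $n=40$ and 10 iterations follow the text):

```python
import math
import random

def michalewicz(x, m=10):
    """Michalewicz's test function; the 2-D global minimum is about -1.801 at (2.20319, 1.57049)."""
    return -sum(math.sin(xi) * math.sin((i + 1) * xi * xi / math.pi) ** (2 * m)
                for i, xi in enumerate(x))

def firefly_minimize(f, d=2, n=40, iters=10, alpha=0.2, beta0=1.0, gamma=1.0,
                     lo=0.0, hi=math.pi, seed=0):
    """Sketch of FA: x_i <- x_i + beta0 e^{-gamma r^2}(x_j - x_i) + alpha (rand - 1/2)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(d)] for _ in range(n)]
    best = min(X, key=f)
    best_f = f(best)
    for _ in range(iters):
        I = [f(x) for x in X]                       # light intensity ~ objective value
        for i in range(n):
            for j in range(n):
                if I[j] < I[i]:                     # firefly j is brighter: move i towards j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    step = beta0 * math.exp(-gamma * r2)
                    X[i] = [a + step * (b - a) + alpha * (rng.random() - 0.5)
                            for a, b in zip(X[i], X[j])]
                    I[i] = f(X[i])
                    if I[i] < best_f:               # keep the best-so-far solution
                        best, best_f = list(X[i]), I[i]
    return best, best_f

best_x, best_f = firefly_minimize(michalewicz)
```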
\begin{figure}
\caption{The initial 40 fireflies (left) and their locations after $10$ iterations (right). }
\label{yangfig-200}
\end{figure}

We have also used much tougher test functions. For example, Yang described a multimodal function which looks like a standing-wave pattern \cite{Yang3}:
\begin{equation}
f(\ff{x})=\Big[ e^{-\sum_{i=1}^d (x_i/a)^{2 m}} - 2 e^{-\sum_{i=1}^d x_i^2} \Big] \cdot \prod_{i=1}^d \cos^2 x_i, \qquad m=5.
\end{equation}
It is multimodal with many local peaks and valleys, and it has a unique global minimum $f_*=-1$ at $(0,0,...,0)$ in the region $-20 \le x_i \le 20$ where $i=1,2,...,d$ and $a=15$. The 2D landscape of Yang's function is shown in Fig. \ref{yangfig-300}.

\subsection{Comparison of FA with PSO and GA}

Various studies show that PSO algorithms can outperform genetic algorithms (GA) \cite{Gold} and other conventional algorithms for solving many optimization problems. This is partially due to the fact that the broadcasting ability of the current best estimates gives better and quicker convergence towards optimality. A general framework for evaluating the statistical performance of evolutionary algorithms has been discussed in detail by Shilane et al. \cite{Shilane}. Now we will compare the Firefly Algorithms with PSO and genetic algorithms for various standard test functions.

For the genetic algorithms, we have used the standard version with no elitism, with a mutation probability of $p_m=0.05$ and a crossover probability of $0.95$. For the particle swarm optimization, we have also used the standard version with the learning parameters $\alpha \approx \beta \approx 2$ without the inertia correction \cite{Gold,Ken,Ken2}. We have used various population sizes from $n=15$ to $200$, and found that for most problems it is sufficient to use $n=15$ to $50$. Therefore, we have used a fixed population size of $n=40$ in all our simulations for comparison.
After implementing these algorithms using Matlab, we have carried out extensive simulations; each algorithm has been run at least 100 times to allow meaningful statistical analysis. The algorithms stop when the variations of function values are less than a given tolerance $\epsilon \le 10^{-5}$. The results are summarized in Table 1, which reports the number of function evaluations required to reach the global optima. The entries are in the format: average number of evaluations $\pm$ standard deviation (success rate); for example, $3752 \pm 725 \; (99\%)$ means that the average (mean) number of function evaluations is 3752 with a standard deviation of 725, and that the success rate of finding the global optima for this algorithm is $99\%$. \begin{figure} \caption{Yang's function in 2D with a global minimum $f_* = -1$ at $(0,0)$ where $a=15$. } \label{yangfig-300} \end{figure} \begin{table}[ht] \caption{Comparison of algorithm performance} \centering \begin{tabular}{cccc} \hline \hline Functions/Algorithms & GA & PSO & FA \\ \hline Michalewicz's ($d\!\!=\!\!16$) & $89325 \pm 7914 (95 \%)$ & $6922 \pm 537 (98\%)$ & $3752 \pm 725 (99\%)$ \\ Rosenbrock's ($d\!\!=\!\!16$) & $55723 \pm 8901 (90\%)$ & $32756 \pm 5325 (98\%)$ & $7792 \pm 2923 (99\%) $ \\ De Jong's ($d\!\!=\!\!256$) & $25412 \pm 1237 (100\%)$ & $17040 \pm 1123 (100\%)$ & $7217 \pm 730 (100\%)$\\ Schwefel's ($d\!\!=\!\!128$) & $227329 \pm 7572 (95\%)$ & $14522 \pm 1275 (97\%)$ & $9902 \pm 592 (100\%)$ \\ Ackley's ($d\!\!=\!\!128$) & $32720 \pm 3327 (90\%)$ & $23407 \pm 4325 (92\%)$ & $5293 \pm 4920 (100\%)$ \\ Rastrigin's & $110523 \pm 5199 (77 \%)$ & $79491 \pm 3715 (90\%)$ & $15573 \pm 4399 (100\%)$ \\ Easom's & $19239 \pm 3307 (92\%)$ & $17273 \pm 2929 (90\%)$ & $7925 \pm 1799 (100\%)$ \\ Griewank's & $70925 \pm 7652 (90\%)$ & $55970 \pm 4223 (92\%)$ & $12592 \pm 3715 (100\%)$ \\ Shubert's (18 minima) & $54077 \pm 4997 (89\%)$ & $23992 \pm 3755 (92\%)$ & $12577 \pm 2356 (100\%)$ \\ Yang's ($d=16$) & $27923 \pm 3025 (83\%)$ & $14116 \pm 2949 (90\%)$ & $7390 \pm
2189 (100\%)$ \\ \hline \end{tabular} \end{table} We can see that the FA is much more efficient in finding the global optima, with higher success rates. Each function evaluation is virtually instantaneous on a modern personal computer. For example, the computing time for 10,000 evaluations on a 3GHz desktop is about 5 seconds. Even with graphics for displaying the locations of the particles and fireflies, it usually takes less than a few minutes. It is worth pointing out that more formal statistical hypothesis testing can be used to verify such significance. \section{Conclusions} In this paper, we have formulated a new firefly algorithm and analyzed its similarities and differences with particle swarm optimization. We then implemented and compared these algorithms. Our simulation results for finding the global optima of various test functions suggest that particle swarm optimization often outperforms traditional algorithms such as genetic algorithms, while the new firefly algorithm is superior to both PSO and GA in terms of both efficiency and success rate. This implies that FA is potentially more powerful in solving NP-hard problems, which will be investigated further in future studies. The basic firefly algorithm is very efficient, but we can see that the solutions are still changing as the optima are approached. It is possible to improve the solution quality by reducing the randomness gradually. A further improvement on the convergence of the algorithm is to vary the randomization parameter $\alpha$ so that it decreases gradually as the optima are approached. These could form important topics for further research. Furthermore, as a relatively straightforward extension, the Firefly Algorithm can be modified to solve multiobjective optimization problems. In addition, the application of firefly algorithms in combination with other algorithms may form an exciting area for further research. \end{document}
Methodology Article | Open Access | Published: 27 February 2017 MINT: a multivariate integrative method to identify reproducible molecular signatures across independent experiments and platforms Florian Rohart1, Aida Eslami2, Nicholas Matigian1, Stéphanie Bougeard3 & Kim-Anh Lê Cao1 BMC Bioinformatics, volume 18, Article number: 128 (2017) Molecular signatures identified from high-throughput transcriptomic studies often have poor reliability and fail to reproduce across studies. One solution is to combine independent studies into a single integrative analysis, additionally increasing sample size. However, the different protocols and technological platforms across transcriptomic studies produce unwanted systematic variation that strongly confounds the integrative analysis results. When studies aim to discriminate an outcome of interest, the common approach is a sequential two-step procedure: unwanted systematic variation removal techniques are applied prior to classification methods. To limit the risk of overfitting and over-optimistic results of a two-step procedure, we developed a novel multivariate integration method, MINT, that simultaneously accounts for unwanted systematic variation and identifies predictive gene signatures with greater reproducibility and accuracy. In two biological examples on the classification of three human cell types and four subtypes of breast cancer, we combined high-dimensional microarray and RNA-seq data sets and MINT identified highly reproducible and relevant gene signatures predictive of a given phenotype. MINT led to superior classification and prediction accuracy compared to the existing sequential two-step procedures. MINT is a powerful approach and the first of its kind to solve the integrative classification framework in a single step by combining multiple independent studies.
MINT is computationally fast and is implemented as part of the mixOmics R CRAN package, available at http://www.mixOmics.org/mixMINT/ and http://cran.r-project.org/web/packages/mixOmics/. High-throughput technologies, based on microarray and RNA-sequencing, are now being used to identify biomarkers or gene signatures that distinguish disease subgroups, predict cell phenotypes or classify responses to therapeutic drugs. However, few of these findings are reproduced when assessed in subsequent studies and even fewer lead to clinical applications [1, 2]. The poor reproducibility of identified gene signatures is most likely a consequence of high-dimensional data, in which the number of genes or transcripts being analysed is very high (often several thousand) relative to a comparatively small sample size (<20). One way to increase sample size is to combine raw data from independent experiments in an integrative analysis. This would improve both the statistical power of the analysis and the reproducibility of the gene signatures that are identified [3]. However, integrating transcriptomic studies with the aim of classifying biological samples based on an outcome of interest (integrative classification) has a number of challenges. Transcriptomic studies often differ from each other in a number of ways, such as in their experimental protocols or in the technological platform used. These differences can lead to so-called 'batch-effects', or systematic variation across studies, which is an important source of confounding [4]. Technological platform, in particular, has been shown to be an important confounder that affects the reproducibility of transcriptomic studies [5]. In the MicroArray Quality Control (MAQC) project, poor overlap of differentially expressed genes was observed across different microarray platforms (∼ 60%), with low concordance observed between microarray and RNA-seq technologies specifically [6].
Therefore, these confounding factors and sources of systematic variation must be accounted for when combining independent studies, to enable genuine biological variation to be identified. The common approach to integrative classification is sequential. A first step consists of removing batch effects by applying, for instance, ComBat [7], FAbatch [8], Batch Mean-Centering [9], LMM-EH-PS [10], RUV-2 [4] or YuGene [11]. A second step fits a statistical model to classify biological samples and predict the class membership of new samples. A range of classification methods exists for these purposes, including machine learning approaches (e.g. random forests [12, 13] or Support Vector Machines [14–16]) as well as multivariate linear approaches (Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA) [17], or sparse PLS-DA [18]). The major pitfall of the sequential approach is a risk of over-optimistic results from overfitting of the training set. This leads to signatures that cannot be reproduced on test sets. Moreover, most proposed classification models have not been objectively validated on an external and independent test set. Thus, spurious conclusions can be generated when using these methods, leading to limited potential for translating results into reliable clinical tools [2]. For instance, most classification methods require the choice of a parameter (e.g. sparsity), which is usually optimised with cross-validation (data are divided into k subsets or 'folds' and each fold is used once as an internal test set). Unless the removal of batch effects is performed independently on each fold, the folds are not independent and this leads to over-optimistic classification accuracy on the internal test sets. Hence, batch removal methods must be used with caution.
For instance, ComBat cannot remove unwanted variation in an independent test set alone, as it requires the test set to be normalised with the learning set in a transductive rather than inductive approach [19]. This is a clear example where over-fitting and over-optimistic results can be an issue, even when a test set is considered. To address existing limitations of current data integration approaches and the poor reproducibility of results, we propose a novel Multivariate INTegrative method, MINT. MINT is the first approach of its kind that integrates independent data sets while simultaneously accounting for unwanted (study) variation, classifying samples and identifying key discriminant variables. MINT predicts the class of new samples from external studies, which enables a direct assessment of its performance. It also provides insightful graphical outputs to improve interpretation and inspect each study during the integration process. We validated MINT in a subset of the MAQC project, which was carefully designed to enable assessment of unwanted systematic variation. We then combined microarray and RNA-seq experiments to classify samples from three human cell types (human Fibroblasts (Fib), human Embryonic Stem Cells (hESC) and human induced Pluripotent Stem Cells (hiPSC)) and from four classes of breast cancer (subtypes Basal, HER2, Luminal A and Luminal B). We use these datasets to demonstrate the reproducibility of gene signatures identified by MINT. We use the following notations. Let X denote a data matrix of size N observations (rows) × P variables (e.g. gene expression levels, in columns) and Y a dummy matrix indicating each sample's class membership, of size N observations (rows) × K outcome categories (columns). We assume that the data are partitioned into M groups corresponding to each independent study m: $\{(X^{(1)},Y^{(1)}),\ldots,(X^{(M)},Y^{(M)})\}$ so that $\sum _{m=1}^{M} n_{m}=N$, where $n_m$ is the number of samples in group m, see Additional file 1: Figure S1.
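To make the setup concrete, the sketch below builds the dummy outcome matrix Y from class labels and applies the per-study centering and unit-variance scaling used throughout the paper; the function names are illustrative, not the mixOmics API:

```python
import numpy as np

def dummy_matrix(labels, classes):
    """N x K indicator matrix: Y[i, k] = 1 if sample i belongs to class k."""
    Y = np.zeros((len(labels), len(classes)))
    for i, lab in enumerate(labels):
        Y[i, classes.index(lab)] = 1.0
    return Y

def scale_per_study(X, study):
    """Center and scale each study block of X independently
    (each variable gets mean 0 and unit variance within its study)."""
    Xs = X.astype(float).copy()
    for m in np.unique(study):
        rows = (study == m)
        mu = Xs[rows].mean(axis=0)
        sd = Xs[rows].std(axis=0, ddof=1)
        sd[sd == 0] = 1.0          # guard against constant variables
        Xs[rows] = (Xs[rows] - mu) / sd
    return Xs
```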
Each variable from the data sets $X^{(m)}$ and $Y^{(m)}$ is centered and has unit variance. We write X and Y for the concatenation of all $X^{(m)}$ and $Y^{(m)}$, respectively. Note that if a known internal batch effect is present in a study, this study should be split according to that batch-effect factor into several sub-studies considered as independent. For $n\in \mathbb {N}$, we denote for all $a\in \mathbb {R}^{n}$ its ℓ1 norm $||a||_{1}=\sum_{j=1}^{n}|a_{j}|$, its ℓ2 norm $||a||_{2}=\left(\sum_{j=1}^{n}a_{j}^{2}\right)^{1/2}$, and $(a)_{+}$ its positive part. For any matrix we denote by ⊤ its transpose. PLS-based classification methods to combine independent studies PLS approaches have been extended to classify samples Y from a data matrix X by maximising the covariance between latent components. Specifically, latent components are built from the original X variables to summarise the information and reduce the dimension of the data while discriminating the Y outcome. Samples are then projected into a smaller space spanned by the latent components. We first detail the classical PLS-DA approach and then describe mgPLS, a PLS-based model we previously developed to model a group (study) structure in X. PLS-DA Partial Least Squares Discriminant Analysis [17] is an extension of PLS for a classification framework where Y is a dummy matrix indicating sample class membership. In our study, we applied PLS-DA as an integrative approach by naively concatenating all studies. Briefly, PLS-DA is an iterative method that constructs H successive artificial (latent) components $t_h = X_h a_h$ and $u_h = Y_h b_h$ for $h=1,\ldots,H$, where the h-th component $t_h$ (respectively $u_h$) is a linear combination of the X (Y) variables. H denotes the dimension of the PLS-DA model. The weight coefficient vector $a_h$ ($b_h$) is the loading vector that indicates the importance of each variable in defining the component.
For each dimension h=1,…,H PLS-DA seeks to maximize $$ \underset{||a_{h}||_{2} = ||b_{h}||_{2} =1}{\max }cov(X_{h} a_{h}, Y_{h} b_{h}), $$ where $X_h, Y_h$ are residual matrices (obtained through a deflation step, as detailed in [18]). The PLS-DA algorithm is described in Additional file 1: Supplemental Material S1. The PLS-DA model assigns to each sample i a set of H score pairs $(t_{h}^{i}, u_{h}^{i})$, which effectively represents the projection of that sample into the X- or Y-space spanned by those PLS-components. As H<<P, the projection space is small, allowing for dimension reduction as well as insightful sample plot representations (e.g. graphical outputs in "Results" section). While PLS-DA ignores the data group structure inherent to each independent study, it can give satisfactory results when the between-group variance is smaller than the within-group variance, or when combined with extensive data subsampling to account for systematic variation across platforms [21]. mgPLS Multi-group PLS is an extension of the PLS framework we recently proposed to model grouped data [22, 23], which is relevant for our particular case where the groups represent independent studies. In mgPLS, the PLS-components of each group are constrained to be built based on the same loading vectors in X and Y. These global loading vectors thus allow the samples from each group or study to be projected into the same common space spanned by the PLS-components. We extended the original unsupervised approach to a supervised one by using a dummy matrix Y, as in PLS-DA, to classify samples while modelling the group structure.
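For a single dimension, the PLS-DA maximisation above has a closed-form solution: the optimal loading vectors are the leading left and right singular vectors of $X^{\top}Y$. A minimal NumPy sketch (illustrative only; not the mixOmics implementation, and without the deflation step):

```python
import numpy as np

def pls_first_component(X, Y):
    """First PLS-DA component: (a, b) maximise cov(Xa, Yb) subject to
    ||a|| = ||b|| = 1; they are the leading left/right singular vectors
    of M = X.T @ Y (columns of X and Y assumed centered)."""
    M = X.T @ Y
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    a, b = U[:, 0], Vt[0, :]
    t, u = X @ a, Y @ b          # the latent components t_1 and u_1
    return a, b, t, u
```

At the optimum, $t^{\top}u = a^{\top}X^{\top}Yb$ equals the largest singular value of $X^{\top}Y$, which is a convenient sanity check.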
For each dimension h=1,…,H mgPLS-DA seeks to maximize $$ \underset{||a_{h}||_{2} = ||b_{h}||_{2} =1}{\max }\sum_{m=1}^{M} n_{m} cov\left(X^{(m)}_{h}a_{h}, Y^{(m)}_{h} b_{h}\right), $$ where $a_h$ and $b_h$ are the global loading vectors common to all groups, $t_{h}^{(m)}=X_{h}^{(m)}a_{h}$ and $u_{h}^{(m)}=Y_{h}^{(m)}b_{h}$ are the group-specific (partial) PLS-components, and $X_{h}^{(m)}$ and $Y_{h}^{(m)}$ are the residual (deflated) matrices. The global loading vectors $(a_h, b_h)$ and global components $(t_h = X_h a_h, u_h = Y_h b_h)$ make it possible to assess overall classification accuracy, while the group-specific loadings and components provide powerful graphical outputs for each study that is integrated in the analysis. Global and group-specific components and loadings are represented in Additional file 1: Figure S2. The next development, described below, is to include internal variable selection in mgPLS-DA for large-dimensional data sets. Our novel multivariate integrative method MINT simultaneously integrates independent studies and selects the most discriminant variables to classify samples and predict the class of new samples. MINT seeks a common projection space for all studies that is defined on a small subset of discriminative variables and that displays an analogous discrimination of the samples across studies. The identified variables share common information across all studies and therefore represent a reproducible signature that helps characterise biological systems. MINT further extends mgPLS-DA by including a ℓ1-penalisation on the global loading vector $a_h$ to perform variable selection. For each dimension h=1,…,H the MINT algorithm seeks to maximize $$ \underset{||a_{h}||_{2} = ||b_{h}||_{2} =1}{\max }\sum_{m=1}^{M} n_{m} cov(X_{h}^{(m)}a_{h}, Y_{h}^{(m)}b_{h}) - \lambda_{h}||a_{h}||_{1}, $$ where in addition to the notations from Eq.
(2), $\lambda_h$ is a non-negative parameter that controls the amount of shrinkage on the global loading vector $a_h$, and thus the number of non-zero weights. Similarly to the Lasso [24] or sparse PLS-DA [18], the added ℓ1 penalisation in MINT improves interpretability of the PLS-components, which are now defined only on a set of selected biomarkers from X (those with non-zero weights) identified in the linear combination $X_{h}^{(m)}a_{h}$. The ℓ1 penalisation is effectively solved in the MINT algorithm using soft-thresholding (see pseudo Algorithm 1). In addition to the integrative classification framework, MINT was extended to an integrative regression framework (multiple multivariate regression, Additional file 1: Supplemental Material S2). Class prediction and parameters tuning with MINT MINT centers and scales each study from the training set, so that each variable has mean 0 and variance 1, as in any PLS method. Therefore, a similar pre-processing needs to be applied to test sets. If a test sample belongs to a study that is part of the training set, then we apply the same scaling coefficients as for the training study. This is required so that MINT applied on a single study provides the same results as PLS. If the test study is completely independent, then it is centered and scaled separately. After scaling the test samples, the prediction framework of PLS is used to estimate the dummy matrix $Y_{test}$ of an independent test set $X_{test}$ [25], where each row of $Y_{test}$ sums to 1 and each column represents a class of the outcome. A class membership is assigned (predicted) to each test sample by using the maximal distance, as described in [18]. It consists of assigning the class with the maximal positive value in $Y_{test}$. The main parameter to tune in MINT is the penalty $\lambda_h$ for each PLS-component h, and tuning is usually performed using Cross-Validation (CV).
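The soft-thresholding operator behind the ℓ1 penalisation, and the maximal-distance prediction rule described above, can be sketched as follows (illustrative helper names; not the mixOmics code):

```python
import numpy as np

def soft_threshold(v, lam):
    """sign(v) * (|v| - lam)_+ : shrinks loading weights towards zero and
    sets small ones exactly to zero, which yields the sparse selection."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def predict_max_dist(Y_hat, classes):
    """Maximal-distance rule: assign each test sample the class whose
    column of the estimated dummy matrix has the largest value."""
    return [classes[k] for k in np.argmax(Y_hat, axis=1)]
```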
In practice, the parameter $\lambda_h$ can equivalently be replaced by the number of variables to select on each component, which is our preferred, user-friendly option. The assessment criterion in the CV can be based on the proportion of misclassified samples, the proportion of false or true positives, or, as in our case, the balanced error rate (BER). BER is calculated as the averaged proportion of wrongly classified samples in each class, and thus gives greater weight to classes with small sample sizes. We consider BER to be a more objective performance measure than the overall misclassification error rate when dealing with unbalanced classes. MINT tuning is computationally efficient as it takes advantage of the group data structure in the integrative study. We used Leave-One-Group-Out Cross-Validation (LOGOCV), which consists of performing CV where each group or study $m$, $m=1,\ldots,M$, is left out once. LOGOCV realistically reflects the true case scenario where prediction is performed on independent external studies, based on a reproducible signature identified on the training set. Finally, the total number of components H in MINT is set to $K-1$, where K is the number of classes, similarly to PLS-DA and ℓ1-penalised PLS-DA models [18]. We demonstrate the ability of MINT to identify the true positive genes on the MAQC project, then highlight the strong properties of our method to combine independent data sets in order to identify reproducible and predictive gene signatures on two other biological studies. The MicroArray quality control (MAQC) project. The extensive MAQC project focused on assessing microarray technology reproducibility in a controlled environment [5]. Two reference samples, the Universal Human Reference (UHR) and Human Brain Reference (HBR) RNA samples, and two mixtures of the original samples were considered.
Technical replicates were obtained from three different array platforms -Illumina, AffyHuGene and AffyPrime- for each of the four biological samples A (100% UHR), B (100% HBR), C (75% UHR, 25% HBR) and D (25% UHR and 75% HBR). Data were downloaded from Gene Expression Omnibus (GEO) - GSE56457. In this study, we focused on identifying biomarkers that discriminate A vs. B and C vs. D. The experimental design is referenced in Additional file 1: Table S1. Stem cells. We integrated 15 transcriptomics microarray datasets to classify three types of human cells: human Fibroblasts (Fib), human Embryonic Stem Cells (hESC) and human induced Pluripotent Stem Cells (hiPSC). As there exists a biological hierarchy among these three cell types, two sub-classification problems are of interest in our analysis, which we will address simultaneously with MINT. On the one hand, differences between pluripotent (hiPSC and hESC) and non-pluripotent cells (Fib) are well-characterised and are expected to contribute to the main biological variation. Our first level of analysis will therefore benchmark MINT against the gold standard in the field. On the other hand, hiPSC are genetically reprogrammed to behave like hESC and both cell types are commonly assumed to be alike. However, differences have been reported in the literature [26–28], justifying the second and more challenging level of classification analysis between hiPSC and hESC. We used the cell type annotations of the 342 samples as provided by the authors of the 15 studies. The stem cell dataset provides an excellent showcase study to benchmark MINT against existing statistical methods to solve a rather ambitious classification problem. Each of the 15 studies was assigned to either a training or test set. Platforms uniquely represented were assigned to the training set and studies with only one sample in one class were assigned to the test set. Remaining studies were randomly assigned to training or test set. 
Eventually, the training set included eight datasets (210 samples) derived on five commercial platforms and the independent test set included the remaining seven datasets (132 samples) derived on three platforms (Table 1). Table 1 Stem cells experimental design The pre-processed files were downloaded from the http://www.stemformatics.org collaborative platform [29]. Each dataset was background corrected, log2 transformed, YuGene normalized and mapped from probe IDs to Ensembl IDs as previously described in [11], resulting in 13,313 unique Ensembl gene identifiers. In cases where datasets contained multiple probes for the same Ensembl gene ID, the highest expressed probe was chosen as the representative of that gene in that dataset. The choice of YuGene normalisation was motivated by the need to normalise each sample independently rather than as part of a whole study (e.g. existing methods ComBat [7], quantile normalisation (RMA [30])), to effectively limit over-fitting during the CV evaluation process. Breast cancer. We combined whole-genome gene-expression data from two cohorts from the Molecular Taxonomy of Breast Cancer International Consortium project (METABRIC [31]) and two cohorts from the Cancer Genome Atlas (TCGA [32]) to classify the intrinsic subtypes Basal, HER2, Luminal A and Luminal B, as defined by the PAM50 signature [20]. The METABRIC cohort data were made available upon request, and were processed by [31]. The TCGA cohorts are gene-expression data from RNA-seq and microarray platforms. RNA-seq data were normalised using RSEM (RNA-Seq by Expectation-Maximization) and percentile-ranked gene-level transcription estimates. The microarray data were processed as described in [32]. The training set consisted of three cohorts (TCGA RNA-seq and both METABRIC microarray studies), including the expression levels of 15,803 genes on 2,814 samples; the test set included the TCGA microarray cohort with 254 samples (Table 2).
Two analyses were conducted, which either included or discarded the PAM50 genes from the data. The first analysis aimed at recovering the PAM50 genes used to classify the samples. The second analysis was performed on 15,755 genes and aimed at identifying an alternative signature to the PAM50. Table 2 Experimental design of four breast cancer cohorts including 4 cancer subtypes: Basal, HER2, Luminal A (LumA) and Luminal B (LumB) Performance comparison with sequential classification approaches We compared MINT with sequential approaches that combine batch-effect removal approaches with classification methods. As a reference, classification methods were also used on their own on a naive concatenation of all studies. Batch-effect removal methods included Batch Mean-Centering (BMC, [9]), ComBat [7], linear models (LM) or linear mixed models (LMM), and classification methods included PLS-DA, sPLS-DA [18], mgPLS [22, 23] and Random Forests (RF, [12]). For LM and LMM, linear models were fitted on each gene and the residuals were extracted as batch-corrected gene expression [33, 34]. The study effect was set as a fixed effect with LM or as a random effect with LMM. No sample outcome (e.g. cell type) was included. Predictions with ComBat-normalised data were obtained as described in [19]. In this study, we did not include methods that require extra information (such as control genes for RUV-2 [4]) or methods that are not widely available to the community (such as LMM-EH [10]). Classification methods were chosen so as to simultaneously discriminate all classes. With the exception of sPLS-DA, none of these methods performs internal variable selection. The multivariate methods PLS-DA, mgPLS and sPLS-DA were run on K−1 components, and sPLS-DA was tuned using 5-fold CV on each component. All classification methods were combined with each batch-removal method, with the exception of mgPLS, which already includes a study structure in the model.
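The balanced error rate (BER) used as the evaluation criterion in these comparisons can be computed as follows (a sketch with illustrative names):

```python
import numpy as np

def balanced_error_rate(y_true, y_pred):
    """BER: mean over classes of the per-class misclassification proportion,
    so that a small class weighs as much as a large one."""
    yt, yp = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(yp[yt == c] != c) for c in np.unique(yt)]
    return float(np.mean(per_class))
```

For instance, with 8 samples of class A and 2 of class B, predicting everything as A gives an overall error rate of only 20% but a BER of 50%, which better reflects the failure on the minority class.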
MINT and PLS-DA-like approaches use a prediction threshold based on distances (see "Class prediction and parameters tuning with MINT" section) that optimally determines the class membership of test samples, and as such do not require receiver operating characteristic (ROC) curves and area under the curve (AUC) performance measures. In addition, those measures are limited to binary classification, which does not apply to our multi-class stem cell and breast cancer studies. Instead, we use the Balanced Error Rate to objectively evaluate the classification and prediction performance of the methods for classes with unbalanced sample sizes ("MINT" section). Classification accuracies for each class were also reported. Validation of the MINT approach to identify signatures agnostic to batch effect The MAQC project processed technical replicates of four well-characterised biological samples A, B, C and D across three platforms. Thus, we assumed that genes that are differentially expressed (DEG) in every single platform are true positives. We primarily focused on identifying biomarkers that discriminate C vs. D, and report the results of A vs. B in Additional file 1: Supplemental Material S3, Figure S3. Differential expression analysis of C vs. D was conducted on each of the three microarray platforms using ANOVA, showing an overlap of 1385 DEG (FDR $<10^{-3}$ [35]), which we considered as true positives. This corresponded to 62.6% of all DEG for Illumina, 30.5% for AffyHuGene and 21.0% for AffyPrime (Additional file 1: Figure S4). We observed that conducting a differential analysis on the concatenated data from the three microarray platforms without accommodating for batch effects resulted in 691 DEG, of which only 56% (387) were true positive genes. This implies that the remaining 44% (304) of these genes were false positives, and hence were not DE in at least one study.
The high percentage of false positives was explained by a Principal Component Analysis (PCA) sample plot that showed samples clustering by platform (Additional file 1: Figure S4), which confirmed that the major source of variation in the combined data was attributed to platforms rather than cell types. MINT selected a single gene, BCAS1, to discriminate the two biological classes C and D. BCAS1 was a true positive gene, as part of the common DEG, and was ranked 1 for Illumina, 158 for AffyPrime and 1182 for AffyHuGene. Since the biological samples C and D are very different, the selection of a single gene by MINT was not surprising. To further investigate the performance of MINT, we expanded the number of genes selected by MINT by decreasing its sparsity parameter (see Methods), and compared the overlap between this larger MINT signature and the true positive genes. We observed an overlap of 100% for a MINT signature of size 100, and an overlap of 89% for a signature of size 1385, which is the number of common DEG identified previously. The high percentage of true positives selected by MINT demonstrates its ability to identify a signature agnostic to batch effect. Limitations of common meta-analysis and integrative approaches A meta-analysis of eight stem cell studies, each including three cell types (Table 1, stem cell training set), highlighted a small overlap of the DEG lists obtained from the analysis of each separate study (FDR $<10^{-5}$, ANOVA, Additional file 1: Table S2). Indeed, the Takahashi study, with only 24 DEG, limited the overlap between all eight studies to only 5 DEG. This represents a major limitation of merging pre-analysed gene lists, as the concordance between DEG lists decreases when the number of studies increases. One alternative to meta-analysis is to perform an integrative analysis by concatenating all eight studies.
Similarly to the MAQC analysis, we first observed that the major source of variation in the combined data was attributed to study rather than cell type (Fig. 1 a). PLS-DA was applied to discriminate the samples according to their cell types, and it showed a strong study variation (Fig. 1 b), despite being a supervised analysis. Compared to unsupervised PCA (Fig. 1 a), the study effect was reduced for the fibroblast cells, but was still present for the similar cell types hESC and hiPSC. We reached similar conclusions when analysing the breast cancer data (Additional file 1: Supplemental Material S4, Figure S5). Stem cell study. a PCA on the concatenated data: a greater study variation than cell type variation is observed. b PLS-DA on the concatenated data clustered fibroblasts only. c MINT sample plot shows that each cell type is well clustered. d MINT performance: BER and classification accuracy for each cell type and each study MINT outperforms state-of-the-art methods We compared the classification accuracy of MINT to sequential methods where batch removal methods were applied prior to classification methods. In both the stem cell and breast cancer studies, MINT led to the best accuracy on the training set and the best reproducibility of the classification model on the test set (lowest Balanced Error Rate, BER; Fig. 2, Additional file 1: Figures S6 and S7). In addition, MINT consistently ranked first as the best performing method, followed by ComBat+sPLS-DA with an average rank of 4.5 (Additional file 1: Figure S8). Classification accuracy for both training and test sets for the stem cell and breast cancer studies (excluding PAM50 genes). The classification Balanced Error Rates (BER) are reported for all sixteen methods compared with MINT (in black) On the stem cell data, we found that fibroblasts were the easiest to classify for all methods, including those that do not accommodate unwanted variation (PLS-DA, sPLS-DA and RF, Additional file 1: Figure S6).
Classifying hiPSC vs. hESC proved more challenging for all methods, leading to a substantially lower classification accuracy than for fibroblasts. The analysis of the breast cancer data (excluding PAM50 genes) showed that methods that do not accommodate unwanted variation were able to correctly classify most of the samples from the training set, but failed at classifying any of the four subtypes on the external test set. As a consequence, all samples were predicted as LumB with PLS-DA and sPLS-DA, or as Basal with RF (Additional file 1: Figure S7). Thus, RF gave a satisfactory performance on the training set (BER = 18.5), but a poor performance on the test set (BER = 75). Additionally, we observed that the biomarker selection process substantially improved classification accuracy. On the stem cell data, LM+sPLSDA and MINT outperformed their non-sparse counterparts LM+PLSDA and mgPLS (Fig. 2, BER of 9.8 and 7.1 vs. 20.8 and 11.9, respectively). Finally, MINT was largely superior in terms of computational efficiency. The training step on the stem cell data, which includes 210 samples and 13,313 genes, was run in 1 s, compared to 8 s with the second best performing method ComBat+sPLS-DA (2013 MacBook Pro, 2.6 GHz, 16 GB memory). The popular method ComBat took 7.1 s to run, and sPLS-DA 0.9 s. The training step on the breast cancer data, which includes 2817 samples and 15,755 genes, was run in 37 s for MINT and 71.5 s for ComBat (30.8 s) + sPLS-DA (40.6 s).

Study-specific outputs with MINT

One of the main challenges when combining independent studies is to assess the concordance between studies. During the integration procedure, MINT proposes not only individual performance accuracy assessment, but also insightful graphical outputs that are study-specific and can serve as a quality control step to detect outlier studies. One particular example is the Takahashi study from the stem cell data, whose poor performance (Fig.
1 d) was further confirmed by the study-specific outputs (Additional file 1: Figure S9). Of note, this study was the only one generated on the Agilent technology, and its sample size accounted for only 4.2% of the training set. The sample plots from each individual breast cancer data set showed the strong ability of MINT to discriminate the breast cancer subtypes while integrating data sets generated from disparate transcriptomics platforms, microarrays and RNA-sequencing (Fig. 3 a–c). Those data sets were all pre-processed differently, and yet MINT was able to model an overall agreement between all studies; MINT successfully built a space based on a handful of genes in which samples from each study are discriminated in a homogeneous manner.

Fig. 3 MINT study-specific sample plots showing the projection of samples from a METABRIC Discovery, b METABRIC Validation and c TCGA RNA-seq experiments, in the same subspace spanned by the first two MINT components. The same subspace is also used to plot the d overall (integrated) data. e Balanced Error Rate and classification accuracy for each study and breast cancer subtype from the MINT analysis

MINT gene signature identified promising biomarkers

MINT is a multivariate approach that builds successive components to discriminate all categories (classes) indicated in an outcome variable. On the stem cell data, MINT selected 2 and 15 genes on the first two components, respectively (Additional file 1: Table S3). The first component clearly segregated the non-pluripotent cells (fibroblasts) from the two pluripotent cell types (hiPSC and hESC) (Fig. 1 c, d). Those pluripotent cells were subsequently separated on component two, with some expected overlap given the similarities between hiPSC and hESC. The two genes selected by MINT on component 1 were LIN28A and CAR, which were both found relevant in the literature.
Indeed, LIN28A was shown to be highly expressed in ESCs compared to fibroblasts [36, 37] and CAR has been associated with pluripotency [38]. Finally, despite the high heterogeneity of the hiPSC cells included in this study, MINT gave a high accuracy for hESC and hiPSC on independent test sets (93.9% and 77.9%, respectively; Additional file 1: Figure S6), suggesting that the 15 genes selected by MINT on component 2 have a high potential to explain the differences between those cell types (Additional file 1: Table S3). On the breast cancer study, we performed two analyses which either included or discarded the PAM50 genes that were used to define the four cancer subtypes Basal, HER2, Luminal A and Luminal B [20]. In the first analysis, we aimed to assess the ability of MINT to specifically identify the PAM50 key driver genes. MINT successfully recovered 37 of the 48 PAM50 genes present in the data (77%) on the first three components (7, 20 and 10, respectively). The overall signature included 30, 572 and 636 genes on each component (see Additional file 1: Table S4), i.e. 7.8% of the total number of genes in the data. The performance of MINT (BER of 17.8 on the training set and 11.6 on the test set) was superior to that of a PLS-DA on the PAM50 genes only (BER of 20.8 on the training set and a very high 75 on the test set). This result shows that the genes selected by MINT offer a complementary characterisation to the PAM50 genes. In the second analysis, we aimed to provide an alternative signature to the PAM50 genes by omitting them from the analysis. MINT identified 11, 272 and 253 genes on the first three components, respectively (Additional file 1: Table S5 and Figure S10). The genes selected on the first component gradually differentiated Basal, HER2 and Luminal A/B, while the second component genes further differentiated Luminal A from Luminal B (Fig. 3 d). The classification performance was similar in each study (Fig.
3 e), highlighting an excellent reproducibility of the biomarker signature across cohorts and platforms. Among the 11 genes selected by MINT on the first component, GATA3 is a transcription factor that regulates luminal epithelial cell differentiation in the mammary glands [39, 40]; it was found to be implicated in luminal types of breast cancer [41] and was recently investigated for its prognostic significance [42]. The MYB protein plays an essential role in haematopoiesis and has been associated with carcinogenesis [43, 44]. Other genes present in our MINT gene signature include XBP1 [45], AGR3 [46], CCDC170 [47] and TFF3 [48], which have been reported as being associated with breast cancer. The remaining genes have not been widely associated with breast cancer. For instance, TBC1D9 has been described as over-expressed in cancer patients [49, 50]. DNALI1 was first identified for its role in breast cancer in [51], but there has been no report of further investigation. Although AFF3 has never been associated with breast cancer, it was recently proposed to play a pivotal role in adrenocortical carcinoma [52]. It is worth noting that these 11 genes were all included in the 30 genes previously selected when the PAM50 genes were included, and they are therefore valuable candidates to complement the PAM50 gene signature as well as to further characterise breast cancer subtypes.

Discussion

There is a growing need in the biological and computational community for tools that can integrate data from different microarray platforms with the aim of classifying samples (integrative classification). Although several efficient methods have been proposed to address the unwanted systematic variation when integrating data [4, 7, 9–11], these are usually applied as a pre-processing step before performing classification.
Such a sequential approach may lead to overfitting and over-optimistic results, due to the use of transductive modelling (such as prediction based on ComBat-normalised data [19]) and the use of a test set that is normalised or pre-processed together with the training set. To address this crucial issue, we proposed a new Multivariate INTegrative method, MINT, that simultaneously corrects for batch effects, classifies samples and selects the most discriminant biomarkers across studies. MINT seeks to identify a common projection space for all studies that is defined by a small subset of discriminative variables and that displays an analogous discrimination of the samples across studies. Therefore, MINT provides sample plots and classification performance specific to each study (Fig. 3). Among the compared methods, MINT was found to be the fastest and most accurate method to integrate and classify data from different microarray and RNA-seq platforms. Integrative approaches such as MINT are essential when combining multiple studies of complex data, to limit spurious conclusions from any downstream analysis. Current methods showed a high proportion of false positives (44% on the MAQC data) and exhibited very poor prediction accuracy (PLS-DA, sPLS-DA and RF, Fig. 2). For instance, RF was ranked second only to MINT on the breast cancer learning set, but it was ranked as the worst method on the test set. This reflects the absence of control for batch effects in these methods and supports the argument that assessing the presence of batch effects is a key preliminary step. Failure to do so, as shown in our study, can result in poor reproducibility of results in subsequent studies, and this would not be detected without an independent test set. We assessed the ability of MINT to identify relevant gene signatures that are reproducible and platform-agnostic.
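The leakage described above disappears when normalisation parameters are estimated on the training samples only and then frozen before touching the test set. A minimal pure-Python sketch of this discipline (illustrative; the helper names are ours, and this is simple gene-wise scaling, not ComBat itself):

```python
def fit_scaler(train_rows):
    """Estimate per-gene mean and standard deviation from the
    training samples only (rows = samples, columns = genes)."""
    n, p = len(train_rows), len(train_rows[0])
    means = [sum(row[j] for row in train_rows) / n for j in range(p)]
    sds = []
    for j in range(p):
        var = sum((row[j] - means[j]) ** 2 for row in train_rows) / n
        sds.append(var ** 0.5 or 1.0)  # guard against constant genes
    return means, sds

def apply_scaler(rows, means, sds):
    """Scale any samples with parameters frozen from training, so no
    information flows from the test set back into model building."""
    return [[(x - m) / s for x, m, s in zip(row, means, sds)]
            for row in rows]

train = [[1.0, 10.0], [3.0, 14.0]]
test = [[2.0, 12.0]]
means, sds = fit_scaler(train)       # fitted on training data only
scaled_test = apply_scaler(test, means, sds)
```

A transductive pipeline would instead re-estimate `means` and `sds` on the pooled train-and-test data, which is exactly the over-optimistic setting the comparison above penalises.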
MINT successfully integrated data from the MAQC project by selecting true positive genes that were also differentially expressed in each experiment. We also assessed MINT's capabilities in analysing the stem cell and breast cancer data. In these studies, MINT displayed the highest classification accuracy in the training sets and the highest prediction accuracy in the test sets, when compared to sixteen sequential procedures (Fig. 2). These results suggest that, in addition to being highly predictive, the discriminant variables identified by MINT are also of strong biological relevance. In the stem cell data, MINT identified two genes, LIN28A and CAR, to discriminate non-pluripotent cells (fibroblasts) from pluripotent cells (hiPSC and hESC). Pluripotency is well documented in the literature and OCT4 is currently the main known marker for undifferentiated cells [53–56]. However, MINT did not select OCT4 on the first component but instead identified two markers, LIN28A and CAR, that were ranked higher than OCT4 in the DEG list obtained on the concatenated data (see Additional file 1: Figures S11, S12). While the results from MINT still supported OCT4 as a marker of pluripotency, our analysis suggests that LIN28A and CAR are more reproducible markers of pluripotent cells, and could therefore be superior as substitutes for, or complements to, OCT4. Experimental validation would be required to further assess the potential of LIN28A and CAR as efficient markers. Several important issues require consideration when dealing with the general task of integrating data. First and foremost, sample classification is crucial and needs to be well defined. This was a particular challenge in the stem cell and breast cancer studies, which were generated by multiple research groups on different microarray and RNA-seq platforms.
For instance, the breast cancer subtype classification relied on the PAM50 intrinsic classifier proposed by [20], which we acknowledge is still controversial in the literature [31]. Similarly, the biological definition of hiPSC differs across research groups [26, 28], which results in poor reproducibility among experiments and makes the integration of stem cell studies challenging [21]. The expertise and exhaustive screening required to annotate samples homogeneously hinders data integration, and because it is a process upstream of the statistical analysis, data integration approaches, including MINT, cannot address it. A second issue in the general process of integrating data sets from different sources is data access and normalisation. As raw data are often not available, this results in the integration of data sets that have each been normalised differently, as was the case with the breast cancer data in our study. Despite this limitation, MINT produced satisfactory results in that study. We were also able to overcome this issue in the stem cell data by using the Stemformatics resource [29], where we had direct access to homogeneously pre-processed data (background-corrected, log2- and YuGene-transformed [11]). In general, variation in the normalisation processes of different data sets produces unwanted variation between studies, and we recommend that this should be avoided if possible. A final important issue in data integration involves accounting for both between-study differences and platform effects. When samples cluster by study and studies cluster by platform, the experimental platform, and not the study, is the biggest source of variation (e.g. 75% of the variance in the breast cancer data, Additional file 1: Figure S5). Indeed, there are inherent differences between commercial platforms that greatly magnify unwanted variability, as was discussed by [5] on the MAQC project.
As platform information and study effects are nested, MINT and other data integration methods dismiss the platform information and focus on the study effect only; indeed, each study is assumed to come from a single platform. MINT successfully integrated microarray and RNA-seq data, which suggests that such an approach will likely be sufficient in most scenarios. When applying MINT, additional considerations need to be taken into account. In order to reduce unwanted systematic variation, the method centers and scales each study as an initial step, similarly to BMC [9]. Therefore, only studies with a sample size > 3 can be included, either in the training or the test set. In addition, all outcome categories need to be represented in each study. Indeed, neither MINT nor any other classification method can perform satisfactorily in the extreme case where each study contains only a single outcome category, as the outcome and the study effect cannot then be distinguished.

Conclusions

We introduced MINT, a novel Multivariate INTegrative method, which is the first approach to integrate independent transcriptomics studies from different microarray and RNA-seq platforms by simultaneously correcting for batch effects, classifying samples and identifying key discriminant variables. We first validated the ability of MINT to select true positive genes when integrating the MAQC data across different platforms. MINT was then compared to sixteen sequential approaches and was shown to be the fastest and most accurate method to discriminate and predict three human cell types (human fibroblasts, human embryonic stem cells and human induced pluripotent stem cells) and four subtypes of breast cancer (Basal, HER2, Luminal A and Luminal B). The gene signatures identified by MINT contained existing and novel biomarkers that are strong candidates for an improved characterisation of the phenotype of interest.
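The per-study centering and scaling step described above, similar in spirit to BMC [9], can be sketched as follows (illustrative pure-Python; the actual implementation is in the mixOmics R package, not this code):

```python
def center_scale_by_study(X, study):
    """Center and scale each gene within each study separately.

    X is a list of samples (rows) by genes (columns); study gives one
    label per sample. A sketch of MINT's preprocessing, not mixOmics code.
    """
    out = [row[:] for row in X]
    p = len(X[0])
    for s in set(study):
        members = [i for i, lab in enumerate(study) if lab == s]
        if len(members) <= 3:
            raise ValueError("each study needs a sample size > 3")
        for j in range(p):
            vals = [X[i][j] for i in members]
            m = sum(vals) / len(vals)
            sd = (sum((v - m) ** 2 for v in vals) / (len(vals) - 1)) ** 0.5
            for i in members:
                out[i][j] = (X[i][j] - m) / (sd or 1.0)
    return out

# two toy studies with the same shape but a +10 location shift: after
# per-study scaling the study-level offset is gone and the two columns
# of values coincide
X = [[1.0], [2.0], [3.0], [6.0], [11.0], [12.0], [13.0], [16.0]]
study = ['a'] * 4 + ['b'] * 4
Z = center_scale_by_study(X, study)
```

Because the statistics are estimated per study, a handful of samples per study is required, hence the sample size > 3 condition mentioned above.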
In conclusion, MINT enables reliable integration and analysis of independent genomic data sets, outperforms existing sequential methods, and identifies reproducible genetic predictors across data sets. MINT is available through the mixMINT module in the mixOmics R package.

Abbreviations

BER: Balanced error rate; DEG: Differentially expressed gene; FDR: False discovery rate; Fib: Fibroblast; hESC: Human embryonic stem cells; hiPSC: Human induced pluripotent stem cells; LM: Linear model; LMM: Linear mixed model; MAQC: MicroArray quality control; MINT: Multivariate integration method; sPLS-DA: Sparse partial least square discriminant analysis; RF: Random forest

References

Pihur V, Datta S, Datta S. Finding common genes in multiple cancer types through meta-analysis of microarray experiments: A rank aggregation approach. Genomics. 2008; 92(6):400–3. Kim S, Lin C-W, Tseng GC. Metaktsp: a meta-analytic top scoring pair method for robust cross-study validation of omics prediction analysis. Bioinformatics. 2016; 32:1966–173. Lazar C, Meganck S, Taminau J, Steenhoff D, Coletta A, Molter C, Y.Weiss-Solis D, Duque R, Bersini H, Nowé A. Batch effect removal methods for microarray gene expression data integration: a survey. Brief Bioinform. 2012; 14(4):469–90. Gagnon-Bartsch JA, Speed TP. Using control genes to correct for unwanted variation in microarray data. Biostatistics. 2012; 13(3):539–52. Shi L, Reid LH, Jones WD, Shippy R, Warrington JA, Baker SC, Collins PJ, De Longueville F, Kawasaki ES, Lee KY, et al. The microarray quality control (maqc) project shows inter-and intraplatform reproducibility of gene expression measurements. Nat Biotechnol. 2006; 24(9):1151–61. Su Z, Labaj P, Li S, Thierry-Mieg J, et al. A comprehensive assessment of rna-seq accuracy, reproducibility and information content by the sequencing quality control consortium. Nat Biotechnol. 2014; 32(9):903–14. Johnson W, Li C, Rabinovic A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostatistics. 2007; 8(1):118–27.
Hornung R, Boulesteix AL, Causeur D. Combining location-and-scale batch effect adjustment with data cleaning by latent factor adjustment. BMC Bioinforma. 2016; 17(1):1. Sims AH, Smethurst GJ, Hey Y, Okoniewski MJ, Pepper SD, Howell A, Miller CJ, Clarke RB. The removal of multiplicative, systematic bias allows integration of breast cancer gene expression datasets–improving meta-analysis and prediction of prognosis. BMC Med Genomics. 2008; 1(1):42. Listgarten J, Kadie C, Schadt EE, Heckerman D. Correction for hidden confounders in the genetic analysis of gene expression. Proc Natl Acad Sci USA. 2010; 107(38):16465–70. Lê Cao KA, Rohart F, McHugh L, Korm O, Wells CA. YuGene: A simple approach to scale gene expression data derived from different platforms for integrated analyses. Genomics. 2014; 103:239–51. Breiman L. Random forests. Mach Learn. 2001; 45(1):5–32. Dudoit S, Fridlyand J, Speed TP. Comparison of discrimination methods for the classification of tumors using gene expression data. J Am Stat Assoc. 2002; 97(457):77–87. Guyon I, Weston J, Barnhill S, Vapnik V. Gene selection for cancer classification using support vector machines. Mach Learn. 2002; 46(1-3):389–422. Díaz-Uriarte R, De Andres SA. Gene selection and classification of microarray data using random forest. BMC Bioinforma. 2006; 7(1):1. Sowa JP, Atmaca Ö, Kahraman A, Schlattjan M, Lindner M, Sydor S, Scherbaum N, Lackner K, Gerken G, Heider D, et al. Non-invasive separation of alcoholic and non-alcoholic liver disease with predictive modeling. PloS ONE. 2014; 9(7):101444. Barker M, Rayens W. Partial least squares for discrimination. J Chemom. 2003; 17(3):166–73. Lê Cao KA, Boitard S, Besse P. Sparse PLS discriminant analysis: biologically relevant feature selection and graphical displays for multiclass problems. BMC Bioinforma. 2011; 12:253. Hughey JJ, Butte AJ. Robust meta-analysis of gene expression using the elastic net. Nucleic Acids Res. 2015; 43(12):79.
Parker JS, Mullins M, Cheang MC, Leung S, Voduc D, Vickery T, Davies S, Fauron C, He X, Hu Z, et al. Supervised risk predictor of breast cancer based on intrinsic subtypes. J Clin Oncol. 2009; 27(8):1160–7. Rohart F, Mason EA, Matigian N, Mosbergen R, Korn O, Chen T, Butcher S, Patel J, Atkinson K, Khosrotehrani K, Fisk NM, Lê Cao K, Wells CA. A molecular classification of human mesenchymal stromal cells. PeerJ. 2016; 4:1845. Eslami A, Qannari EM, Kohler A, Bougeard S. Multi-group PLS regression: application to epidemiology. In: New Perspectives in Partial Least Squares and Related Methods. New York: Springer; 2013. p. 243–55. Eslami A, Qannari EM, Kohler A, Bougeard S. Algorithms for multi-group PLS. J Chemometrics. 2014; 28(3):192–201. Tibshirani R. Regression shrinkage and selection via the lasso. J R Stat Soc Ser B Stat Methodol. 1996; 58(1):267–88. Tenenhaus M. La Régression PLS: Théorie et Pratique. Paris: Editions Technip; 1998. Bilic J, Belmonte JCI. Concise review: Induced pluripotent stem cells versus embryonic stem cells: close enough or yet too far apart? Stem Cells. 2012; 30(1):33–41. Chin MH, Mason MJ, Xie W, Volinia S, Singer M, Peterson C, Ambartsumyan G, Aimiuwu O, Richter L, Zhang J, et al. Induced pluripotent stem cells and embryonic stem cells are distinguished by gene expression signatures. Cell stem cell. 2009; 5(1):111–23. Newman AM, Cooper JB. Lab-specific gene expression signatures in pluripotent stem cells. Cell stem cell. 2010; 7(2):258–62. Wells CA, Mosbergen R, Korn O, Choi J, Seidenman N, Matigian NA, Vitale AM, Shepherd J. Stemformatics: visualisation and sharing of stem cell gene expression. Stem Cell Res. 2013; 10(3):387–95. Bolstad BM, Irizarry RA, Åstrand M, Speed TP. A comparison of normalization methods for high density oligonucleotide array data based on variance and bias. Bioinformatics. 2003; 19(2):185–93. Curtis C, Shah SP, Chin SF, Turashvili G, Rueda OM, Dunning MJ, Speed D, Lynch AG, Samarajiwa S, Yuan Y, et al.
The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups. Nature. 2012; 486(7403):346–52. Cancer Genome Atlas Network and others. Comprehensive molecular portraits of human breast tumours. Nature. 2012; 490(7418):61–70. Whitcomb BW, Perkins NJ, Albert PS, Schisterman EF. Treatment of batch in the detection, calibration, and quantification of immunoassays in large-scale epidemiologic studies. Epidemiology (Cambridge). 2010; 21(Suppl 4):44. Rohart F, San Cristobal M, Laurent B. Selection of fixed effects in high dimensional linear mixed models using a multicycle ecm algorithm. Comput Stat Data Anal. 2014; 80:209–22. Benjamini Y, Hochberg Y. Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc Ser B Stat Methodol. 1995; 57(1):289–300. Yu J, Vodyanik MA, Smuga-Otto K, Antosiewicz-Bourget J, Frane JL, Tian S, Nie J, Jonsdottir GA, Ruotti V, Stewart R, et al. Induced pluripotent stem cell lines derived from human somatic cells. Science. 2007; 318(5858):1917–20. Tsialikas J, Romer-Seibert J. LIN28: roles and regulation in development and beyond. Development. 2015; 142(14):2397–404. Krivega M, Geens M, Van de Velde H. CAR expression in human embryos and hESC illustrates its role in pluripotency and tight junctions. Reproduction. 2014; 148(5):531–44. Kouros-Mehr H, Slorach EM, Sternlicht MD, Werb Z. Gata-3 maintains the differentiation of the luminal cell fate in the mammary gland. Cell. 2006; 127(5):1041–55. Asselin-Labat ML, Sutherland KD, Barker H, Thomas R, Shackleton M, Forrest NC, Hartley L, Robb L, Grosveld FG, van der Wees J, et al. Gata-3 is an essential regulator of mammary-gland morphogenesis and luminal-cell differentiation. Nat Cell Biol. 2007; 9(2):201–9. Jiang YZ, Yu KD, Zuo WJ, Peng WT, Shao ZM. Gata3 mutations define a unique subtype of luminal-like breast cancer with improved survival. Cancer. 2014; 120(9):1329–37. 
McCleskey BC, Penedo TL, Zhang K, Hameed O, Siegal GP, Wei S. Gata3 expression in advanced breast cancer: prognostic value and organ-specific relapse. Am J Clin Path. 2015; 144(5):756–63. Vargova K, Curik N, Burda P, Basova P, Kulvait V, Pospisil V, Savvulidi F, Kokavec J, Necas E, Berkova A, et al. Myb transcriptionally regulates the mir-155 host gene in chronic lymphocytic leukemia. Blood. 2011; 117(14):3816–825. Khan FH, Pandian V, Ramraj S, Aravindan S, Herman TS, Aravindan N. Reorganization of metastamirs in the evolution of metastatic aggressive neuroblastoma cells. BMC Genomics. 2015; 16(1):1. Chen X, Iliopoulos D, Zhang Q, Tang Q, Greenblatt MB, Hatziapostolou M, Lim E, Tam WL, Ni M, Chen Y, et al. Xbp1 promotes triple-negative breast cancer by controlling the hif1 [agr] pathway. Nature. 2014; 508(7494):103–7. Garczyk S, von Stillfried S, Antonopoulos W, Hartmann A, Schrauder MG, Fasching PA, Anzeneder T, Tannapfel A, Ergönenc Y, Knüchel R, et al. Agr3 in breast cancer: Prognostic impact and suitable serum-based biomarker for early cancer detection. PloS ONE. 2015; 10(4):0122106. Yamamoto-Ibusuki M, Yamamoto Y, Fujiwara S, Sueta A, Yamamoto S, Hayashi M, Tomiguchi M, Takeshita T, Iwase H. C6orf97-esr1 breast cancer susceptibility locus: influence on progression and survival in breast cancer patients. Eur J Human Genet. 2015; 23(7):949–56. May FE, Westley BR. Tff3 is a valuable predictive biomarker of endocrine response in metastatic breast cancer. Endocr Relat Cancer. 2015; 22(3):465–79. Andres SA, Brock GN, Wittliff JL. Interrogating differences in expression of targeted gene sets to predict breast cancer outcome. BMC Cancer. 2013; 13(1):1. Andres SA, Smolenkova IA, Wittliff JL. Gender-associated expression of tumor markers and a small gene set in breast carcinoma. Breast. 2014; 23(3):226–33. Parris TZ, Danielsson A, Nemes S, Kovács A, Delle U, Fallenius G, Möllerström E, Karlsson P, Helou K. 
Clinical implications of gene dosage and gene expression patterns in diploid breast carcinoma. Clin Cancer Res. 2010; 16(15):3860–874. Lefevre L, Omeiri H, Drougat L, Hantel C, Giraud M, Val P, Rodriguez S, Perlemoine K, Blugeon C, Beuschlein F, et al. Combined transcriptome studies identify aff3 as a mediator of the oncogenic effects of β-catenin in adrenocortical carcinoma. Oncogenesis. 2015; 4(7):161. Rosner MH, Vigano MA, Ozato K, Timmons PM, Poirie F, Rigby PW, Staudt LM. A POU-domain transcription factor in early stem cells and germ cells of the mammalian embryo. Nature. 1990; 345(6277):686–92. Schöler HR, Ruppert S, Suzuki N, Chowdhury K, Gruss P. New type of POU domain in germ line-specific protein Oct-4. Nature. 1990; 344(6265):435–9. Niwa H, Miyazaki J-i, Smith AG. Quantitative expression of Oct-3/4 defines differentiation, dedifferentiation or self-renewal of ES cells. Nat Genet. 2000; 24(4):372–6. Matin MM, Walsh JR, Gokhale PJ, Draper JS, Bahrami AR, Morton I, Moore HD, Andrews PW. Specific knockdown of Oct4 and β2-microglobulin expression by RNA interference in human embryonic stem cells and embryonic carcinoma cells. Stem Cells. 2004; 22(5):659–68. Bock C, Kiskinis E, Verstappen G, Gu H, Boulting G, Smith ZD, Ziller M, Croft GF, Amoroso MW, Oakley DH, et al. Reference Maps of human ES and iPS cell variation enable high-throughput characterization of pluripotent cell lines. Cell. 2011; 144(3):439–52. Briggs JA, Sun J, Shepherd J, Ovchinnikov DA, Chung TL, Nayler SP, Kao LP, Morrow CA, Thakar NY, Soo SY, et al. Integration-free induced pluripotent stem cells model genetic and neural developmental features of down syndrome etiology. Stem Cells. 2013; 31(3):467–78. Chung HC, Lin RC, Logan GJ, Alexander IE, Sachdev PS, Sidhu KS. Human induced pluripotent stem cells derived under feeder-free conditions display unique cell cycle and DNA replication gene profiles. Stem Cells Dev. 2011; 21(2):206–16. 
Ebert AD, Yu J, Rose FF, Mattis VB, Lorson CL, Thomson JA, Svendsen CN. Induced pluripotent stem cells from a spinal muscular atrophy patient. Nature. 2009; 457(7227):277–80. Guenther MG, Frampton GM, Soldner F, Hockemeyer D, Mitalipova M, Jaenisch R, Young RA. Chromatin structure and gene expression programs of human embryonic and induced pluripotent stem cells. Cell Stem Cell. 2010; 7(2):249–57. Maherali N, Ahfeldt T, Rigamonti A, Utikal J, Cowan C, Hochedlinger K. A high-efficiency system for the generation and study of human induced pluripotent stem cells. Cell Stem Cell. 2008; 3(3):340–5. Marchetto MC, Carromeu C, Acab A, Yu D, Yeo GW, Mu Y, Chen G, Gage FH, Muotri AR. A model for neural development and treatment of Rett syndrome using human induced pluripotent stem cells. Cell. 2010; 143(4):527–39. Takahashi K, Tanabe K, Ohnuki M, Narita M, Sasaki A, Yamamoto M, Nakamura M, Sutou K, Osafune K, Yamanaka S. Induction of pluripotency in human somatic cells via a transient state resembling primitive streak-like mesendoderm. Nat Commun. 2014; 5:3678. Andrade LN, Nathanson JL, Yeo GW, Menck CFM, Muotri AR. Evidence for premature aging due to oxidative stress in iPSCs from Cockayne syndrome. Hum Mol Genet. 2012; 21(17):3825–4. Hu K, Yu J, Suknuntha K, Tian S, Montgomery K, Choi KD, Stewart R, Thomson JA, Slukvin II. Efficient generation of transgene-free induced pluripotent stem cells from normal and neoplastic bone marrow and cord blood mononuclear cells. Blood. 2011; 117(14):109–19. Kim D, Kim CH, Moon JI, Chung YG, Chang MY, Han BS, Ko S, Yang E, Cha KY, Lanza R, et al. Generation of human induced pluripotent stem cells by direct delivery of reprogramming proteins. Cell Stem Cell. 2009; 4(6):472. Loewer S, Cabili MN, Guttman M, Loh YH, Thomas K, Park IH, Garber M, Curran M, Onder T, Agarwal S, et al. Large intergenic non-coding RNA-RoR modulates reprogramming of human induced pluripotent stem cells. Nat Genet. 2010; 42(12):1113–7. 
Si-Tayeb K, Noto FK, Nagaoka M, Li J, Battle MA, Duris C, North PE, Dalton S, Duncan SA. Highly efficient generation of human hepatocyte-like cells from induced pluripotent stem cells. Hepatology. 2010; 51(1):297–305. Vitale AM, Matigian NA, Ravishankar S, Bellette B, Wood SA, Wolvetang EJ, Mackay-Sim A. Variability in the generation of induced pluripotent stem cells: importance for disease modeling. Stem Cells Transl Med. 2012; 1(9):641–50. Yu J, Hu K, Smuga-Otto K, Tian S, Stewart R, Slukvin II, Thomson JA. Human induced pluripotent stem cells free of vector and transgene sequences. Science. 2009; 324(5928):797–801.

The authors would like to thank Marie-Joe Brion, University of Queensland Diamantina Institute, for her careful proof-reading and suggestions. This project was partly funded by the ARC Discovery grant project DP130100777 and the Australian Cancer Research Foundation for the Diamantina Individualised Oncology Care Centre at the University of Queensland Diamantina Institute (FR), and the National Health and Medical Research Council (NHMRC) Career Development fellowship APP1087415 (KALC). The funding bodies did not play a role in the design of the study and collection, analysis, and interpretation of data.

The MicroArray Quality Control (MAQC) project data are available from the Gene Expression Omnibus (GEO) - GSE56457. The stem cell raw data are available from GEO and the pre-processed data are available from the Stemformatics (http://www.stemformatics.org) platform. The breast cancer data were obtained from the Molecular Taxonomy of Breast Cancer International Consortium project (METABRIC, [31], upon request) and from the Cancer Genome Atlas (TCGA, [32]). The MINT R scripts and functions are publicly available in the mixOmics R package (https://cran.r-project.org/package=mixOmics), with tutorials on http://www.mixOmics.org/mixMINT.
FR developed and implemented the MINT method, analysed the stem cell and breast cancer data, NM analysed the MAQC data, KALC supervised all statistical analyses. ES and SB contributed to the early stage of the project to set up the analysis plan. The manuscript was primarily written by FR with editorial advice from AE, NM, SB and KALC. All authors read and approved the final manuscript.

The University of Queensland Diamantina Institute, The University of Queensland, Translational Research Institute, Brisbane, 4102, QLD, Australia: Florian Rohart, Nicholas Matigian & Kim-Anh Lê Cao. Centre for Heart Lung Innovation, University of British Columbia, Vancouver, BC V6Z 1Y6, Canada: Aida Eslami. French agency for food, environmental and occupational health safety (Anses), Department of Epidemiology, Ploufragan, 22440, France: Stéphanie Bougeard.

Correspondence to Kim-Anh Lê Cao.

Additional file 1: Supplementary material. This pdf document contains supplementary methods and all supplementary Figures and Tables. Specifically, it provides the PLS algorithm, the extension of MINT in a regression framework, the application to the MAQC data (A vs B), the meta-analysis of the breast cancer data, the classification accuracy of the tested methods on the stem cells and breast cancer data, and details on the signature genes identified by MINT on the stem cells and breast cancer data. (PDF 4403 kb)

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

Multivariate Partial-least-square
EVOLUTION OF THE PRIMORDIAL MAGNETIC FIELD I. INITIAL MORPHOLOGY AND STRENGTH
Jung, Jae-Hun; Park, Chang-Bom (p. 109)
The morphology and strength of the primordial magnetic field, which is generated spontaneously in the early universe, are studied for three models: (1) inflation, (2) primordial magnetized bubble, and (3) primordial turbulence models. We calculate the power spectra of the magnetic field, which are scale-free and proportional to $k^{1.5}$, $k^{3{\sim}4}$, and $k^{2/3}$, respectively. The configurations of magnetic field having these power spectra are visualized. To constrain the present strength of the primordial magnetic field, we calculate the anisotropy of the microwave background radiation in a Bianchi type I universe with a globally homogeneous magnetic field. From the COBE limit on the quadrupole moment $({\delta}T/T)_{l=2}$, the present strength of horizon-scale magnetic fields $B_p$ is constrained to be less than $9{\times}10^{-8}$ G.

UBV($I_{KC}$) CCD PHOTOMETRY OF YOUNG OPEN CLUSTERS. I. IC1805
Sung, Hwan-Kyung; Lee, See-Woo (p. 119)
$UBVI_{KC}$ CCD photometry was performed in the central region (${\sim}20'{\times}20'$) of the extremely young open cluster IC1805. Member stars were selected in the (B-V)-(U-B) and (V-I)-(B-V) color-color planes. Applying recent stellar evolutionary models, we derived the age, age spread, and initial mass function (IMF) of the cluster. IC1805 was found to be extremely young ($t_{age}{\sim}1.5$ Myr) and to have a flat IMF with a slope of ${\Gamma}=-1.0{\pm}0.2$.

THE BRIGHT PART OF THE LUMINOSITY FUNCTION FOR HALO STARS
Lee, Sang-Gak (p. 139)
The bright part of the halo luminosity function is derived from a sample of 233 NLTT proper-motion stars, selected with a cutoff transverse velocity of 220 km/sec to remove contamination by disk stars, and corrected for the stars omitted from the sample by the selection criterion.
It is limited to the absolute magnitude range of $M_v=4-8$ but is based on the largest sample of halo stars to date. This luminosity function provides a number density of $2.3{\cdot}10^{-5}pc^{-3}$ and a mass density of $2.3{\cdot}10^{-5}M_{\odot}pc^{-3}$ for 4 < $M_v$ < 8 in the solar neighborhood. These are not sufficient for disk stability. The kinematics of the sample stars are < U > = -7 km/sec, < V > = -228 km/sec, and < W > = -8 km/sec, with (${\sigma_u},{\sigma_v},{\sigma_w}$) = (192, 84, 94) km/sec. Their average metallicity is [Fe/H] = $-1.7{\pm}0.8$. These are typical values for halo stars selected by the high cutoff velocity. We reanalyze the luminosity function for a sample of 57 LHS proper-motion stars. The newly derived luminosity function is consistent with the one derived from the NLTT halo stars but gives a somewhat smaller number density for the absolute magnitude range covered by the LF from NLTT stars. The luminosity function based on the LHS stars seems to have a dip in the magnitude range corresponding to the Wielen Dip, but it also seems to show some fluctuations due to the small number of sample stars.

PLASMA WAVE PROPAGATION IN THE BLACK HOLE IONOSPHERE
Park, Seok-Jae (p. 147)
An axisymmetric, stationary electrodynamic model of the central engine of an active galactic nucleus has been well formulated by Macdonald and Thorne. In this model the relativistic region around the central black hole must be filled with highly conducting plasma. We analyze plasma wave propagation in this region and discuss the results. We find that the ionosphere cannot exist immediately outside the event horizon of the black hole. Another interesting aspect is that certain resonance phenomena can occur in this case.

THE PHOTOGRAPHIC PHOTOMETRY OF THE GLOBULAR CLUSTER NGC 6752
Lee, Kang-Hwan; Lee, See-Woo; Jeon, Young-Beom (p. 153)
More than 22,300 stars in NGC 6752 were measured over the region 5' < r < 23' on B and V AAT plates.
Most of these are main-sequence (MS) stars, and about 130 blue horizontal branch (BHB) stars were detected. The C-M diagram of all measured stars shows gaps at $V{\approx}15.^{m}2$ and $16.^{m}2$ along the red giant branch (RGB), and their appearance, as shown by Lee & Cannon (1980), is found to be independent of the measured region. The bimodal distribution of BHB stars is confirmed again, and a wide gap shown by Lee & Cannon (1980) at $V{\approx}16^m$ is clearly seen for stars in the outer part (8' < r < 13') of the cluster. It is noted, however, that this gap is occupied by about a dozen BHB stars located in the inner region (5' < r < 8'). The number ratio of bright BHB stars (V < $15^m$) to faint BHB stars (V > $15^m$) decreases with increasing radial distance from the cluster center. The three faintest BHB stars were found; two of these ($V{\approx}18.^{m}5$) are located in the inner region of $r{\approx}6'$, and the faintest one ($V{\approx}19.^{m}3$) is located in the outer part at $r{\approx}13'$. The bluest star, with (B - V) $\approx$ -0.5 at $V{\approx}17.^{m}2$, was also found, but it is located in the outer part at $r{\approx}13'$ in the NE region. Therefore, the membership of the faintest BHB star and the bluest star is suspect. The luminosity function (LF) and mass function (MF) for NGC 6752 were derived for MS stars. The LF for stars of $M_v\;<\;6^m$ in the outer part (r > 8') is consistent with that derived by Penny & Dickens (1986).

IS THE PEGASUS DWARF GALAXY A MEMBER OF THE LOCAL GROUP?
Lee, Myung-Gyoon (p. 169)
Deep V I CCD photometry of the Pegasus dwarf irregular galaxy shows that the tip of the red giant branch (RGB) is located at I = $21.15{\pm}0.10$ mag and (V - I) = $1.58{\pm}0.03$. Using the I magnitude of the tip of the RGB (TRGB), the distance modulus of the Pegasus galaxy is estimated to be $(m\;-\;M)_o\;=\;25.13{\pm}0.11$ mag (corresponding to a distance of d = $1060{\pm}50$ kpc).
This result is in good agreement with the recent distance estimate based on the TRGB method by Aparicio [1994, ApJ, 437, L27], $(m\;-\;M)_o$ = 24.9 (d = 950 kpc). However, our distance estimate is much smaller than that based on the Cepheid variable candidates by Hoessel et al. [1990, AJ, 100, 1151], $(m\;-\;M)_o\;=\;26.22{\pm}0.20$ mag (d = $1750{\pm}160$ kpc). The color-magnitude diagram illustrates that the Cepheid candidates used by Hoessel et al. are not located in the Cepheid instability strip but in the upper part of the giant branch. This result shows that the Cepheid candidates studied by Hoessel et al. are probably not Cepheids but other types of variable stars. Taking the average of our distance estimate and Aparicio's, the distance to the Pegasus galaxy is d = $1000{\pm}80$ kpc. Considering the distance and velocity of the Pegasus galaxy with respect to the center of the Local Group, we conclude that the Pegasus galaxy is probably a member of the Local Group.

UBV CCD PHOTOMETRY OF THE LMC DOUBLE CLUSTER NGC 1850
We present UBV CCD photometry of the double cluster NGC 1850 located at the NW edge of the bar of the Large Magellanic Cloud. The color-magnitude diagram shows that NGC 1850 has a prominent population of massive core-He-burning stars, which is incomparably richer than in any other known star cluster. The reddening is estimated from the (U-B) - (B-V) diagram to be E(B - V) = $0.15{\pm}0.05$. We have estimated the ages of NGC 1850 and of a very compact blue star cluster (NGC 1850A) located ${\sim}30''$ west of NGC 1850 using isochrones based on convective overshooting models: $80{\sim}10$ Myr and $5{\sim}2$ Myr, respectively. Several lines of evidence suggest that it is probably the compact cluster NGC 1850A that is responsible for the arc-shaped nebulosity (Henize N 103B) surrounding the east side of NGC 1850.

CCD PHOTOMETRY OF A DELTA SCUTI STAR IN AN OPEN CLUSTER II.
BT CNC IN THE PRAESEPE
Kim, Seung-Lee; Lee, See-Woo (p. 197)
Real-time CCD differential photometry was performed for BT Cnc in the Praesepe cluster from February to March 1994. A total of 885 new differential V magnitudes were obtained over thirteen nights. From the frequency analysis, we have detected two distinct pulsational frequencies, $f_1$ = 9.7783 c/d and $f_2$ = 7.0153 c/d. The first frequency is nearly equal to the previous result (Breger 1980), but the second one is much different. Our reanalysis of the previous data obtained by Guerrero et al. (1979) indicates that the previous result of $f_s$ = 5.95 c/d might be uncertain; it was not detected in the power spectrum. It also turns out that our second frequency could not be fitted to the previous data, and the reanalyzed frequency ($f_2$ = 7.8813 c/d) of the previous data fitted our data poorly. Therefore, we suggest that the second frequency, which might be newly excited in a nonradial mode, has changed over the last eighteen years.

TRIAXIAL BULGES IN BARRED GALAXIES
Ann, Hong-Bae (p. 209)
We have examined the bulge morphology of 104 bright barred galaxies using V-band surface photometry based on the Kiso Schmidt plates. By measuring the bulge ellipticity and bulge-disk misalignment, we have classified bulges into four morphological types: sphere, oblate spheroid, triaxial ellipsoid, and pseudo-triaxial ellipsoid. About half of the observed galaxies are found to have triaxial bulges with a mean ellipticity of 0.24. They are distributed uniformly along the Hubble sequence.

A MULTI-DIMENSIONAL MAGNETOHYDRODYNAMIC CODE IN CYLINDRICAL GEOMETRY
Ryu, Dong-Su; Yun, Hong-Sik; Choe, Seung-Urn (p. 223)
We describe the implementation of a multi-dimensional numerical code to solve the equations of ideal magnetohydrodynamics (MHD) in cylindrical geometry. It is based on an explicit finite difference scheme on an Eulerian grid, called the Total Variation Diminishing (TVD) scheme, which is a second-order-accurate extension of the Roe-type upwind scheme.
Multiple spatial dimensions are treated through a Strang-type operator splitting. Curvature and source terms are included in a way that ensures the formal accuracy of the code to be second order. The constraint of a divergence-free magnetic field is enforced exactly by adding a correction, which involves solving a Poisson equation. The Fourier Analysis and Cyclic Reduction (FACR) method is employed to solve it. Results from a set of tests show that the code handles flows in cylindrical geometry successfully and resolves strong shocks within two to four computational cells. The advantages and limitations of the code are discussed.

DYNAMICAL CHARACTERISTICS OF SUNSPOT CHROMOSPHERES II. ANALYSIS OF CA II H, K AND ${\lambda}8498$ LINES OF A SUNSPOT (SPO 5007) FOR OSCILLATORY MOTIONS
Yoon, Tae-Sam; Yun, Hong-Sik; Kim, Jeong-Hoon (p. 245)
We have analyzed the time series of Ca II H, K and ${\lambda}8498$ line profiles taken of a sunspot (SPO 5007) with the Echelle spectrograph attached to the Vacuum Tower Telescope at Sacramento Peak Solar Observatory. Each set of spectra was taken simultaneously for 20 minutes at a time interval of 30 seconds. A total of 40 photographic films for each line was scanned with a PDS at Korea Astronomy Observatory. The central peak intensity of Ca II H ($I_{max}$), the intensity measured at ${\Delta}{\lambda}=-0.1{\AA}$ from the line center of ${\lambda}8498$ ($I_{{\lambda}8498}$), the radial velocity ($V_r$), and the Doppler width (${\Delta}{\lambda}_D$) estimated from Ca II H have been measured to study the dynamical behavior of the sunspot chromosphere. Fourier analysis has been carried out for these measured quantities. Our main results are as follows: (1) We have confirmed that the 3-minute oscillation is dominant throughout the umbra. The period of oscillations jumps from 180 sec in the umbra to 500 to 1000 sec in the penumbra.
(2) The nonlinear character of the umbral oscillation is noted from the observed sawtooth-shaped radial velocity fluctuations, with amplitudes reaching up to $5{\sim}6$ km/sec. (3) The spatial distribution of the maximum powers shows that the power of the oscillations is stronger in the umbra than in the penumbra. (4) The spatial distributions of the time-averaged < $I_{max}$ > and < $V_r$ > across the spot are found to be nearly axially symmetric, implying that the physical quantities derived from the line profiles of Ca II H and ${\lambda}8498$ are inherently associated with the geometry of the magnetic field distribution of the spot. (5) The central peaks of the Ca II H emission core lead the upward motions of the umbral atmosphere by $90^{\circ}$, while no phase delay is found between the intensities $I_{max}$ and $I_{{\lambda}8498}$, suggesting that the umbral oscillation consists of standing waves.

KINEMATICS AND CHEMISTRY OF THE S140/L1204 MOLECULAR COMPLEX
Park, Yong-Sun; Minh, Young-Chul (p. 255)
The HII region S140 and the associated molecular cloud L1204 have been observed in 10 molecular transitions, CO (1-0), $^{13}CO$ (1-0), $C^{18}O$ (1-0), CS (2-1), $HCO^+$ (1-0), HCN (1-0), SO (${2_2}-{1_1}$), $SO_2(2_{20}-3_{13})$, OCS (8-7), and $HNCO\;(4_{04}-3_{03})$, with ${\sim}50"$ angular resolution. More than 7,000 spectra were obtained in total. The morphology of this region shows a massive fragment (the S140 core) and an extended envelope to the northeast. Several gas condensations have been identified in the envelope, having masses of ${\sim}10^{3}M_{\odot}$ and gas number densities of ${\lesssim}10^{4}cm^{-3}$ to $3{\times}10^{5}cm^{-3}$ in their cores. The column densities of the observed molecular species toward the S140 core appear to be typical warm-cloud abundances. It seems that the S140 core and L1204 have been swept up by an expanding shell called the Cepheus bubble.
The large value of $L_{IR}(\text{embedded stars})/M_{cloud}\;{\sim}\;5\;L_{\odot}/M_{\odot}$ for the S140 core may suggest that star formation has been stimulated by the HII region, but the shock velocity and the pressure of the region seem to hint at spontaneous star formation through self-gravity.
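The Strang-type operator splitting named in the MHD code abstract above is a general way to build a multi-dimensional update from one-dimensional sweeps. The sketch below is illustrative only — it is not the authors' code, and it uses a first-order upwind advection step rather than their second-order TVD scheme — but it shows the split itself: a half step in x, a full step in y, then another half step in x, which keeps the time integration second-order accurate.

```python
import numpy as np

def advect_x(u, c, dt, dx):
    # First-order upwind update in x for du/dt + c du/dx = 0
    # (assumes c > 0 and periodic boundaries).
    return u - c * dt / dx * (u - np.roll(u, 1, axis=0))

def advect_y(u, c, dt, dy):
    # The same upwind update applied along y.
    return u - c * dt / dy * (u - np.roll(u, 1, axis=1))

def strang_step(u, c, dt, dx, dy):
    """One Strang-split step: x half-step, y full step, x half-step."""
    u = advect_x(u, c, dt / 2.0, dx)
    u = advect_y(u, c, dt, dy)
    u = advect_x(u, c, dt / 2.0, dx)
    return u

# A square pulse advected diagonally; on a periodic grid the split upwind
# scheme conserves the total "mass" of the field exactly.
u = np.zeros((32, 32))
u[8:12, 8:12] = 1.0
total_before = u.sum()
for _ in range(10):
    u = strang_step(u, c=1.0, dt=0.4, dx=1.0, dy=1.0)
print(abs(u.sum() - total_before) < 1e-10)  # True
```

Dimensional splitting of this kind lets each 1-D sweep reuse a well-tested 1-D solver (here a trivial upwind step; in the paper, a TVD/Roe scheme) while retaining second-order accuracy in time.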
Use of agro-industrial residue from the canned pineapple industry for polyhydroxybutyrate production by Cupriavidus necator strain A-04
Vibhavee Sukruansuwan and Suchada Chanprateep Napathorn
Biotechnology for Biofuels 2018, 11:202. Accepted: 16 July 2018

Background
Pineapple is the third most important tropical fruit produced worldwide; approximately 24.8 million tons are produced annually throughout the world, including in Thailand, the fourth largest pineapple producer in the world. Pineapple wastes (peel and core) are generated in large amounts, equal to approximately 59.36% of the raw material. In general, the anaerobic digestion of pineapple wastes is associated with a high biochemical oxygen demand and a high chemical oxygen demand, and this process generates methane and can cause greenhouse gas emissions if good waste management practices are not enforced. This study aims to fill this research gap by examining the feasibility of using pineapple wastes, an available domestic raw material, for the high-value-added production of biodegradable polyhydroxybutyrate (PHB).

Results
The objective of this study was to use agro-industrial residue from the canned pineapple industry for biodegradable PHB production. The results indicated that pretreatment with an alkaline reagent is not necessary. Pineapple core sieved to a −20/+40 mesh particle size and hydrolyzed with 1.5% (v/v) H2SO4 produced the highest concentration of fermentable sugars, equal to 0.81 g/g dry pineapple core, whereas pineapple core with a +20 mesh particle size hydrolyzed with 1.5% (v/v) H3PO4 yielded the highest concentration of PHB substrates (57.2 ± 1.0 g/L). The production of PHB from core hydrolysate reached a PHB content of 35.6 ± 0.1% (w/w) and a cell dry weight of 5.88 ± 0.25 g/L. The use of crude aqueous extract (CAE) of pineapple waste products (peel and core) as a culture medium was then investigated.
CAE showed very promising results, producing the highest PHB content of 60.0 ± 0.5% (w/w), a cell dry weight of 13.6 ± 0.2 g/L, a yield (\(Y_{P/S}\)) of 0.45 g PHB/g PHB substrate, and a productivity of 0.160 g/(L h).

Conclusions
This study demonstrated the feasibility of utilizing pineapple waste products from the canned pineapple industry as lignocellulosic feedstocks for PHB production. C. necator strain A-04 was able to grow on various sugars and to tolerate levulinic acid and 5-hydroxymethylfurfural, so a detoxification step was not required prior to the conversion of cellulose hydrolysate to PHB. In addition to acid hydrolysis, CAE was identified as a potential carbon source and offers a novel route for the low-cost production of PHB from a realistic lignocellulosic biomass feedstock.

Keywords: Polyhydroxybutyrate; Agro-industrial residue; Canned pineapple industry; Crude aqueous extract; Low-cost PHB production

Background
The current trends and challenges regarding sustainability in industrial biotechnology have stimulated the development of renewable feedstocks that are not sources of food or feed [1]. This transition from simple reducing sugars to alternative renewable raw materials, which have complex structures and require additional processing steps, encouraged us to develop simple, economical, and effective processes for the conversion of lignocellulosic biomass, particularly the agro-industrial residue that is abundant in the Southeast Asian region, to fermentable sugars. Among the various green products currently available, bioplastics have received special attention from academic and industrial researchers in recent decades. Polyhydroxyalkanoates (PHAs) are microbial polyesters that are synthesized and accumulated by a wide variety of microorganisms as an internal energy storage source [2].
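The reported CAE fermentation figures are internally consistent, as a quick arithmetic cross-check shows. Note that the batch duration derived below is inferred from the titer and productivity, not stated in the abstract:

```python
# Reported CAE fermentation figures (from the abstract).
cdw = 13.6            # cell dry weight, g/L
phb_content = 0.60    # PHB fraction of dry cells, 60% (w/w)
yield_ps = 0.45       # g PHB per g PHB substrate (glucose + fructose)
productivity = 0.160  # g PHB/(L h)

phb_titer = cdw * phb_content            # g PHB per litre of broth
substrate_used = phb_titer / yield_ps    # g/L of PHB substrate consumed
implied_time = phb_titer / productivity  # inferred batch duration, h

print(f"PHB titer          : {phb_titer:.2f} g/L")   # 8.16 g/L
print(f"Substrate consumed : {substrate_used:.1f} g/L")
print(f"Implied duration   : {implied_time:.0f} h")  # 51 h
```

The implied ~51 h batch duration and ~18 g/L substrate consumption are plausible given the 27.26 g/L of PHB substrates available in CAE.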
The major drawbacks to the commercialization of PHAs are their high cost, their precursors (mainly high-purity substrates), the long cultivation times, and the extraction and purification process, which is more expensive than those used for conventional polymers. Even polylactide (PLA), a well-known biodegradable polymer, is synthesized in a hybrid chemical process that is costly in terms of extraction and purification. The increasing demand for alternative and renewable raw materials and for the use of biodegradable polymers, along with the awareness and promotion of green procurement policies, is expected to benefit the market growth of PHAs. Lignocellulosic biomass, the most abundant biomass in nature, has become an attractive alternative to fossil resources for the production of biofuels and various biochemicals, including biodegradable PHAs. Cellulosic resources include agricultural and forest residues as well as municipal and industrial waste products; these low-value renewable materials offer a feasible option for the production of high-value-added products [3]. Several studies have investigated the production of PHAs from cellulose biomass hydrolysate by a variety of microorganisms, including recombinant Escherichia coli [4–12]. The direct use of hydrolysates of hemicelluloses as a mixture of sugars for PHA production without the removal of inhibitors has also been explored. For instance, Yu and Stahl found that Cupriavidus necator can accumulate PHA to 57% of dry cell weight (DCW) from bagasse hydrolysate in the presence of low inhibitor concentrations [6]. Most recently, Dietrich et al.
assessed softwood hemicellulose hydrolysate (mixture of glucose, mannose, galactose, xylose, and arabinose) and the potentially inhibitory lignocellulose degradation products [acetic acid, 5-hydroxymethylfurfural (5-HMF), furfural (FAL), and vanillin] for polyhydroxybutyrate (PHB) production by Paraburkholderia sacchari IPT 101, and found that this bacterial strain converted all sugars simultaneously to achieve a maximum PHB concentration of 5.72 g/L and 80.5% (w/w) PHB after 51 h [13]. However, the utilization of hemicellulose hydrolysates for PHB production at an industrial scale necessitates high productivity. In addition, the direct utilization of lignocellulosic waste from the canned pineapple industry as a hemicellulose hydrolysate (as a mixture of sugars) for PHA production has not been investigated. Pineapple (Ananas comosus L. Merr.) is one of the most popular tropical fruits consumed worldwide and an economically significant plant. Approximately 24.8 million tons are produced annually throughout the world (https://www.worldatlas.com/articles/top-pineapple-producing-countries.html). According to the Department of Agriculture, Ministry of Agriculture and Cooperatives, Thailand is ranked first in the production and exportation of pineapple, with an approximately 50% share of the global market [14]. In Thailand, pineapple is exported in various forms to the international market, including canned, raw juice, and various frozen and dried products. In 2016, the quantity of exported canned pineapple was 482,640 tons, which is worth US$615.10 million [15]. The processing of canned products generates high amounts of agro-industrial waste, and pineapple waste products include 44.36% peel and 15% core with respect to the total raw materials [16] (Fig. 1). Thailand has 75 pineapple processing factories, and these generate approximately 200 tons of agro-industrial waste per day [17]. 
Pineapple wastes are normally used as animal feed or disposed of in landfills, where the wastes might undergo anaerobic digestion, resulting in methane leakage [18]. Leeben et al. analyzed the performance of pineapple processing factories with regard to sustainable development, including waste management [19]. These researchers reported that the pineapple processing factories of Thailand generate large quantities of solid waste and wastewater with varying organic content, depending on the production capacity, type of technology used, and factory size [19]. Modern technology is used in more than 70% of the production lines in large- and medium-sized factories but in only 40–50% of the production process in small factories; therefore, small factories produce more waste than large- and medium-sized factories [19]. Large factories manage their peel and core waste to produce crude aqueous extract (CAE), which is filtered and evaporated to obtain pineapple juice concentrate. The final pulp waste is sold as animal feed; however, pulp waste is not considered attractive as an animal feed because of its high fiber content and soluble carbohydrates with a low protein content [20]. Small factories might not have a sufficient budget to invest in the production of these by-products [21].

Fig. 1 Flowchart of the pineapple waste products generated during industrial pineapple canning

To date, the value-added processing and utilization of pineapple wastes could serve as a source of valuable compounds, such as sucrose, glucose, fructose, cellulose, fiber, bromelain, phenolics, and cellulose nanocrystals [22, 23]. Researchers in Thailand have focused on the production of fertilizer, the improvement of calcareous soils [24, 25], animal feed [26, 27], decomposable pots [17], and plastic reinforcement [28, 29].
The bioeconomy has recently become one of the Thai government's target industries and forms part of the five future industries, which include bioplastics [30]. Thus, this study aimed to fill this research gap by examining the feasibility of using pineapple wastes, an available domestic raw material, for value-added production. To this end, the study focused on the utilization of canned pineapple waste products for PHB production through the development of a rapid, low-cost, and high-yield hydrolysis process.

Compositions of agro-industrial residues
The compositions of pineapple peel, pineapple core, and CAE are presented in Table 1. The major components of pineapple core were found to be 29.5% (w/v) holocellulose [17.2% (w/v) α-cellulose and 12.3% (w/v) hemicellulose] and 1.8% (w/v) lignin. The water content was 89.2% (w/v). Pineapple peel consisted of 36.8% (w/v) holocellulose [22.9% (w/v) α-cellulose and 13.9% (w/v) hemicellulose] and 5.1% (w/v) lignin. The water content was 86.5% (w/v). The sugar composition of CAE was analyzed by HPLC and found to consist of 20.14 g/L sucrose, 24.48 g/L glucose, 2.78 g/L fructose, and 0.30 g/L galactose. The total fermentable sugar concentration in CAE was 47.35 g/L, and this amount included a combined concentration of the PHB substrates glucose and fructose (SPHB) of 27.26 g/L.
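The SPHB figure can be verified directly from the HPLC values above. Note that the simple sum of the four quantified sugars, 47.70 g/L, is slightly above the reported total of 47.35 g/L, presumably a rounding or quantification difference in the original analysis:

```python
# HPLC sugar composition of CAE (g/L), values from the text.
cae_sugars = {"sucrose": 20.14, "glucose": 24.48,
              "fructose": 2.78, "galactose": 0.30}

# PHB substrates for C. necator strain A-04 are glucose and fructose.
s_phb = cae_sugars["glucose"] + cae_sugars["fructose"]
total = sum(cae_sugars.values())

print(f"S_PHB        = {s_phb:.2f} g/L")  # 27.26 g/L, as reported
print(f"Sum of peaks = {total:.2f} g/L")  # 47.70 g/L (text reports 47.35 g/L)
```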
Table 1 Characterization of agro-industrial residues. For pineapple peel and core, the table lists the lignocellulosic biomass compositions (%): α-cellulose, β-cellulose, γ-cellulose, holocellulose, lignin, and benzene extractives; for CAE, it lists the sugar content (g/L) of sucrose, glucose, fructose, and galactose together with SPHB (g/L). SPHB: PHB substrates (glucose and fructose). (a) The chemical compositions of pineapple core and pineapple peel used in this study were determined according to the Technical Association of the Pulp and Paper Industry (TAPPI) standard methods. (b) The sugar compositions of CAE were analyzed by HPLC. Selected values are quoted in the text.

The lignocellulosic biomass and sugar compositions of CAE reported in this study, however, were somewhat different from those reported previously [31–33]. Biomass and sugar compositions vary depending on the plant age, growth conditions, soil conditions, geographic location, climate, and other environmental factors, such as temperature, stress, and humidity [34].

Effects of different pretreatment solutions
Three pretreatment solutions [NaOH, Ca(OH)2, and water] at concentrations of 0.0 (water), 0.25, 0.5, 1.0, and 2.0% (w/v) were systematically investigated (Fig. 2). Pineapple peel samples pretreated with water and subjected to 0.5% (v/v) H2SO4 hydrolysis yielded the highest amount of total reducing sugars (17.8 ± 0.2 g/L) (P < 0.05). Interestingly, as shown in Fig. 2a, increasing the concentration of the NaOH pretreatment solution to 0.25, 0.5, 1, and 2% (w/v) decreased the reducing sugars in the pineapple peel hydrolysate (PPH) to 15.6 ± 0.5, 14.7 ± 0.1, 10.9 ± 0.2, and 7.5 ± 0.8 g/L, respectively. Similarly, the reducing sugars in PPH decreased to 14.7 ± 0.1, 14.2 ± 0.8, 13.7 ± 0.2, and 8.6 ± 0.4 g/L, respectively, as the concentration of the Ca(OH)2 pretreatment solution was increased to 0.25, 0.5, 1, and 2% (w/v). In general, the alkaline pretreatment of lignocellulosic biomass degrades the lignin matrix, making cellulose and hemicellulose available for enzymatic degradation [35].
The effectiveness of this process, however, depends on the lignin content of the lignocellulosic biomass; specifically, the effectiveness obtained with hardwood with a low lignin content is higher than that with softwood with a high lignin content [36]. As shown in Table 1, the lignin contents of pineapple peel and core were only 5.12 and 1.82% (w/v), respectively. Thus, these materials do not need to be pretreated with an alkaline solution. Our results are in accordance with those reported by Jeetah et al., who found that alkaline pretreatment (2% (w/v) NaOH) of pineapple wastes was not effective in terms of the reducing sugars obtained compared with an un-pretreated sample [37]. In this study, pineapple peel and core are considered softwood-like materials with a low lignin content. We propose that the observed decrease in reducing sugars was a consequence of the extensive hydrolysis of cellulose and sugars by the strongly alkaline solution. It can be concluded that, during alkaline pretreatment, some portions of cellulose and hemicellulose are degraded and removed from the biomass by the action of hydroxide ions [38]. Because alkaline pretreatment steps were found to be unnecessary in this study, the total costs of the overall PHA production process can be reduced. The costs of the alkaline solutions Ca(OH)2 and NaOH (50% liquid) are ∼ $70 and ∼ $325/ton, respectively. In addition, the absence of an alkaline pretreatment step reduces the water required for post-pretreatment biomass washing; consequently, the waste treatment costs, including capital investment in equipment and manufacturing costs, could be reduced [39]. Pineapple peel and core are, therefore, very promising agro-industrial residues for conversion to fermentable sugars.

Fig. 2 Effects of types and concentrations of alkaline pretreatments on the hydrolysis of (a) pineapple peel and (b) pineapple core. Open bars, Ca(OH)2; solid bars, NaOH. The error bars represent the standard deviations (n = 3).
An asterisk indicates a significant difference (P < 0.05).

The PPH obtained after water treatment was preliminarily evaluated for PHB production by C. necator strain A-04. Over a 72-h period, the cell dry weight (CDW) continually increased to 9.68 g/L, and at the end of this incubation, the PHB content reached 17.25% (w/w), indicating that C. necator strain A-04 can utilize PPH as a carbon source for growth and PHB production.

Effect of particle size on the yield of sugars and inhibitors
As shown in Table 2, pineapple core with a −20/+40 mesh particle size yielded the highest fermentable sugar concentration, 81.0 ± 0.5 g/L; however, these sugars included both PHB substrates (glucose and fructose; 33.6 ± 0.55 g/L) and non-PHB substrates (xylose, arabinose, and galactose; 47.4 ± 0.5 g/L). Pineapple peel with a −20/+40 mesh particle size also gave the highest xylose concentration, 24.1 ± 0.4 g/L. For both pineapple peel and core, the +20 mesh particle size yielded the highest PHB substrate concentrations, namely, 43.9 ± 0.4 and 36.9 ± 0.4 g/L, respectively, whereas the −20/+40 mesh particle size produced the highest concentrations of xylose, which is not a PHB substrate.

Table 2 Effect of particle size on the yield of sugars. For each particle size (+20 mesh, i.e., > 0.841 mm; −20/+40 mesh; and −40 mesh) and each acid [1.5% (v/v) H2SO4 and 1.5% (v/v) H3PO4], the table lists the concentrations (g/L) of the individual sugars (glucose, fructose, xylose, arabinose, and galactose), the total fermentable sugars (FS), and the yields \(Y_{\text{FS/PW}}\) and \(Y_{S_{\text{PHB}}/\text{PW}}\) (g/g). Ten grams of sample of each of the three particle sizes were hydrolyzed with 1.5% (v/v) H2SO4 or 1.5% (v/v) H3PO4 at 121 °C for 15 min. SPHB: PHB substrates; FS: fermentable sugars; PW: pineapple waste. Selected values are quoted in the text.

Several reports have stated that reducing the particle size of lignocellulosic biomass improves its digestibility by increasing the total surface area and eliminating mass and heat transfer limitations during hydrolysis reactions [40–43].
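The yield columns of Table 2 follow from the measured concentrations once the liquid-to-solid ratio is fixed. The sketch below assumes a 100 mL hydrolysate volume for the 10 g samples (a 10:1 ratio, which is not stated explicitly here but reproduces the 0.81 g/g figure quoted in the abstract):

```python
# Yields per gram of dry biomass implied by the reported concentrations.
sample_mass = 10.0     # g dry pineapple core (stated in the Table 2 footnote)
hydrolysate_vol = 0.1  # L; assumed 10:1 liquid-to-solid ratio

total_fs = 81.0  # g/L total fermentable sugars, -20/+40 mesh core, 1.5% H2SO4
s_phb = 33.6     # g/L glucose + fructose in the same hydrolysate

y_fs_pw = total_fs * hydrolysate_vol / sample_mass  # g FS per g pineapple waste
y_sphb_pw = s_phb * hydrolysate_vol / sample_mass   # g PHB substrate per g waste

print(f"Y_FS/PW   = {y_fs_pw:.2f} g/g")    # 0.81 g/g, matching the abstract
print(f"Y_SPHB/PW = {y_sphb_pw:.3f} g/g")  # 0.336 g/g
```

The same conversion applies to any row of Table 2: multiply the sugar concentration by the hydrolysate volume and divide by the dry sample mass.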
However, our results support the findings reported by Harun et al., who found that the hydrolysis of rice straw with ammonia fiber expansion shows reductions in sugar conversion with increases in the size of the milled and cut substrates [44]. The larger cut rice straw particles (5 cm) demonstrated significantly higher sugar conversion than the small particles [44]. Thus, the influence of particle size on biomass digestibility has some limits. We observed that the smallest particles (− 40 mesh, < 0.420 mm) were not fully immersed in the pretreatment solution but rather floated on the surface, which limited their digestibility and resulted in the lowest fermentable sugar and PHB substrate concentrations in all cases tested. Because the goal of the experiment was to produce PHB by C. necator strain A-04, the PHB substrates glucose and fructose were more important than the total fermentable sugars. Therefore, the + 20 mesh particle size, which yielded the highest concentration of PHB substrates, was selected for the next experiment. Effects of types and concentrations of acids on the yield of sugars and inhibitors Hydrolysis of pineapple core (Fig. 3) and peel (Fig. 4) with H2SO4 and H3PO4 was investigated using concentrations of 1, 1.5, 2, 2.5, and 3% (v/v). Figure 3a shows the sugar composition resulting from the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H2SO4, and Fig. 3b shows the sugar composition obtained from the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H3PO4. In addition, Fig. 3c illustrates the inhibitor composition resulting from the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H2SO4, and Fig. 3d presents the inhibitor composition obtained from the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H3PO4. For comparison, Fig. 
4a, b presents the sugar composition obtained by the hydrolysis of pineapple peel with 1, 1.5, 2, 2.5, and 3% (v/v) H2SO4 and H3PO4, respectively, and Fig. 4c, d shows the corresponding inhibitor compositions. Glucose was the major product in all cases. The H2SO4 hydrolysis of pineapple peel (Fig. 4a) produced PHB substrate levels that were clearly higher than those obtained from pineapple core (Fig. 3a) (P < 0.05). This result can be attributed to the higher level of α-cellulose in pineapple peel than in pineapple core. In contrast, the H2SO4 hydrolysis of pineapple core (Fig. 3a) produced a xylose level higher than that obtained with pineapple peel (Fig. 4a), consistent with the high hemicellulose content of pineapple core (Table 1). Interestingly, the H3PO4 hydrolysis of pineapple core (Fig. 3b) yielded a fermentable sugar content of 69.62 ± 0.63 g/L and the highest PHB substrate content, 57.22 ± 1.07 g/L (P < 0.05). Thus, the type of acid strongly affects the types of sugar released from the lignocellulosic biomass.
Effects of types and concentrations of acid on a the resulting sugar composition after the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H2SO4; b the resulting sugar composition after the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H3PO4; c the resulting inhibitor composition after the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H2SO4; and d the resulting inhibitor composition after the hydrolysis of pineapple core with 1, 1.5, 2, 2.5, and 3% (v/v) H3PO4. The error bars represent the standard deviations (n = 3).
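All hydrolyses here share a fixed autoclave cycle (121 °C, 15 min), so acid type and loading are the only severity levers. The pretreatment literature often condenses such conditions into a single severity value; this metric is not used by the authors, but the sketch below applies the standard Overend and Chornet formulation, with placeholder pH values for the acid solutions:

```python
import math

def severity_factor(t_min: float, temp_c: float) -> float:
    """log R0 = log10( t * exp((T - 100) / 14.75) ), Overend-Chornet form."""
    return math.log10(t_min * math.exp((temp_c - 100.0) / 14.75))

def combined_severity(t_min: float, temp_c: float, ph: float) -> float:
    """Combined severity folds in acid strength: CS = log R0 - pH."""
    return severity_factor(t_min, temp_c) - ph

log_r0 = severity_factor(15.0, 121.0)   # every hydrolysis here: 121 C, 15 min
print(f"log R0 = {log_r0:.2f}")
for ph in (0.9, 0.7, 0.5):              # hypothetical pH of 1-3% (v/v) acid
    print(f"pH {ph}: CS = {combined_severity(15.0, 121.0, ph):.2f}")
```

Because time and temperature are fixed, any difference in combined severity between the acid treatments comes entirely from the pH term, which is one way to rationalize why acid type and concentration dominate the sugar and inhibitor profiles.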
An asterisk indicates a significant difference (P < 0.05)
Effects of types and concentrations of acid on a the resulting sugar composition after the hydrolysis of pineapple peel with 1, 1.5, 2, 2.5, and 3% (v/v) H2SO4; b the resulting sugar composition after the hydrolysis of pineapple peel with 1, 1.5, 2, 2.5, and 3% (v/v) H3PO4; c the resulting inhibitor composition after the hydrolysis of pineapple peel with 1, 1.5, 2, 2.5, and 3% (v/v) H2SO4; and d the resulting inhibitor composition after the hydrolysis of pineapple peel with 1, 1.5, 2, 2.5, and 3% (v/v) H3PO4. The error bars represent the standard deviations (n = 3). An asterisk indicates a significant difference (P < 0.05)
The effects of acid type and concentration on the inhibitors produced are also summarized in Figs. 3c, d and 4c, d. Based on these results, the H2SO4 (Fig. 3c) and H3PO4 (Fig. 3d) hydrolysis of pineapple core produced the highest concentrations of inhibitors. Specifically, the highest concentration of 5-HMF, 29.2 g/L, was produced by the 1.5% (v/v) H2SO4 hydrolysis of pineapple core (Fig. 3c), whereas the hydrolysis of pineapple core with 3.0% (v/v) H3PO4 produced 21.1 g/L 5-HMF (Fig. 3d). One reason for this result is that pineapple core contains more hemicellulose than pineapple peel (Table 1), and the sugars released from hemicellulose are subsequently degraded to inhibitors. Accordingly, the acid hydrolysis of pineapple peel produced lower amounts of levulinic acid (LA), 5-HMF, and FAL. Xylose and arabinose are known to dehydrate to FAL under acidic conditions, whereas glucose and galactose dehydrate to 5-HMF, which can be further hydrolyzed to LA and formic acid [45]. Acidic conditions thus lead to the formation of FAL and, to a lesser extent, 5-HMF and LA [46]. In this study, we intentionally omitted the step in which inhibitors are removed from the hydrolysate, because this step reduces the PHB substrate concentration and increases the cost of the large-scale reaction process.
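The dehydration chemistry cited above [45, 46] fixes an upper bound on inhibitor formation: a pentose (150.13 g/mol) can yield at most one furfural (96.08 g/mol), and a hexose (180.16 g/mol) at most one 5-HMF (126.11 g/mol), each reaction releasing three water molecules. A sketch of these theoretical ceilings; the example sugar concentrations are hypothetical:

```python
# Maximum theoretical inhibitor yields from sugar dehydration (mass basis).
# Pentose (C5H10O5, 150.13 g/mol) -> furfural (C5H4O2, 96.08 g/mol) + 3 H2O
# Hexose  (C6H12O6, 180.16 g/mol) -> 5-HMF   (C6H6O3, 126.11 g/mol) + 3 H2O
MW = {"pentose": 150.13, "furfural": 96.08, "hexose": 180.16, "hmf": 126.11}

def max_furfural(pentoses_g_per_l: float) -> float:
    """Ceiling on FAL (g/L) if all xylose + arabinose dehydrated."""
    return pentoses_g_per_l * MW["furfural"] / MW["pentose"]

def max_hmf(hexoses_g_per_l: float) -> float:
    """Ceiling on 5-HMF (g/L) if all glucose + galactose dehydrated."""
    return hexoses_g_per_l * MW["hmf"] / MW["hexose"]

# Example: a hypothetical hydrolysate with 24 g/L pentoses and 30 g/L hexoses.
print(f"max FAL   = {max_furfural(24.0):.1f} g/L")
print(f"max 5-HMF = {max_hmf(30.0):.1f} g/L")
```

Measured inhibitor levels sitting well below these ceilings indicate that most of the released sugars survived the hydrolysis rather than degrading.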
Thus, PPH and PCH without detoxification were considered for feasible low-cost PHB production.
Fermentation of PPH, PCH, and CAE by C. necator strain A-04
In this study, PPH produced by 1.5% (v/v) H2SO4 hydrolysis and PCH produced by 1.5% (v/v) H3PO4 hydrolysis were selected, because these showed the highest yields of PHB substrates. To evaluate the PHB production performance of C. necator strain A-04 using PPH, PCH, and CAE, the concentration of PHB substrates in the production medium was set to 20 g/L. No available nitrogen source was found in PPH, PCH, or CAE; therefore, ammonium sulfate was supplied as a nitrogen source, and the carbon-to-nitrogen (C/N) ratio was set to 200 for shake-flask fermentation [47]. The results are shown in Fig. 5a–d as time courses of the CDW, PHB production, and PHB content obtained with C. necator strain A-04. PCH (Fig. 5b) gave a CDW of 5.9 ± 0.2 g/L with a PHB content of 35.6 ± 0.1% (w/w) (QPHB = 0.025 g/L/h), whereas PPH (Fig. 5a) resulted in 5.3 ± 0.1 g/L CDW with a PHB content of 12.7 ± 0.6% (w/w) (QPHB = 0.014 g/L/h), even though PPH contained lower amounts of inhibitors than PCH. These results demonstrated that C. necator strain A-04 tolerated the various inhibitors present in PCH, and thus, the removal of inhibitors from these mixtures is not necessary prior to cultivation of C. necator strain A-04.
Time course of cell dry weight (g/L), residual biomass (g/L), PHB substrate (g/L), produced PHB (g/L), and PHB content [% (w/w)] during bacterial growth on a pineapple peel hydrolysate with a C/N of 200 in production medium, b pineapple core hydrolysate with a C/N of 200 in production medium, c crude aqueous extract with a C/N of 200 in production medium, and d crude aqueous extract without supplementary nitrogen or production medium. The carbon source concentration was 20 g/L in all cases.
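The quantities quoted for each hydrolysate (specific growth rate, QPHB, PHB content) follow from standard batch-culture definitions. A sketch using two hypothetical sampling points, with QPHB taken here as final PHB over elapsed culture time; the numbers are illustrative, not the authors' raw data:

```python
import math

# Batch kinetic parameters from two sampling points of a time course.
# X = cell dry weight (g/L), P = PHB (g/L), S = PHB substrate (g/L); t in h.

def batch_kinetics(t0, X0, P0, S0, t1, X1, P1, S1):
    dt = t1 - t0
    mu = math.log(X1 / X0) / dt     # specific growth rate, 1/h
    q_phb = P1 / t1                 # volumetric productivity, g/(L h)
    y_ps = (P1 - P0) / (S0 - S1)    # yield coefficient, g PHB / g substrate
    content = 100.0 * P1 / X1       # PHB content, % (w/w)
    return mu, q_phb, y_ps, content

# Hypothetical sampling points at t = 12 h and t = 72 h.
mu, q_phb, y_ps, content = batch_kinetics(12, 1.2, 0.1, 19.0,
                                          72, 5.9, 2.1, 14.5)
print(f"mu = {mu:.3f} 1/h, Q_PHB = {q_phb:.3f} g/(L h), "
      f"Y_P/S = {y_ps:.2f} g/g, PHB content = {content:.1f}% (w/w)")
```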
The error bars represent the standard deviations (n = 3)
In addition to PCH and PPH, CAE obtained from pineapple peel and core extracts was also investigated as a carbon source for the growth and PHB production of C. necator strain A-04. First, CAE was supplied in the production medium with a C/N ratio of 200 (Fig. 5c). The CDW was enhanced and reached 11.2 ± 0.1 g/L, with a PHB content of 48.7 ± 0.2% (w/w) and QPHB = 0.124 g/L/h. This result implied that CAE is more favorable for PHB production by C. necator strain A-04 than PCH and PPH. Subsequently, CAE was used as a carbon source alone, without any additional nitrogen or medium (Fig. 5d). Under these conditions, the CDW reached 13.6 ± 0.2 g/L, with the highest PHB content of 60.0% (w/w) and QPHB = 0.146 g/L/h. The PHB substrates (glucose and fructose) in CAE with a C/N ratio of 200 in the production medium were consumed more rapidly than those in CAE alone (data not shown). The wild-type C. necator strain A-04 efficiently synthesized PHB from PPH [5.3 ± 0.1 g/L CDW with 12.7 ± 0.6% (w/w) PHB], PCH [6.1 ± 0.1 g/L with 35.6 ± 0.1% (w/w) PHB], CAE with N and C/N 200 [11.2 ± 0.1 g/L CDW with 48.7 ± 0.2% (w/w) PHB], and CAE alone [13.6 ± 0.2 g/L with 60 ± 0.4% (w/w) PHB] (Table 4), resulting in high CDW and PHB contents comparable to those obtained with fructose [6.8 g/L CDW with 78% (w/w) PHB] [47]. The obtained results were also comparable to those obtained with Paracoccus sp. LL1 grown on corn stover hydrolysate (using the enzyme cellulase) containing 20 g/L sugar, which resulted in 7.18 ± 0.1 g/L CDW with 68.9 ± 1.6% (w/w) PHB [4].
Kinetic studies of growth, sugar consumption, and PHB production by C.
necator strain A-04 using PPH, PCH, and CAE
Table 3 summarizes the kinetics of cell growth, the specific PHB substrate consumption rate, the specific PHB production rate, the yield coefficient of residual cell mass produced from the consumed PHB substrate, the yield coefficient of PHB produced from the consumed PHB substrate, and the productivity. The results revealed that CAE without nitrogen and medium supplementation gave the highest \(Y_{{{P \mathord{\left/ {\vphantom {P S}} \right. \kern-0pt} S}}}\), 0.45 g PHB/g PHB substrate, with a QPHB value of 0.160 g/(L h). PPH produced the highest specific growth rate, 0.010 1/h, whereas CAE without nitrogen and medium supplementation produced the lowest, 0.001 1/h; this effect might be associated with the lack of nutrient elements in unsupplemented CAE. PPH also gave the highest \(Y_{{{X \mathord{\left/ {\vphantom {X S}} \right. \kern-0pt} S}}}\), 0.43 g CDW/g-SPHB, because PPH contained only trace amounts of inhibitors. In addition, C. necator strain A-04 consumed PHB substrates slowly (0.015 g-SPHB/g-CDW h) when grown on CAE without nitrogen and medium supplementation, whereas the highest specific consumption rate (0.042 g-SPHB/g-CDW h) was obtained with CAE with a C/N of 200 in production medium, followed by PCH, with a value of 0.032 g-SPHB/g-CDW h, because PCH contained more fructose than PPH.
Table 3 Kinetics of cell growth, sugar consumption, and PHB production by C. necator strain A-04 using PPH, PCH, CAE, and CAE without N and medium. [Table listing, for each substrate (PPH, PCH, CAE with C/N 200, and CAE without N and medium), the PHB substrate concentration (g/L), maximum PHB concentration (g/L), maximum cell dry weight (g/L), maximum PHB content (% wt), specific growth rate (1/h), specific consumption rate (g-SPHB/g-CDW/h), specific production rate (g-PHB/g-CDW/h), \(Y_{{{X \mathord{\left/ {\vphantom {X S}} \right. \kern-0pt} S}}}\) (g-CDW/g-SPHB), \(Y_{{{P \mathord{\left/ {\vphantom {P S}} \right. \kern-0pt} S}}}\) (g-PHB/g-SPHB), and productivity (g/(L h)); the numerical entries were not recovered from the source.] PPH pineapple peel hydrolysate, PCH pineapple core hydrolysate, CAE crude aqueous extract
Lignocellulosic biomass has mainly been considered a potential substrate for low-cost bioethanol production, although some researchers have also studied its potential for PHA production. Selected literature reports describing similar hydrolysis methods were compared with the results of this study, and the data are shown in Table 4. The \(Y_{{{P \mathord{\left/ {\vphantom {P S}} \right. \kern-0pt} S}}}\) of 0.17 g PHB/g PHB substrate obtained with PCH was slightly lower than the reported value of 0.24 g/g, which was obtained with C. necator MTCC-1472 grown on water hyacinth hydrolyzed with H2SO4 and cellulase and detoxified with activated charcoal to remove inhibitors. However, our results demonstrate the ability of C. necator strain A-04 to grow on PCH without inhibitor removal. C. necator strain A-04 showed superior tolerance to LA; in fact, it tolerated up to 14.7 g/L LA, a higher concentration than those previously reported for seven PHA-producing bacteria, namely, Azohydromonas lata ATCC 29714, Bacillus cereus ATCC 14579, Bacillus megaterium ATCC 14581, Burkholderia cepacia ATCC 17759, Pseudomonas oleovorans ATCC 29347, Pseudomonas pseudoflava ATCC 33668, and Ralstonia eutropha ATCC 17699 [48]. Furthermore, C. necator strain A-04 could utilize the inhibitors as carbon sources, because the inhibitor concentrations decreased over time (data not shown).
Table 4 Comparison of PHB production from lignocellulosic hydrolysates. [Table listing the carbon source, Tmax (h), DCW (g/L), PHB (g/L), PHB (% wt), \(Q_{{\text{PHB}}}\) (g/(L h)), and \(Y_{{{P \mathord{\left/ {\vphantom {P S}} \right. \kern-0pt} S}}}\) (g/g) for C. necator strain A-04 (this study: CAE without nitrogen, CAE with C/N 200, and the H2SO4 and H3PO4 hydrolysates, all without inhibitor removal), C. necator MTCC-1472 (H2SO4 + cellulase; Radhika and Murugesan [10]), C. necator (Yu and Stahl [6]), Sphingopyxis macrogoltabida LMG 17324 and Brevundimonas vesicularis LMG P-23615 (Silva et al. [5]), and Bacillus mycoides (Narayanan et al. [12]); the numerical entries were not recovered from the source.] PPH pineapple peel hydrolysate, PCH pineapple core hydrolysate, CAE crude aqueous extract, ni not indicated, PHBV poly(3-hydroxybutyrate-co-3-hydroxyvalerate)
The maximum \(Y_{{{P \mathord{\left/ {\vphantom {P S}} \right. \kern-0pt} S}}}\) (0.45 g PHB/g PHB substrate) obtained in this study was higher than the \(Y_{{{P \mathord{\left/ {\vphantom {P S}} \right. \kern-0pt} S}}}\) of 0.39 g PHB/g PHB substrate reported by Silva et al. for the utilization of sugarcane bagasse hydrolysate by Burkholderia cepacia IPT 101, in which CaO and charcoal were used to remove inhibitors [5]. All the above results suggest that pineapple waste products might be a good renewable resource for producing biodegradable polymers and that C. necator strain A-04 may be a suitable wild-type bacterial strain for low-cost PHB production from pineapple waste. Although the development of lignocellulosic biomass conversion technologies for PHA production has been a research focus over the last decades, advances in many research areas, such as improved PHA-producing strains, cultivation systems, harvesting technologies, and biocomposite technologies, are still required before competitive green technologies for PHA production can displace fossil-derived feedstocks. Our current work aims to establish a technoeconomic platform for PHA production using both wild-type and recombinant strains. The obtained kinetic parameters will be applied to production in a 10-L bioreactor. Furthermore, microcrystalline cellulose has been extracted from pineapple leaves, and its chemical structure has been modified. Finally, biocomposite films of PHB produced from C.
necator strain A-04 using pineapple waste hydrolysate and pineapple leaf microcrystalline cellulose will be prepared, and their biodegradability and thermal and mechanical properties will be tested. The outcomes will provide parameters that can be used to guide future research and development.
The feasibility of using pineapple waste residue from the canned pineapple industry as a lignocellulosic feedstock for PHB production was evaluated. The highest \(Y_{{{{S_{{\text{PHB}}} } \mathord{\left/ {\vphantom {{S_{{\text{PHB}}} } {\text{PW}}}} \right. \kern-0pt} {\text{PW}}}}}\) value obtained was 0.57 g/g, and the highest \(Y_{{{{\text{FS}} \mathord{\left/ {\vphantom {{\text{FS}} {\text{PW}}}} \right. \kern-0pt} {\text{PW}}}}}\) value was 0.81 g/g. Detoxification was not required prior to the conversion of cellulose hydrolysate to PHB by C. necator strain A-04. This bacterial strain showed the ability to tolerate up to 14.7 g/L LA and 2.1 g/L 5-HMF. In addition to acid hydrolysis, CAE was shown to be a potential carbon source and medium, with which the CDW, PHB content, and \(Y_{{{P \mathord{\left/ {\vphantom {P S}} \right. \kern-0pt} S}}}\) reached 13.6 ± 0.2 g/L, 60 ± 0.4% (w/w) PHB, and 0.45 g PHB/g PHB substrate, respectively. This simple chemical process, which requires neither an alkaline pretreatment nor detoxification steps prior to the PHB production step, could enable the use of crude biomass as the sole carbon source in a scalable biorefinery.
PHB-producing strain
Cupriavidus necator strain A-04, a Gram-negative PHB-producing strain isolated from soil in Thailand, was used in this study [47, 49]. The 16S rRNA gene sequence of C. necator strain A-04 has been studied and submitted to GenBank under Accession Number EF988626 [47]. The bacterial strain was maintained on a nutrient agar slant at 4 °C by subculturing at monthly intervals. Stock cultures were maintained at − 80 °C in a 15% (v/v) glycerol solution.
Carbon sources
The agro-industrial residues used in this study were pineapple waste products from the canned pineapple industry, i.e., pineapple peel, core, and CAE. These materials were obtained from Siam Food Products Public Company Limited (a Banbung factory at Tambol Nong-Irun, Amphoe Banbung, Chonburi, Thailand). The pineapple peel and core were dried separately in a hot-air oven (UN55, Memmert GmbH + Co. KG, Schwabach, Germany) at 65 °C for 24 h, milled using a laboratory blender (45,000 rpm, 1800 W, Healthy mix GP 3.5, Taiwan), and then sieved to fractionate the particles into three sizes: more than 0.841 mm (+ 20 mesh), 0.841–0.420 mm (− 20/+ 40 mesh), and less than 0.420 mm (− 40 mesh). The chemical compositions of the cellulose-containing materials were determined according to the Technical Association of the Pulp and Paper Industry (TAPPI) standard methods for the following parameters: benzene extractives (TAPPI T 204 cm-07); α-cellulose, β-cellulose, and γ-cellulose (TAPPI T203 om-09); holocellulose (TAPPI T9 m-54); lignin (TAPPI T222 om-15); and ash (TAPPI T-211). The composition and concentration of the sugars in CAE were analyzed using a high-performance liquid chromatograph as described in the section detailing the analytical methods.
Pretreatment and hydrolysis of pineapple waste products from the canned pineapple industry
Ten grams of samples representing each of the three particle sizes (+ 20 mesh, − 20/+ 40 mesh, and − 40 mesh) were pretreated separately using NaOH and Ca(OH)2 solutions (0, 0.25, 0.5, 1, and 2% w/v), followed by autoclaving at 121 °C for 15 min. A water-only control was run under identical conditions. The pretreated samples were filtered through Whatman filter paper (No. 1, pore size of 11 μm, Sigma-Aldrich Corp., St. Louis, MO, USA), washed to neutrality with tap water, and dried overnight at 80 °C. The neutralized, pretreated samples were added to 100 mL of solutions of H2SO4 and H3PO4 (1, 1.5, 2,
2.5, and 3% v/v), followed by autoclaving at 121 °C for 15 min [5, 6, 11, 50]. Subsequently, the resulting hydrolyzed samples were filtered, and the filtrate was collected and refiltered through Whatman filter paper. Finally, the pH of the filtrate was adjusted to 7.0 using 2 M NaOH to obtain pineapple peel hydrolysate (PPH) or pineapple core hydrolysate (PCH).
Culture conditions for PHB production from pineapple peel hydrolysate, pineapple core hydrolysate, or crude aqueous extract in production medium
Inocula were prepared in 500-mL Erlenmeyer flasks with 100 mL of preculture medium consisting of 2 g/L yeast extract, 10 g/L polypeptone, and 1 g/L MgSO4·7H2O and then grown on a rotary incubator shaker (Innova 4300, New Brunswick Scientific Co., Inc., Edison, NJ, USA) at 30 °C and 200 rpm for 24 h. The cells were harvested by centrifugation and washed with 0.85% (w/v) sodium chloride solution to remove any residual nitrogen source. For the synthesis of PHB, the cells were inoculated into a production medium containing mineral salts, consisting of 4.5 g/L Na2HPO4, 1.5 g/L KH2PO4, 0.2 g/L MgSO4·7H2O, 0.05 g/L Fe(III)(NH4) citrate (17% Fe), 0.02 g/L CaCl2·2H2O, and 1 mL of trace element solution [0.3 g/L H3BO3, 0.2 g/L CoCl2·6H2O, 0.01 g/L ZnSO4·7H2O, 0.04 g/L MnCl2·4H2O, 0.03 g/L (NH4)6Mo7O24·4H2O, 0.02 g/L NiCl2·6H2O, and 0.01 g/L CuSO4·5H2O]. The cultivation was performed at 30 °C in 500-mL Erlenmeyer flasks containing 100 mL of production medium with shaking at 200 rpm for 96 h. Culture samples were collected at 12-h intervals. The total fermentable sugar concentrations in PPH, PCH, or CAE were adjusted to 20 g/L, and the carbon-to-nitrogen ratio was set to 200 [51].
Culture conditions for PHB production from CAE without production medium
Cultivation was performed at 30 °C in 500-mL Erlenmeyer flasks containing 100 mL of CAE without production medium or an adjusted C/N ratio; incubation was conducted on a rotary incubator shaker at 200 rpm for 96 h.
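Setting the molar C/N ratio to 200 fixes the ammonium sulfate dose for a given sugar load. A sketch of that calculation, assuming hexose (glucose/fructose) carbon only and (NH4)2SO4 as the sole nitrogen source; the authors do not report the exact dose, so the result is illustrative:

```python
# Ammonium sulfate needed for a molar C/N of 200 at a given hexose loading.
MW_HEXOSE = 180.16   # glucose or fructose (C6H12O6), 6 C atoms per molecule
MW_AMS = 132.14      # ammonium sulfate (NH4)2SO4, 2 N atoms per molecule

def ammonium_sulfate_g_per_l(substrate_g_per_l: float, c_to_n: float) -> float:
    mol_c = substrate_g_per_l / MW_HEXOSE * 6   # mol carbon per litre
    mol_n = mol_c / c_to_n                      # mol nitrogen required
    return mol_n / 2 * MW_AMS                   # g/L of (NH4)2SO4

# 20 g/L PHB substrate at C/N = 200, as used in the shake-flask cultures.
print(f"{ammonium_sulfate_g_per_l(20.0, 200):.3f} g/L (NH4)2SO4")
```

The high C/N ratio deliberately starves the culture of nitrogen, which is the standard trigger for PHB accumulation in C. necator.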
Culture samples were taken at 12-h intervals. The sugar contents of CAE, as analyzed by HPLC using the protocol described in the section detailing the analytical methods used in this study, were 24.48 g/L glucose, 20.14 g/L sucrose, 2.73 g/L fructose, and 0.30 g/L galactose. According to previous reports, these sugars can be used as carbon sources for growth and PHB production by C. necator strain A-04 [47].
Cell growth was monitored by the CDW, which was determined by filtering 5 mL of the culture broth through pre-weighed cellulose nitrate membrane filters (pore size = 0.22 μm; Sartorius, Goettingen, Germany). The filters were dried at 80 °C for 2 days and stored in vacuum desiccators. The residual biomass (net biomass) was calculated by subtracting the amount of PHB from the total biomass. The PHB in dried cells was methyl-esterified using a mixture of chloroform and 3% methanol-sulfuric acid (1:1 v/v) [52]. The resulting monomeric methyl esters were quantified by a gas chromatograph (Model CP3800, Varian Inc., Walnut Creek, CA, USA) using a Carbowax-PEG capillary column (0.25-μm df, 0.25-mm ID, 60-m length, Varian Inc.). The internal standard was benzoic acid, and the external standard was PHB (Sigma-Aldrich Corp.). The composition and concentration of the sugars (glucose, fructose, galactose, sucrose, cellobiose, and xylose) in PPH, PCH, and CAE were analyzed using a high-performance liquid chromatograph (Model 626, Alltech Inc., Nicholasville, KY, USA) equipped with an evaporative light-scattering detector (ELSD) (Model 2000ES, Alltech Inc., Nicholasville, KY, USA) and a Rezex RPM monosaccharide column (7.8-mm ID × 300-mm length, Phenomenex Inc., Torrance, CA, USA). Water was used as the elution solvent at a flow rate of 0.6 mL/min. The operating temperature was maintained at 60 °C.
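The residual-biomass definition above is a two-line calculation; the sketch below applies it to the reported CAE endpoint (13.6 g/L CDW at 60% w/w PHB, which implies about 8.16 g/L PHB):

```python
# PHB content and residual biomass from a filtration CDW and GC-measured PHB.
def phb_accounting(cdw_g_per_l: float, phb_g_per_l: float):
    """Return (PHB content in % w/w, residual biomass in g/L)."""
    if phb_g_per_l > cdw_g_per_l:
        raise ValueError("PHB cannot exceed total cell dry weight")
    content = 100.0 * phb_g_per_l / cdw_g_per_l   # % (w/w) of dry cells
    residual = cdw_g_per_l - phb_g_per_l          # non-PHB (catalytic) biomass
    return content, residual

content, residual = phb_accounting(cdw_g_per_l=13.6, phb_g_per_l=8.16)
print(f"PHB content = {content:.1f}% (w/w), residual biomass = {residual:.2f} g/L")
```

Plotting residual biomass rather than total CDW is what makes the growth and accumulation phases separable in the Fig. 5 time courses.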
The parameters used for ELSD were as follows: the temperature of the drift tube was 105 °C, nitrogen was used as the carrier gas at a flow rate of 2.6 L/min, and the impactor was set in the off position. The composition and concentration of by-products (LA, 5-HMF, and FAL) in PPH and PCH were analyzed using a high-performance liquid chromatograph (model Prostar, Varian Inc., Walnut Creek, CA, USA) equipped with an ultraviolet (UV) detector (wavelength = 285 nm, Prostar 335, Varian Inc., Walnut Creek, CA, USA) and a ChromSpher C18 column (4.6-mm ID × 250-mm length, Varian Inc., Walnut Creek, CA, USA). The mobile phase was methanol:acetic acid:water (12:1:88, v/v) at a flow rate of 1.0 mL/min. The operating temperature was maintained at 25 °C. The total sugar content was determined through a phenol–sulfuric acid assay [53]. The total reducing sugar concentration was determined using a 3,5-dinitrosalicylic acid (DNS) assay [54], and the \({\text{NH}}_{4}^{ + }\) concentration in the culture medium was determined through a colorimetric assay [55]. All the data presented in this manuscript are representative of the results of three independent experiments and are expressed as the mean values ± standard deviations (SDs). Analysis of variance (one-way ANOVA) followed by Duncan's test for testing differences among means was conducted using SPSS version 22 (IBM Corp., Armonk, NY, USA). Differences were considered significant at P < 0.05. C/N: molar ratio of carbon to nitrogen (−); \(Y_{P/S}\): yield coefficient of PHB produced from consumed PHB substrate (g PHB/g PHB substrate); \(Y_{X/S}\): yield coefficient of the residual cell mass produced from the consumed PHB substrate (g RB/g PHB substrate); Tmax: time when maximal PHB produced was obtained (h); QPHB: PHB productivity (g PHB/L h). 
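The screening step of the statistical analysis (one-way ANOVA at P < 0.05) can be sketched in plain Python. The triplicate values below are hypothetical, and Duncan's post hoc test, as run in SPSS, is not reproduced; only the F-test against the tabulated critical value F(2, 6; 0.05) = 5.14 is shown:

```python
# One-way ANOVA by hand for triplicate measurements of three treatments.
groups = [
    [43.9, 43.5, 44.3],   # treatment A, g/L (hypothetical)
    [36.9, 36.5, 37.3],   # treatment B
    [24.5, 24.4, 24.6],   # treatment C
]

def one_way_anova_f(groups):
    """F statistic = between-group mean square / within-group mean square."""
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

f_stat = one_way_anova_f(groups)
print(f"F(2, 6) = {f_stat:.1f}")
# F_crit(2, 6; alpha = 0.05) = 5.14 from standard F tables.
print("significant" if f_stat > 5.14 else "not significant")
```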
PHAs: polyhydroxyalkanoates; PHB: polyhydroxybutyrate; CDW: cell dry weight; RB: residual biomass; LA: levulinic acid; 5-HMF: 5-hydroxymethyl furfural; FAL: furfural; DSC: differential scanning calorimetry; GPC: gel permeation chromatography; PPH: pineapple peel hydrolysate; PCH: pineapple core hydrolysate; CAE: crude aqueous extract.
VS performed the experiments. SCN provided guidance and suggestions for the experimental design, discussed the results, and wrote and edited the manuscript. Both authors read and approved the final manuscript.
This research was partially supported by the 90th Anniversary of Chulalongkorn University Fund (Ratchadapiseksomphot Endowment Fund). The authors would like to thank Siam Food Products Public Company Limited for providing the pineapple wastes used in this study.
The authors agree to the publication of this manuscript in the journal.
Department of Microbiology, Faculty of Science, Chulalongkorn University, Phayathai Road, Patumwan, Bangkok, 10330, Thailand
Binder JB, Raines RT. Fermentable sugars by chemical hydrolysis of biomass. PNAS. 2010;107:4516–21.
Lambert S, Wagner M. Environmental performance of bio-based and biodegradable plastics: the road ahead. Chem Soc Rev. 2017;46:6855–71.
Kunaver M, Anžlovar A, Žagar E. The fast and effective isolation of nanocellulose from selected cellulosic feedstocks. Carbohydr Polym. 2016;148:251–8.
Sawant SS, Salunke BK, Kim BS. Degradation of corn stover by fungal cellulase cocktail for production of polyhydroxyalkanoates by moderate halophile Paracoccus sp. LL1. Bioresour Technol. 2015;194:247–55.
Silva L, Taciro M, Ramos MM, Carter J, Pradella J, Gomez J. Poly-3-hydroxybutyrate (P3HB) production by bacteria from xylose, glucose and sugarcane bagasse hydrolysate. J Ind Microbiol Biotechnol. 2004;31:245–54.
Yu J, Stahl H.
Microbial utilization and biopolyester synthesis of bagasse hydrolysates. Bioresour Technol. 2008;99:8042–8.
Nduko JM, Suzuki W, Ki Matsumoto, Kobayashi H, Ooi T, Fukuoka A, et al. Polyhydroxyalkanoates production from cellulose hydrolysate in Escherichia coli LS5218 with superior resistance to 5-hydroxymethylfurfural. J Biosci Bioeng. 2012;113:70–2.
Oh YH, Lee SH, Jang Y-A, Choi JW, Hong KS, Yu JH, et al. Development of rice bran treatment process and its use for the synthesis of polyhydroxyalkanoates from rice bran hydrolysate solution. Bioresour Technol. 2015;181:283–90.
Zhang Y, Sun W, Wang H, Geng A. Polyhydroxybutyrate production from oil palm empty fruit bunch using Bacillus megaterium R11. Bioresour Technol. 2013;147:307–14.
Radhika D, Murugesan A. Bioproduction, statistical optimization and characterization of microbial plastic (poly 3-hydroxy butyrate) employing various hydrolysates of water hyacinth (Eichhornia crassipes) as sole carbon source. Bioresour Technol. 2012;121:83–92.
Silva JA, Tobella LM, Becerra J, Godoy F, Martínez MA. Biosynthesis of poly-β-hydroxyalkanoate by Brevundimonas vesicularis LMG P-23615 and Sphingopyxis macrogoltabida LMG 17324 using acid-hydrolyzed sawdust as carbon source. J Biosci Bioeng. 2007;103:542–6.
Narayanan A, Kumar VS, Ramana KV. Production and characterization of poly (3-hydroxybutyrate-co-3-hydroxyvalerate) from Bacillus mycoides DFC1 using rice husk hydrolyzate. Waste Biomass Valori. 2014;5:109–18.
Dietrich K, Dumont M-J, Schwinghamer T, Orsat V, Del Rio LF. Model study to assess softwood hemicellulose hydrolysates as the carbon source for PHB production in Paraburkholderia sacchari IPT 101. Biomacromol. 2017;19:188–200.
Sangudom T. Research and development on pineapple.
Department of Agriculture, Ministry of Agriculture and Cooperatives. 2015. http://www.doa.go.th/research/attachment.php?aid=2251. Accessed 5 Feb 2018.
Win HE. Analysis of tropical fruits in Thailand. 2017. http://ap.fftc.agnet.org/files/ap_policy/818/818_1.pdf2017. Accessed 2 Feb 2018.
Industrial sector codes of practice for pollution prevention (cleaner technology): Department of Industrial Works, Ministry of Industry. http://php.diw.go.th/ctu/files/pdf/codeofpractice_cannedfood_th.pdf. Accessed 5 Feb 2018.
Jirapornvaree I, Suppadit T, Popan A. Use of pineapple waste for production of decomposable pots. Int J Recycl Org Waste Agric. 2017;6:345–50.
Namsree P, Suvajittanont W, Puttanlek C, Uttapap D, Rungsardthong V. Anaerobic digestion of pineapple pulp and peel in a plug-flow reactor. J Environ Manage. 2012;110:40–7.
Leeben Y, Soni P, Shivakoti GP. Indicators of sustainable development for assessing performance of pineapple canneries: conceptual framework and application. J Food Agric Environ. 2013;11:100–9.
Correia RT, McCue P, Magalhães MM, Macêdo GR, Shetty K. Production of phenolic antioxidants by the solid-state bioconversion of pineapple waste mixed with soy flour using Rhizopus oligosporus. Process Biochem. 2004;39:2167–72.
Suwannasing W, Imai T, Kaewkannetra P. Cost-effective defined medium for the production of polyhydroxyalkanoates using agricultural raw materials. Bioresour Technol. 2015;194:67–74.
Dorta E, Sogi DS. Value added processing and utilization of pineapple by products. In: Lobo MG, Paull RE, editors. Handbook of pineapple technology: production, postharvest science, processing and nutrition. Hoboken: Wiley; 2016. p. 196–220.
Upadhyay A, Lama JP, Tawata S. Utilization of pineapple waste: a review. J Food Sci Technol Nepal.
2013;6:10–8.
Chanchareonsook J, Vacharotayan S, Suwannarat C, Thongpae S. Utilization of organic waste materials for calcareous soil improvement. Food and Agriculture Organization of the United Nations. 1992. http://agris.fao.org/agris-search/search.do?recordID=TH2000002414. Accessed 29 June 2018.
Ch'ng HY, Ahmed OH, Kassim S, Ab Majid NM. Co-composting of pineapple leaves and chicken manure slurry. Int J Recycl Org Waste Agric. 2013;2:23.
Jetana T, Suthikrai W, Usawang S, Vongpipatana C, Sophon S, Liang J. The effects of concentrate added to pineapple (Ananas comosus Linn. Mer.) waste silage in differing ratios to form complete diets, on digestion, excretion of urinary purine derivatives and blood metabolites in growing, male, Thai swamp buffaloes. Trop Anim Health Prod. 2009;41:449–59.
Sruamsiri S. Agricultural wastes as dairy feed in Chiang Mai. Anim Sci J. 2007;78:335–41.
Kengkhetkit N, Amornsakchai T. Utilisation of pineapple leaf waste for plastic reinforcement: 1. A novel extraction method for short pineapple leaf fiber. Ind Crops Prod. 2012;40:55–61.
Kengkhetkit N, Amornsakchai T. A new approach to "Greening" plastic composites using pineapple leaf waste for performance and cost effectiveness. Mater Design. 2014;55:292–9.
Fielding M, Aung MT. Bioeconomy in Thailand: a case study. Stockholm: Stockholm Environment Institute; 2018.
Bardiya N, Somayaji D, Khanna S. Biomethanation of banana peel and pineapple waste. Bioresour Technol. 1996;58:73–6.
Ban-Koffi L, Han Y. Alcohol production from pineapple waste. World J Microbiol Biotechnol. 1990;6:281–4.
Tanaka K, Hilary ZD, Ishizaki A. Investigation of the utility of pineapple juice and pineapple waste material as low-cost substrate for ethanol fermentation by Zymomonas mobilis.
J Biosci Bioeng. 1999;87:642–6.
Jawaid M, Tahir PM, Saba N. Lignocellulosic fibre and biomass-based composite materials: processing, properties and applications. Sawston: Woodhead Publishing; 2017.
Pandey A, Soccol CR, Nigam P, Soccol VT. Biotechnological potential of agro-industrial residues. I: sugarcane bagasse. Bioresour Technol. 2000;74:69–80.
McMillan JD. Pretreatment of lignocellulosic biomass. Washington, D.C.: ACS Publications; 1994.
Jeetah P, Rossaye J, Mohee R. Effectiveness of alkaline pretreatment on fruit wastes for bioethanol production. University Mauritius Res J. 2016;22:134–53.
Yang BY, Montgomery R. Alkaline degradation of invert sugar from molasses. Bioresour Technol. 2007;98:3084–9.
Cheng Y-S, Zheng Y, Yu CW, Dooley TM, Jenkins BM, VanderGheynst JS. Evaluation of high solids alkaline pretreatment of rice straw. Appl Biochem Biotechnol. 2010;162:1768–84.
Peciulyte A, Karlström K, Larsson PT, Olsson L. Impact of the supramolecular structure of cellulose on the efficiency of enzymatic hydrolysis. Biotechnol Biofuels. 2015;8:56.
Adani F, Papa G, Schievano A, Cardinale G, D'Imporzano G, Tambone F. Nanoscale structure of the cell wall protecting cellulose from enzyme attack. Environ Sci Technol. 2010;45:1107–13.
Chen X, Kuhn E, Wang W, Park S, Flanegan K, Trass O, et al. Comparison of different mechanical refining technologies on the enzymatic digestibility of low severity acid pretreated corn stover. Bioresour Technol. 2013;147:401–8.
Yeh A-I, Huang Y-C, Chen SH. Effect of particle size on the rate of enzymatic hydrolysis of cellulose. Carbohydr Polym. 2010;79:192–9.
Harun S, Balan V, Takriff MS, Hassan O, Jahim J, Dale BE.
Performance of AFEX™ pretreated rice straw as source of fermentable sugars: the influence of particle size. Biotechnol Biofuels. 2013;6:40.View ArticlePubMedPubMed CentralGoogle Scholar Brandt-Talbot A, Gschwend FJ, Fennell PS, Lammens TM, Tan B, Weale J, et al. An economically viable ionic liquid for the fractionation of lignocellulosic biomass. Green Chem. 2017;19:3078–102.View ArticleGoogle Scholar Du B, Sharma LN, Becker C, Chen SF, Mowery RA, van Walsum GP, et al. Effect of varying feedstock–pretreatment chemistry combinations on the formation and accumulation of potentially inhibitory degradation products in biomass hydrolysates. Biotechnol Bioeng. 2010;107:430–40.View ArticlePubMedGoogle Scholar Chanprateep S, Katakura Y, Visetkoop S, Shimizu H, Kulpreecha S, Shioya S. Characterization of new isolated Ralstonia eutropha strain A-04 and kinetic study of biodegradable copolyester poly (3-hydroxybutyrate-co-4-hydroxybutyrate) production. J Ind Microbiol Biotechnol. 2008;35:1205–15.View ArticlePubMedGoogle Scholar Dietrich D, Illman B, Crooks C. Differential sensitivity of polyhydroxyalkanoate producing bacteria to fermentation inhibitors and comparison of polyhydroxybutyrate production from Burkholderia cepacia and Pseudomonas pseudoflava. BMC Res Notes. 2013;6:219.View ArticlePubMedPubMed CentralGoogle Scholar Chanprateep S, Kulpreecha S. Production and characterization of biodegradable terpolymer poly (3-hydroxybutyrate-co-3-hydroxyvalerate-co-4-hydroxybutyrate) by Alcaligenes sp. A-04. J Biosci Bioeng. 2006;101:51–6.View ArticlePubMedGoogle Scholar Agbor VB, Cicek N, Sparling R, Berlin A, Levin DB. Biomass pretreatment: fundamentals toward application. Biotechnol Adv. 2011;29:675–85.View ArticleGoogle Scholar Chanprateep S, Buasri K, Muangwong A, Utiswannakul P. Biosynthesis and biocompatibility of biodegradable poly (3-hydroxybutyrate-co-4-hydroxybutyrate). Polym Degrad Stab. 2010;95:2003–12.View ArticleGoogle Scholar Braunegg G, Sonnleitner B, Lafferty R. 
A rapid gas chromatographic method for the determination of poly-β-hydroxybutyric acid in microbial biomass. Appl Microbiol Biotechnol. 1978;6:29–37.View ArticleGoogle Scholar DuBois M, Gilles KA, Hamilton JK, Rebers PT, Smith F. Colorimetric method for determination of sugars and related substances. Anal Chem. 1956;28:350–6.View ArticleGoogle Scholar Miller GL. Use of dinitrosalicylic acid reagent for determination of reducing sugar. Anal Chem. 1959;31:426–8.View ArticleGoogle Scholar Kemper A. Determination of sub-micro quantities of ammonium and nitrate in soils with phenol, sodium nitropusside and hypochloride. Geoderma. 1974;12:201–6.View ArticleGoogle Scholar
CommonCrawl
Feynman's explanation of virtual work given in his book Feynman's Lectures on Physics

In his book, Chapter 4, Conservation of Energy, in the discussion of gravitational potential energy, the passage goes...

Take now the somewhat more complicated example shown in Fig. 4-6. A rod or bar, 8 feet long, is supported at one end. In the middle of the bar is a weight of 60 pounds, and at a distance of two feet from the support there is a weight of 100 pounds. How hard do we have to lift the end of the bar in order to keep it balanced, disregarding the weight of the bar? Suppose we put a pulley at one end and hang a weight on the pulley. How big would the weight W have to be in order for it to balance? We imagine that the weight falls any arbitrary distance — to make it easy for ourselves suppose it goes down 4 inches — how high would the two load weights rise? The center rises 2 inches, and the point a quarter of the way from the fixed end lifts 1 inch. Therefore, the principle that the sum of the heights times the weights does not change tells us that the weight W times 4 inches down, plus 60 pounds times 2 inches up, plus 100 pounds times 1 inch has to add up to nothing: ...

But how do we know that the end point of the rod that is connected to the rope goes up 4 inches when the weight $W$ goes down 4 inches? My argument is that if the weight $W$ goes 4 inches downwards, the rod is lifted a little less than 4 inches, because the point where the rod and the rope are connected moves along a circular path, and it is the length of that arc that equals the 4 inches descended by the weight $W$. Yet the concept of virtual work is held to be true. So the 60-pound weight does not rise a full 2 inches, and the 100-pound weight does not rise a full 1 inch. By triangle similarity, if the rod's end moved vertically upwards by 4 inches, the other weights would move exactly as Feynman states; but my argument is that the actual displacements are slightly less than the values given in the book.
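To quantify my concern, here is a quick numeric check (my own sketch, not from the book): the virtual-work bookkeeping gives the balancing weight $W$, and then I compare the true vertical rise of the rod's end, which swings along a circle of radius 8 feet, with the 4 inches of rope paid out (taken as the arc length).

```python
import math

# Virtual-work bookkeeping from the book:
# W * 4 in (down) balances 60 lb * 2 in (up) + 100 lb * 1 in (up).
W = (60 * 2 + 100 * 1) / 4
# W comes out to 55 pounds.

# Geometric correction: the rod's end moves on a circle of radius 96 in
# (the 8 ft rod). If the rope pays out 4 in, take the arc length as 4 in.
L = 96.0                  # rod length in inches
theta = 4.0 / L           # swept angle in radians
rise_end = L * math.sin(theta)        # true vertical rise of the end
rise_60 = (L / 2) * math.sin(theta)   # weight at the middle of the bar
rise_100 = (L / 4) * math.sin(theta)  # weight 2 ft from the support

print(W, rise_end, rise_60, rise_100)
```

The vertical rises come out to about 3.9988, 1.9994, and 0.9997 inches, so the discrepancy is real but only about 0.03% (second order in the angle), which is why it vanishes in the infinitesimal limit.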
So, if someone could help me resolve this, it'll be a very enriching experience.

homework-and-exercises newtonian-mechanics lagrangian-formalism energy-conservation potential-energy

Paul Razvan Berg
Jyotishraj Thoudam

$\begingroup$ Welcome to "the art of approximation" where $\sin x \approx \tan x \approx x$. :-) $\endgroup$ – CuriousOne
$\begingroup$ Another word for a mild cheat to complete the sentence, right? $\endgroup$ – Jyotishraj Thoudam
$\begingroup$ Very mild. It's really just an engineering problem... you could make some kind of cam mechanism that keeps the string straight and compensates. It wouldn't change the physics at all, and it would annoy the heck out of everyone. Look at it this way... you caught Feynman's tiny sleight of hand, which makes you the smartest person in the room. That counts for something, for quite a bit, actually. $\endgroup$
$\begingroup$ @CuriousOne would you care to explain the solution presented in this link physics.stackexchange.com/q/265664? I tried to use the argument given above but I don't know how it'll work. $\endgroup$

He made an approximation and didn't make it clear why (to avoid complicating things). He had to move the bar a small amount to compare the movement of the pulley weight to the other two weights. He took an "arbitrary" small distance of 4" to make the math easy. In reality the bar doesn't really move much at all; the weights are resisting movement in both directions. The distance is infinitesimally small, as sammy gerbil said.
In that case the Small Angle Approximation applies: $$\cos\theta \approx 1 - \frac{\theta^2}{2},$$ and in this situation we have an infinitesimally small angle, so this approximation definitely applies. It also makes $\theta$ essentially zero, so $$\theta \approx 0, \qquad \cos\theta \approx 1 - \frac{0}{2} = 1.$$ If cosine is 1, so is the ratio of the two sides: $$\cos\theta = \frac{\text{adjacent}}{\text{hypotenuse}}, \qquad 1 \approx \frac{\text{adjacent}}{\text{hypotenuse}}, \qquad \text{hypotenuse} \approx \text{adjacent},$$ which means the two lengths are essentially equal, i.e., the rod's end is essentially no farther from where it started than when it began. I'm sure there was a 50x more elegant way to show that; but it's what I came up with. JMac

Feynman is using definite small quantities (inches) in place of infinitesimals $\delta x$ etc. Probably he wanted to avoid non-essential mathematical formality, which would arise if he talked about 'infinitesimals.' This is in line with his casual, hand-waving persona. He is imagining that the rotation of the beam is extremely small, so that the change in the direction of the rope attaching the end of the beam to the pulley is negligible. He is imagining that the displacements are infinitesimally small, which means that they are practically nothing. He could express these displacements in nano-inches or femto-inches or something even smaller. However, instead he is measuring them in whole inches, because it is a convenient unit and it doesn't matter what units we are using - the units all cancel out in the end. It isn't the absolute value of the displacements which matters. (We do not need to know the value of work done in Joules.) It is only their ratios which are significant. sammy gerbil

The other answers are good; I just want to add that there's an audio version of the lectures selling on audible here (this one contains Chapter 4) that shows that Feynman did explicitly state that this was an approximation, but it got lost in the translation to the Feynman Lectures.
My crude transcription of the relevant section is as follows

...and we use a very small motion, because if I try to use 400 feet(s) here, it would get a little bit confuse(ing) by having gone around so many times *laugh* so we use a very tiny motion so it's easy to figure out that smaller the more easier it is to figure out. Actually, it goes in a curve and it isn't exactly 2 inches and so on, but you take an infinitesimal distance and that's called the principle of virtual work.

PS the audio chapter and the FLP chapter do not line up, they're all jumbled up, a conversion table can be found here

Henry Oh

$\begingroup$ Thank you Henry. $\endgroup$
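As a quick numerical check of the small-angle chain in JMac's answer (my own sketch, not part of the original thread), one can confirm that the error in $\cos\theta \approx 1 - \theta^2/2$ is fourth order, and that the adjacent/hypotenuse ratio is essentially 1 for a small angle:

```python
import math

theta = 1e-3  # a small angle in radians, standing in for "infinitesimal"

# cos(theta) ~ 1 - theta^2/2; the error is of order theta^4/24.
approx = 1 - theta**2 / 2
error = abs(math.cos(theta) - approx)

# adjacent / hypotenuse = cos(theta) -> 1 as theta -> 0.
ratio = math.cos(theta)

print(error, 1 - ratio)
```

For $\theta = 10^{-3}$ the expansion error is around $4 \times 10^{-14}$ and the ratio differs from 1 by only about $5 \times 10^{-7}$, so the hypotenuse and adjacent side are equal to within a vanishing correction.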
\begin{document} \title{Fractal Autoencoders for Feature Selection} \begin{abstract} Feature selection reduces the dimensionality of data by identifying a subset of the most informative features. In this paper, we propose an innovative framework for unsupervised feature selection, called fractal autoencoders (FAE). It trains a neural network to pinpoint informative features, globally exploring for representability and locally excavating for diversity. Architecturally, FAE extends autoencoders by adding a one-to-one scoring layer and a small sub-neural network for feature selection in an unsupervised fashion. With such a concise architecture, FAE achieves state-of-the-art performance; extensive experimental results on fourteen datasets, including very high-dimensional data, have demonstrated the superiority of FAE over existing contemporary methods for unsupervised feature selection. In particular, FAE exhibits substantial advantages on gene expression data exploration, reducing measurement cost by about $15$\% over the widely used L1000 landmark genes. Further, we show with an application that the FAE framework is easily extensible. \end{abstract} \section{Introduction} \noindent High-dimensional data is pervasive in almost every area of modern data science~\citep{Clarke,Blum}. Dealing with high-dimensional data is challenging due to the well-known phenomenon of the curse of dimensionality~\citep{Bellman}. In numerous applications, principal component analysis (PCA)~\citep{Pearson} and autoencoders (AE)~\citep{David,Ballard}, two traditional and simple approaches, are typically used for dimensionality reduction. For example, PCA is adopted to reduce the dimensions of gene expression data before the extraction of the samples' rhythmic structures~\citep{Ron}; AE are employed to process high-dimensional datasets prior to clustering~\citep{Xie} and subspace learning~\citep{Ji}.
Despite their widespread usage, the interpretation of lower-dimensional feature spaces produced by PCA and AE is not straightforward, because these feature spaces are different from the original feature space. In contrast to PCA and AE, feature selection allows for ready interpretability with the input features, by identifying and retaining a subset of important features directly from the original feature space~\citep{Guyon}. There exist various feature selection approaches. According to whether labels are used, they can be categorized as supervised~\citep{cheng2010fisher}, semi-supervised, and unsupervised methods~\citep{Alelyani}. Unsupervised approaches have potentially extensive applications, since they do not require labels that can be rare or expensive to obtain. A variety of techniques for unsupervised feature selection have been proposed, e.g., Laplacian score (LS)~\citep{He} and concrete autoencoders (CAE)~\citep{Abubakar}. While often used, the existing approaches may still exhibit suboptimal performance in downstream learning tasks on many datasets, which can be seen, e.g., in Table~\ref{table3}. There are two major reasons that cause such under-performance. First, the space to search for potentially important subsets of features in the absence of the guidance by labels is often very large, which renders unsupervised feature selection to be like finding a needle in a haystack. Second, it is necessary, yet challenging, to take account of the inter-feature interactions. Ideally, the selected features should be globally representative and as diverse as possible. If the selected features are all important yet highly correlated, they may be capable of representing only partial data, and thus they would hardly comprise a globally representative feature subset. 
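To make the contrast concrete, consider a toy illustration (hypothetical data and weights, shown in Python): feature extraction such as PCA returns linear combinations of all original features, so a derived coordinate has no direct meaning in the input space, whereas feature selection simply keeps a subset of the original columns, each of which retains its original interpretation.

```python
# A tiny data matrix: 3 samples, 4 features (columns).
X = [[1.0, 0.0, 5.0, 2.0],
     [2.0, 1.0, 4.0, 2.0],
     [3.0, 0.0, 3.0, 2.0]]

# Feature extraction: each new coordinate mixes all original features,
# so it no longer corresponds to any single input feature.
weights = [0.5, -0.1, 0.3, 0.2]  # one hypothetical projection direction
extracted = [sum(x * w for x, w in zip(row, weights)) for row in X]

# Feature selection: keep original columns (here features 0 and 2),
# so the retained values are directly interpretable.
selected_idx = [0, 2]
selected = [[row[i] for i in selected_idx] for row in X]

print(extracted)  # mixed values with no single-feature meaning
print(selected)   # [[1.0, 5.0], [2.0, 4.0], [3.0, 3.0]]
```

The selected matrix is literally a sub-matrix of $\mathbf{X}$, which is what makes selected features readily interpretable.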
For example, if a pixel of a natural image is important, then some neighboring ones are also likely to be so because of the typical spatial dependence in images; thus, to select diverse, salient features to represent the overall contents, if a pixel is important and selected, those neighboring pixels of high correlations with it should not be included in the feature subset. Existing unsupervised approaches have limited abilities to simultaneously explore the large search space for features that can represent the overall contents and take into account the diversity, which is reflected by the inter-correlation of features, thus leading to suboptimal performances. To overcome these difficulties, in this paper we propose a novel unsupervised feature selection framework, called fractal autoencoders (FAE). It trains a neural network (NN) to identify potentially informative features for representing the contents globally; simultaneously, it exploits a dependence sub-NN to select a subset locally from the globally informative features to examine their diversity, which is efficiently measured by their abilities to reconstruct the original data. In this way, the sub-NN enables FAE to effectively screen out the highly correlated features; the global AE component of FAE turns out to play a crucial role of regularization to stabilize the feature selecting process, aside from its standard role of feature extraction. With our new architecture, FAE merges feature selection and feature extraction into one model, facilitating the identification of a subset of the most representative and diverse input features. To illustrate the extensibility of the FAE framework, we use it to derive an $h$-Hierarchy FAE ($h$-HFAE) application to identify multiple subsets of important features. In summary, our main contributions include: \begin{itemize} \item We propose a novel framework, FAE, for feature selection to meet the challenges of existing methods.
It combines global exploration of representative features and local excavation of their diversity, thereby enhancing the generalization of the selected feature subset and accounting for inter-feature correlations. \item As our framework can be readily applied or extended to other tasks, we show an application to identify multiple hierarchical subsets of salient features simultaneously. \item We validate FAE with extensive experiments on fourteen real datasets. Although simple, it demonstrates state-of-the-art performance for reconstruction on many benchmarking datasets. It also yields superior performance in a downstream learning task of classification on most of the benchmarking datasets. As a biological application, FAE reduces gene expression measurements by about $15$\% compared with L1000 landmark genes. Further, FAE exhibits more stable performance than contemporary methods as the number of selected features varies. \end{itemize} The notations and definitions are given as follows. Let $n$, $m$, $k$, and $d$ be the numbers of samples, features, selected features, and reduced dimensions, respectively. Let $\mathbf{X}\in\mathbb{R}^{n\times m}$ be a matrix containing the input data. A bold capital letter such as $\mathbf{W}$ denotes a matrix; a bold lowercase letter such as $\mathbf{w}$ denotes a vector; Diag($\mathbf{w}$) represents a diagonal matrix with the diagonal $\mathbf{w}$; $\mathbf{w}^{\mathrm{max}_k}$ is an operation that keeps the $k$ largest entries of $\mathbf{w}$ while setting the other entries to $0$. $\|\cdot\|_{\mathrm{F}}$ denotes the Frobenius norm. The remainder of the paper is organized as follows. We first discuss the related work, then present our proposed approach, followed by extensive experiments. Finally, we apply FAE to identify multiple subsets of informative features. \section{Related Work} A variety of feature selection approaches have been proposed.
They are usually classified into four categories~\citep{Alelyani,Li}: filter methods, which are independent of learning models; wrapper methods, which rely on learning models for selection criteria; embedded approaches, which embed feature selection into learning models to also achieve model fitting simultaneously; and hybrid approaches, which combine more than one of the above three. Alternatively, the approaches are categorized as supervised, semi-supervised, and unsupervised methods according to whether label information is utilized. Unsupervised feature selection has potentially broad applications because it requires no label information~\citep{peng2016feature,peng2017nonnegative}; yet, it is also arguably more challenging due to the lack of labels to guide the identification of relevant features. In this paper, we focus on unsupervised feature selection and briefly review typical methods below. LS~\citep{He} is a filter method that uses the nearest neighbor graph to model the local geometric structures of the data. By selecting the features which are locally the smoothest on the graph, LS focuses on the local property yet neglects the global structure. SPEC~\citep{Zhao} is a filter method based on a general similarity matrix. It employs the spectrum of the graph to measure feature relevance and unifies supervised and unsupervised feature selection. Principal feature analysis (PFA)~\citep{Lu} utilizes the structure of the principal components of a set of features to select the subset of relevant features. It can be regarded as a wrapper method to optimize the PC coefficients, and it mainly focuses on globality. Multi-cluster feature selection (MCFS)~\citep{Cai} selects a subset of features to cover the multi-cluster structure of the data, where spectral analysis is used to find the inter-relationship between different features.
Unsupervised discriminative feature selection (UDFS)~\citep{Yang} incorporates the discriminative analysis and $\ell_{2,1}$ regularization to identify the most useful features. Nonnegative discriminative feature selection (NDFS)~\citep{Zechao} jointly learns the cluster labels and feature selection matrix to select discriminative features. It uses a nonnegative constraint on the class indicator to learn cluster labels and adopts an $\ell_{2,1}$ constraint to reduce the redundant or noisy features. Infinite feature selection (Inf-FS)~\citep{Roffo} implements feature selection by taking into account all the possible feature subsets as paths on a graph, and it is also a filter method. Recently, a few AE-based feature selection methods have been developed. Autoencoder feature selector (AEFS)~\citep{Han} combines autoencoders regression and $\ell_{2,1}$ regularization on the weights of the encoder to obtain a subset of useful features. It exploits both linear and nonlinear information in the features. Agnostic feature selection (AgnoS)~\citep{Doquet} adopts AE with explicit objective function regularizations, such as the $\ell_{2,1}$ norm on the weights of the first layer of AE (AgnoS-W), $\ell_{2,1}$ norm on the gradient of the encoder (AgnoS-G), and $\ell_1$ norm on the slack variables that constitute the first layer of AE (AgnoS-S), to implement feature selection. AgnoS-S is the best of the three, so in this study we will compare our approach with AgnoS-S. CAE~\citep{Abubakar} replaces the first hidden layer of AE with a \lq\lq concrete selector\rq\rq\,\,layer, which is the relaxation of a discrete distribution called concrete distribution~\citep{Maddison}, and then it picks the features with an extremely high probability of connecting to the nodes of the concrete selection layer. The parameters of this layer are estimated by the reparametrization trick~\citep{Kingma}. CAE reports superior performance over other competing methods. 
MCFS, UDFS, NDFS, AEFS, AgnoS, and CAE can all be regarded as embedded approaches. Though our proposed FAE model also embeds the feature selection into AE, which looks similar to AgnoS, AEFS, and CAE, it essentially differs from these existing methods: AEFS and AgnoS mainly depend on exploiting sparsity norm regularizations, such as $\ell_{2,1}$ and $\ell_{1}$, on the weights of AE to select features, which do not consider diversity; in contrast, FAE consists of two NNs, and it innovatively adopts a sub-NN to explicitly impose the desired diversity requirement on informative features. CAE adopts a probability distribution on the first layer of AE and selects features by their parameters. However, several neurons in the concrete selector layer may potentially select the same or redundant features, and the training requires that the average of the maximum probability of connecting to these neurons in the concrete selection layer exceed a pre-specified threshold close to $1$, which may be hard to attain for high-dimensional datasets; meanwhile, the second and third top features at different nodes of the concrete selector layer may be insignificant because of their trivial average probability. These potential drawbacks can limit the performance of CAE. In contrast, the proposed FAE does not depend on any probability distribution; rather, its sub-NN, with the guidance of the global-NN, directly pinpoints a subset of selected features, which makes FAE concise in architecture and easily applicable to different tasks. \section{Proposed Approach} In this section, we will present the architecture and formulation of FAE. \subsection{Overview of Our Approach} The architecture of our FAE approach is depicted in Figure~\ref{fig:02}. It enlists the AE architecture as a basic building block; yet, its structure is particularly tailored to feature selection. In the following, we will explain the architecture and its components in detail.
\begin{figure}\caption{The architecture of the proposed FAE.}\label{fig:02} \end{figure} \subsection{Formalization of Autoencoders} For AE, we formalize it as follows: \begin{equation}{\label{AE}} \displaystyle\min_{f,g}\|\mathbf{X}-f(g(\mathbf{X}))\|_{\mathrm{F}}^2, \end{equation} where $g$ is an encoder, and $f$ is a decoder. $g(\mathbf{X})$ embeds the input data into a latent space $\mathbb{R}^{n\times d}$, where $d$ is the dimension of the bottleneck layer of AE. Taking MNIST as an example, for $d=49$, we visualize the encoded samples in Figure~\ref{fig:01} (b). After being transformed, either nonlinearly or linearly, from the original space, the contents of each sample are not visually meaningful in the latent space. \subsection{Formalization of Unsupervised Feature Selection} Feature selection is to identify a subset of informative features in the original feature space, and it can be formalized as follows: \begin{equation}{\label{FS}} \displaystyle\min_{S^k,H}\|H(\mathbf{X}_{S^k})-\mathbf{X}\|_{\mathrm{F}}^2, \end{equation} where $S^k$ denotes the subset of the specified $k$ features, $\mathbf{X}_{S^k}$ is the derived data set from $\mathbf{X}$ based on $S^k$, and $H$ denotes a mapping from the space spanned by $\mathbf{X}_{S^k}$ to ${\mathbb{R}}^{n \times m}$ in the absence of information about labels. The optimization problem in (\ref{FS}) is NP-hard~\citep{Natarajan,Hamo}. This paper will develop an effective algorithm to approximate the solution of (\ref{FS}) for unsupervised feature selection. \subsection{Identification Autoencoders (IAE)} To perform feature selection in the original space, our first attempt is to add a simple one-to-one layer between the input and hidden layers of AE to weigh the importance of each input feature. It is also natural to exploit the sparsity property of $\ell_1$ regularization on the weights of this layer for feature selection, inspired by Lasso~\citep{Tibshirani}. Then, we have the following formulation: \[ \displaystyle\min_{\mathbf{W}_{\mathrm{I}},f,g}\|\mathbf{X}-f(g(\mathbf{X}\mathbf{W}_{\mathrm{I}}))\|_{\mathrm{F}}^2+\lambda_1\|\mathbf{W}_{\mathrm{I}}\|_{1},\,\, \mathrm{s.t.}\,\,\mathbf{W}_{\mathrm{I}}\geqslant 0, \] where $\mathbf{W}_{\mathrm{I}}=$ Diag($\mathbf{w}$), $\mathbf{w}\in{\mathbb{R}}^m$, and $\lambda_1$ is a parameter balancing between the reconstruction error and the sparsity regularization. The $\ell_{1}$ norm induces sparsity and shrinks the less important features' weights to $0$, and it may make the features more discriminative as well.
Here, we require that the entries of $\mathbf{W}_{\mathrm{I}}$ should be nonnegative since they represent the importance of the features and the non-negativity constraint would make their interpretation more meaningful~\citep{Xu}. The fully connected concrete layer~\citep{Abubakar} has taken a similar non-negativity constraint, albeit for a full matrix. For AE with such an additional one-to-one layer and the modified objective function, we call it an identification autoencoder (IAE) only for notational purpose. Actually, IAE is a general case of AgnoS-S~\citep{Doquet}, where it does not impose any constraint on the dimension of the bottleneck layer of AE. After training, the features corresponding to the $k$ largest entries of $\mathbf{W}_{\mathrm{I}}$ are selected as the most informative features. Compared with standard AE, IAE clearly increases no more than $m$ additional parameters. We may visualize the selected features by IAE in Figure~\ref{fig:01} (c) and (e). It is seen that IAE captures a part of key features from the original samples; however, it cannot capture other key features on the skeleton of the digits, and the selected features fail to recover the original contents, as shown in Figure~\ref{fig:01} (g). In general, the selected features by unsupervised feature selection are to be representative of the input data, implying that the selected features should reconstruct the original samples well. Thus, IAE cannot serve the purpose of feature selection in itself. Its failure is mainly due to the lack of diversity of its selected features. The $\ell_1$ regularization term in IAE may promote the sparsity of the feature weight vector; however, it cannot ensure a sufficient level of diversity needed by a representative subset of features. Because the features in real data often have significant inter-correlations and even redundancy, without properly taking account of them, the selected features would have high correlations yet lack necessary diversity. 
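For concreteness, the one-to-one scoring layer and the $\mathrm{max}_k$ truncation defined in the notation can be sketched as follows (an illustrative Python rendering, not the authors' implementation; the scores $\mathbf{w}$ would in practice be learned jointly with the autoencoder):

```python
def max_k(w, k):
    """Keep the k largest entries of the score vector w; zero out the rest."""
    keep = set(sorted(range(len(w)), key=lambda i: w[i], reverse=True)[:k])
    return [w[i] if i in keep else 0.0 for i in range(len(w))]

def score_layer(X, w):
    """One-to-one layer: multiply feature j of every sample by score w[j]."""
    return [[x * wi for x, wi in zip(row, w)] for row in X]

w = [0.9, 0.05, 0.5, 0.0]            # hypothetical learned feature scores
X = [[1.0, 2.0, 3.0, 4.0]]
print(max_k(w, 2))                   # [0.9, 0.0, 0.5, 0.0]
print(score_layer(X, max_k(w, 2)))   # [[0.9, 0.0, 1.5, 0.0]]
```

After truncation, only the $k$ top-scored features pass any signal through the layer, which is exactly the mechanism the sub-NN will exploit below.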
Directly computing the pairwise interactions of all features requires a full $m \times m$ weight matrix, which may be computationally costly for high-dimensional data. Accounting for higher-order interactions between features would require even higher complexities. To address this problem, we propose a simple yet effective approach by using a sub-NN to locally excavate for diversity information from the feature weights, thereby reducing the search space significantly. We will introduce this sub-network below, which leads to the architecture of FAE. \subsection{Fractal Autoencoders (FAE)} To remedy the diversity issue of IAE, we further design a sub-NN term, which requires that the subset of $k$ selected features from $\mathbf{W}_{\mathrm{I}}$ should be so diverse as to still represent the global contents of original samples as much as possible. Putting together, our proposed model is as follows: \begin{equation}{\label{FAE}} \begin{array}{l} \displaystyle\min_{\mathbf{W}_{\mathrm{I}},f,g}\|\mathbf{X}-f(g(\mathbf{X}\mathbf{W}_{\mathrm{I}}))\|_{\mathrm{F}}^2+\lambda_1\|\mathbf{X}-f(g(\mathbf{X}\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k}))\|_{\mathrm{F}}^2\\ \displaystyle+\lambda_2\|\mathbf{W}_{\mathrm{I}}\|_{1},\,\, \mathrm{s.t.}\,\,\mathbf{W}_{\mathrm{I}}\geqslant 0, \end{array} \end{equation} \iffalse \begin{equation}{\label{FAE1}} \displaystyle\min_{\mathbf{W}_{\mathrm{I}},\mathbf{W}_{\mathrm{E}},\mathbf{W}_{\mathrm{D}}}\|\mathbf{X}-((\mathbf{X}\mathbf{W}_{\mathrm{I}})\mathbf{W}_{\mathrm{E}})\mathbf{W}_{\mathrm{D}}\|_{\mathrm{F}}^2+\lambda_1\|\mathbf{X}-((\mathbf{X}\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k})\mathbf{W}_{\mathrm{E}})\mathbf{W}_{\mathrm{D}}\|_{\mathrm{F}}^2+\lambda_2\|\mathbf{W}_{\mathrm{I}}\|_{1}, \end{equation} \begin{strip} \vskip -0.3in \begin{align}{\label{FAE}} 
\displaystyle\min_{\mathbf{W}_{\mathrm{I}},f,g}\|\mathbf{X}-f(g(\mathbf{X}\mathbf{W}_{\mathrm{I}}))\|_{\mathrm{F}}^2+\lambda_1\|\mathbf{X}-f(g(\mathbf{X}\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k}))\|_{\mathrm{F}}^2+\lambda_2\|\mathbf{W}_{\mathrm{I}}\|_{1},\,\, \mathrm{s.t.}\,\,\mathbf{W}_{\mathrm{I}}\geqslant 0,
\end{align}
\vskip -0.25in
\end{strip}
\fi
where $\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k}=\mathrm{Diag}(\mathbf{w}^{\mathrm{max}_k})$, and $\lambda_1$ and $\lambda_2$ are nonnegative balancing parameters. We call the neural network corresponding to (\ref{FAE}) fractal autoencoders (FAE), owing to its seeming self-similarity: a small proportion of the features in the second term achieves reconstruction performance similar to that of the whole feature set in the first term. This characteristic manifests itself even more clearly when FAE is later applied to extract multiple subsets of features. In training, we solve for $\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k}$ by jointly optimizing the global-NN and the sub-NN. After training FAE, we obtain $\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k}$, which can be used to perform feature selection on new samples during testing. For 9 random samples from MNIST, we illustrate the selected features, the selected features superimposed on the original samples (for easy visualization), and the samples reconstructed from these features in (d), (f), and (h) of Figure~\ref{fig:01}, respectively.
\iffalse
Because we solve $\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k}$ by jointly optimizing the global-NN and sub-NN, if we re-optimize the selected features to reconstruct the original data, the reconstruction error is expected to be smaller than that from FAE, which is stated in the following Theorem~\ref{Better}. This theorem shows the potential benefit of re-optimization based on the selected features by FAE. It motivates the later use of the linear regression model on the selected features for the GEO dataset. 
Due to space constraint, the proof of Theorem~\ref{Better} and its related discussion are provided in the Supplementary Material.
\begin{theorem}{\label{Better}}
Let ($\mathbf{W}^{*}_{\mathrm{I}},f^*$,$g^*$) be an optimal solution of (\ref{FAE}). If $\mathrm{Pos}(\mathbf{W}^{*}_{\mathrm{I}}-(\mathbf{W}_{\mathrm{I}}^{*})^{\mathrm{max}_k})>0$, then there exists ($f^{*'}$,$g^{*'}$), such that
$$
\begin{array}{ll}
&\displaystyle \|\mathbf{X}-f^{*'}(g^{*'}(\mathbf{X}(\mathbf{W}_{\mathrm{I}}^{*})^{\mathrm{max}_k})\|_{\mathrm{F}}^2\\
\leqslant&\displaystyle\|\mathbf{X}-f^*(g^*(\mathbf{X}(\mathbf{W}_{\mathrm{I}}^{*})^{\mathrm{max}_k})\|_{\mathrm{F}}^2.
\end{array}
$$
\end{theorem}
\fi
\begin{figure}
\caption{(a) Testing samples randomly chosen from MNIST; (b) 49 features extracted by AE (size enlarged for visualization); (c) 50 features selected by IAE; (d) 50 features selected by FAE; (e) key features by IAE shown with original samples; (f) key features by FAE shown with original samples; (g) IAE's reconstruction based on the 50 features; (h) FAE's reconstruction based on the 50 features; (c)-(h) are best viewed enlarged.}
\label{fig:01}
\end{figure}
\iffalse
\begin{figure*}\label{fig:02}
\end{figure*}
\fi
\section{Experiments}
In this section, we will perform experiments to extensively assess FAE by comparing it with contemporary methods on many benchmarking datasets. 
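As a concrete point of reference for the experiments, the objective in (\ref{FAE}) with a linear encoder $\mathbf{W}_{\mathrm{E}}$ and decoder $\mathbf{W}_{\mathrm{D}}$ can be sketched in numpy as below. The defaults $\lambda_1=2$ and $\lambda_2=0.1$ follow our experimental setting, while the tie handling at the top-$k$ threshold and all names are our own simplifications:

```python
import numpy as np

def fae_loss(X, w, W_E, W_D, k, lam1=2.0, lam2=0.1):
    """Linear FAE objective (sketch): global reconstruction term,
    top-k sub-network term, and l1 penalty on the selection weights."""
    w = np.maximum(w, 0.0)                # nonnegativity constraint on W_I
    thresh = np.sort(w)[-k]               # k-th largest weight (ties all kept)
    w_k = np.where(w >= thresh, w, 0.0)   # diagonal of W_I^{max_k}
    recon = X - (X * w) @ W_E @ W_D       # f(g(X W_I)) with diagonal W_I
    recon_k = X - (X * w_k) @ W_E @ W_D   # f(g(X W_I^{max_k}))
    return (np.sum(recon ** 2)
            + lam1 * np.sum(recon_k ** 2)
            + lam2 * np.sum(np.abs(w)))
```

With a perfect reconstruction, the two fitting terms vanish and the loss reduces to $\lambda_2\|\mathbf{W}_{\mathrm{I}}\|_1$.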
\begin{table}[!htbp]
\centering
\resizebox{0.85\columnwidth}{!}{
{\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\begin{tabular}{ll!{\vrule width0.8pt}lll}
\toprule
&Dataset&\# Sample&\# Feature/\# Gene&\# Class\\
\midrule
1&Mice Protein & 1,080 & 77 & 8\\
2&COIL-20 & 1,440 & 400 & 20\\
3&Activity & 5,744 & 561 & 6\\
4&ISOLET & 7,797 & 617 & 26\\
5&MNIST & 10,000 & 784 & 10\\
6&MNIST-Fashion & 10,000 & 784 & 10\\
7&USPS & 9,298 & 256 & 10\\
8&GLIOMA & 50 & 4,434 & 4\\
9&leukemia & 72 & 7,070 & 2\\
10&pixraw10P & 100 & 10,000 & 10\\
11&Prostate$\_$GE & 102 & 5,966 & 2\\
12&warpAR10P & 130 & 2,400 & 10\\
13&SMK$\_$CAN$\_$187 & 187 & 19,993 & 2\\
14&arcene & 200 & 10,000 & 2\\
15&GEO & 111,009 & 10,463 & Null\\
\bottomrule
\end{tabular}}}
\caption{Statistics of datasets.}
\label{table1}
\end{table}
\subsection{Datasets to Be Used}
The benchmarking datasets used in this paper are Mice Protein Expression~\citep{UCI2}, COIL-20~\citep{COIL20}, Smartphone Dataset for Human Activity Recognition in Ambient Assisted Living~\citep{Anguita}, ISOLET~\citep{UCI1}, MNIST~\citep{LeCun}, MNIST-Fashion~\citep{MNISTFashion}, GEO{\footnote{Obtained from https://cbcl.ics.uci.edu/public$\_$data/D-GEX/~\citep{Chen}.}}, USPS, GLIOMA, leukemia, pixraw10P, Prostate$\_$GE, warpAR10P, SMK$\_$CAN$\_$187, and arcene{\footnote{The last eight datasets are from the scikit-feature feature selection repository~\citep{Li}.}}. We summarize the statistics of these datasets in Table~\ref{table1}. Following CAE~\citep{Abubakar} and considering the long runtime of UDFS, for MNIST and MNIST-Fashion, we randomly choose $6,000$ samples from each training set for training and validation and $4,000$ from each testing set for testing. These $6,000$ samples are then randomly split into training and validation sets at a ratio of $90:10$. 
For GEO, we randomly split the preprocessed data in the same way as D-GEX~\citep{Chen}: $88,807$ samples for training, $11,101$ for validation, and $11,101$ for testing{\footnote{\citet{Abubakar} stated that they used the same preprocessing scheme as D-GEX. We note that, though it has the same number of features, their dataset has a slightly different sample size ($112,171$) from ours and that in~\citep{Chen}.}}. The other datasets are randomly split into training, validation, and testing sets at a ratio of $72:8:20$.
\subsection{Design of Experiments}
In the experiments of FAE, we set the maximum number of epochs to $1,000$ for datasets 1-14 and $200$ for dataset 15. We initialize the weights of the feature selection layer by sampling uniformly from $\mathrm{U}[0.999999, 0.9999999]$ and those of the other layers with the Xavier normal initializer. We adopt the Adam optimizer~\citep{Kingma1} with an initial learning rate of $0.001$, and set $\lambda_1$ and $\lambda_2$ in \eqref{FAE} to $2$ and $0.1$, respectively. Hyper-parameters are chosen by a grid search on the validation set. In the following experiments, we use only the linear version of FAE for simplicity, that is, $g(\mathbf{X})=\mathbf{X}\mathbf{W}_{\mathrm{E}}$ with $\mathbf{W}_{\mathrm{E}}\in\mathbb{R}^{m\times k}$, and $f(g(\mathbf{X}))=(g(\mathbf{X}))\mathbf{W}_{\mathrm{D}}$ with $\mathbf{W}_{\mathrm{D}}\in\mathbb{R}^{k\times m}$. This simple, linear version of FAE can already achieve superior performance, as shown below. For the specified number of selected features $k$, we adopt two options: 1) We take $k=10$ for the Mice Protein dataset, $50$ for datasets $2$-$7$ following CAE~\citep{Abubakar}, and $64$ for the high-dimensional datasets $8$-$14$. All baseline methods use this option. 
2) For FAE, we additionally use fewer features, with $k=8$ for the Mice Protein dataset, $36$ for datasets $2$-$7$, and $50$ for datasets $8$-$14$, to further demonstrate its superior representational ability over competing methods. We set the dimension of the latent space to $k$ and denote FAE under these two options as Opt1 and Opt2, respectively. Two metrics are used for evaluating the models: 1) the reconstruction error, measured by the mean squared error (MSE); 2) the classification accuracy, obtained by passing the selected features to a downstream classifier, a viable means of benchmarking the quality of the selected subset of features. For a fair comparison, following CAE~\citep{Abubakar}, after selecting the features we train a linear regression model with no regularization to reconstruct the original features, and the resulting linear reconstruction error serves as the first metric{\footnote{For the reconstruction error only, it denotes the error from the second term of~\eqref{FAE}, that is, $\|\mathbf{X}-f(g(\mathbf{X}\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_k}))\|_{\mathrm{F}}^2$.}}. For the second metric, we use extremely randomized trees~\citep{Geurts} as the classifier. All experiments are implemented with Python 3.7.8, Tensorflow 1.14, and Keras 2.2.5. The code is available at https://github.com/xinxingwu-uk/FAE.
\subsection{Results on Fourteen Datasets}
The experimental results on reconstruction and classification with the features selected by different algorithms are reported in Tables~\ref{table2} and~\ref{table3}. For all results, we perform $5$ runs with random splits of each fixed dataset and report the means and standard errors. From Table~\ref{table2}, it is seen that FAE yields smaller reconstruction errors than the baseline methods on the majority of datasets, indicating its strong ability to represent the original data. 
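The linear reconstruction metric used here can be sketched as follows (a numpy sketch with ordinary least squares; function and variable names are ours):

```python
import numpy as np

def linear_reconstruction_mse(X_train, X_test, idx):
    """Fit an unregularized linear map from the selected feature columns idx
    back to all features on the training set; return the test MSE (sketch)."""
    S_train, S_test = X_train[:, idx], X_test[:, idx]
    B, *_ = np.linalg.lstsq(S_train, X_train, rcond=None)  # least squares
    return np.mean((X_test - S_test @ B) ** 2)
```

If the non-selected features are exact linear combinations of the selected ones, the error is (numerically) zero.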
From Table~\ref{table3}, it is evident that FAE exhibits consistently superior performance in the downstream classification task on most of the benchmarking datasets. \iffalse \begin{table*}[t]  \centering \resizebox{1.5\columnwidth}{!}{ {\setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular}{l!{\vrule width0.8pt}ccccccccc|cc} \toprule \multirow{2}*{Dataset}&\multirow{2}*{LS}&\multirow{2}*{SPEC}&\multirow{2}*{NDFS}&\multirow{2}*{AEFS}&\multirow{2}*{UDFS}&\multirow{2}*{MCFS}&\multirow{2}*{PFA}&\multirow{2}*{AgnoS-S}&\multirow{2}*{CAE}&\multicolumn{2}{c}{FAE}\\ & & & & & & & & & &Opt1 &Opt2\\ \midrule Mice Protein &0.603 &0.051 &0.041 &0.783 &0.867&0.695 &0.871&0.013 &0.372 &{\bf 0.012} &0.015\\ COIL-20 &0.126 &0.413 &0.134 &0.061 &0.116&0.085 &0.061&0.038 &0.093 &{\bf 0.010} &0.013 \\ Activity &0.139 &0.127 &144.353 &0.112 &0.173&0.170 &0.010&0.010 &0.108 &{\bf 0.004} &0.005\\ ISOLET &0.344 &0.119 &0.129 &0.301 &0.375&0.471 &0.316&0.042 &0.299 &{\bf 0.014} &0.016\\ MNIST &0.070 &0.068 &0.136 &0.033 &0.035&0.064 &0.051&0.051 &0.026 &{\bf 0.019} &0.025\\ MNIST-Fashion &0.128 &0.107 &0.127 &0.047 &0.133&0.096 &0.043&0.024 &0.041 &{\bf 0.019} &0.023\\ USPS &3.528 &1.120 &0.918 &0.025 &0.034&1.050 &0.022&0.017 &{\bf 0.010}&0.011 &0.021\\ GLIOMA &0.140 &0.210 &0.404 &0.060 &0.060&0.173 &0.055&{\bf0.054} &0.063 &{\bf 0.054} &0.189\\ leukemia &6.978 &8.127 &7.889 &7.751 &7.387 &20.016&8.568&7.426 &{\bf 4.442}&6.370 &6.850\\ pixraw10P &32.758 &0.524 &0.874 &{\bf 0.002}&0.007&0.073&0.003 &{\bf 0.002}&0.006 &0.004&{\bf 0.002}\\ ProstateGE &1.694 &0.605 &4.506 &0.280 &0.228 &1.929&0.180 &0.387 &{\bf 0.048} &0.208&0.058\\ warpAR10P &1.187 &0.320 &0.534 &0.034 &0.107&2.548 &0.035&0.031 &0.055 &0.028&{\bf 0.026}\\ SMK$\_$CAN$\_$187 &7.344 &0.118 &3.005 &0.102 &$\backslash$ &5.492&0.089 &0.096 &{\bf 0.077} &0.087&0.085\\ arcene &0.328 &0.045 &1335.029&0.025 &$\backslash$ &4.826&0.042 &0.031 &0.029 &{\bf 0.023}&{\bf 0.023}\\ \bottomrule \end{tabular}}} 
\caption{Linear reconstruction error with selected features by different algorithms. The \lq\lq$\backslash$\rq\rq\, mark denotes the case with prohibitive running time, where the algorithm ran for more than a week without getting a result and thus was stopped. } \label{table2} \end{table*} \begin{table*}[t]  \centering \resizebox{1.5\columnwidth}{!}{ {\setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular}{l!{\vrule width0.8pt}ccccccccc|cc} \toprule \multirow{2}*{Dataset}&\multirow{2}*{LS}&\multirow{2}*{SPEC}&\multirow{2}*{NDFS}&\multirow{2}*{AEFS}&\multirow{2}*{UDFS}&\multirow{2}*{MCFS}&\multirow{2}*{PFA}&\multirow{2}*{AgnoS-S}&\multirow{2}*{CAE}&\multicolumn{2}{c}{FAE}\\ & & & & & & & & & &Opt1 &Opt2\\ \midrule Mice Protein &0.134&0.245&0.083 &0.125 &0.139 &0.139&0.130 &0.519 &0.134 &{\bf 0.546}&0.440\\ COIL-20 &0.389&0.149&0.212 &0.580 &0.556 &0.635&0.642 &0.892 &0.586 &0.983&{\bf 0.986}\\ Activity &0.280&0.203&0.188 &0.240 &0.287 &0.295&0.364 &0.561 &0.420 &{\bf 0.911}&0.875\\ ISOLET &0.407&0.058&0.073 &0.576 &0.455 &0.522&0.622 &0.181 &0.685 &{\bf 0.898}&0.851\\ MNIST &0.646&0.114& \,\,\,\,\,\,0.138\,\,\,\,\,\, &0.690 &0.892 &0.807&0.852 &0.550 &0.906 &{\bf 0.921}&0.912 \\ MNIST-Fashion &0.517&0.276&0.138 &0.580 &0.547 &0.513&0.683 &0.791 &0.677 &{\bf 0.815}&0.788\\ USPS &0.363&0.463&0.119 &0.942 &0.944 &0.126&0.960 &0.956 &0.955 &{\bf 0.963}&0.962\\ GLIOMA &0.500&0.200&0.400 &0.800 &0.700 &0.400&0.800 &0.800 &0.500 &{\bf 0.900}&{\bf 0.900}\\ leukemia &0.533&0.467&0.467 &0.733 &0.800 &0.533&0.733 &0.667 &{\bf 0.867} &{\bf 0.867}&0.733\\ pixraw10P &0.500&0.100&0.250 &{\bf 1.000}&0.950 &0.100&{\bf 1.000}&{\bf 1.000} &{\bf 1.000}&{\bf 1.000}&{\bf 1.000}\\ ProstateGE &0.524&0.476&0.476 &0.857 &0.905 &0.571&0.905 &0.762 &0.857 &{\bf 0.952}&0.905\\ warpAR10P &0.077&0.077&0.192 &0.808 &0.577 &0.077&{\bf 0.923} &0.462 &0.692 &{\bf 0.923}&0.846\\ SMK$\_$CAN$\_$187 &0.579&0.658&0.421 &0.500 &$\backslash$&0.447&0.658 &0.658 &0.737 &{\bf 
0.763}&0.737\\ arcene &0.625&0.325&0.700 &0.750 &$\backslash$&0.625&0.775 &0.775 &0.775 &{\bf 0.850}&0.825\\ \midrule Average &0.434&0.272&0.276 &0.656 &0.646 &0.414&0.718 &0.684 &0.700 &{\bf 0.878}&0.840\\ \bottomrule \end{tabular}}} \caption{Classification accuracy with selected features by different algorithms. The mark \lq\lq$\backslash$\rq\rq\, is used similarly to Table~\ref{table2}.} \label{table3} \end{table*} \begin{table*}[!htbp]  \centering \resizebox{2.1\columnwidth}{!}{ {\setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular}{l!{\vrule width0.8pt}cccccccccc|cc} \toprule \multirow{2}*{{\Large Dataset}}&\multirow{2}*{{\Large LS}}&\multirow{2}*{{\Large SPEC}}&\multirow{2}*{{\Large NDFS}}&\multirow{2}*{{\Large AEFS}}&\multirow{2}*{{\Large UDFS}}&\multirow{2}*{{\Large MCFS}}&\multirow{2}*{{\Large PFA}}&\multirow{2}*{{\Large Inf-FS}}&\multirow{2}*{{\Large AgnoS-S}}&\multirow{2}*{{\Large CAE}}&\multicolumn{2}{c}{{\Large FAE}}\\ & & & & & & & & & & &Opt1 &Opt2\\ \midrule Mice Protein & 0.575$\pm$0.1177 & 1.324$\pm$1.7774 & 1.686$\pm$0.9749 & 0.020$\pm$0.0070 & {\bf 0.009$\pm$0.0056} & 16.513$\pm$30.3574 & 0.028$\pm$0.0017 & 0.443$\pm$0.0397 & 0.038$\pm$0.0137 & 0.032$\pm$0.0010 & 0.014$\pm$0.0050 & 0.015$\pm$0.0046\\ COIL-20 & 0.225$\pm$0.0353 & 0.711$\pm$0.6263 & 0.144$\pm$0.0157 & {\bf 0.011$\pm$0.0003} & 0.015$\pm$0.0011 & 2.890$\pm$2.4658 & 0.009$\pm$0.0003 & 0.134$\pm$0.0127 & 0.035$\pm$0.0088 & 0.011$\pm$0.0005 & 0.011$\pm$0.0008 & 0.013$\pm$0.0006 \\ Activity & 4166.517$\pm$776.2622 & 0.153$\pm$0.0482 & 284.396$\pm$137.3668 & 0.005$\pm$0.0002 & 0.004$\pm$0.0001 & 63.428$\pm$109.3454 & 0.005$\pm$0.0002 & 0.207$\pm$0.043 & 0.009$\pm$0.0014 & {\bf 0.004$\pm$0.0001} & 0.005$\pm$0.0005 & 0.005$\pm$0.0005\\ ISOLET &0.304$\pm$0.0473 & 0.104$\pm$0.0065 & 0.144$\pm$0.0052 & 0.016$\pm$0.0004 & 0.019$\pm$0.0011 & 0.154$\pm$0.0324 & 0.015$\pm$0.0002 & 0.099$\pm$0.0153 & 0.035$\pm$0.0048 & {\bf 0.013$\pm$0.0002} & 0.015$\pm$0.0003 & 
0.017$\pm$0.0002\\ MNIST & 0.305$\pm$0.0000 & 0.067$\pm$0.0005 & 0.134$\pm$0.0038 & 0.037$\pm$0.0016 & 0.029$\pm$0.0018 & 0.128$\pm$0.0025 & 0.028$\pm$0.0012 & 0.101$\pm$0.0031 & 0.055$\pm$0.0051 & {\bf 0.019$\pm$0.0002} & 0.019$\pm$0.0004 & 0.025$\pm$0.0006\\ MNIST-Fashion & 11.352$\pm$22.5123 & 0.109$\pm$0.0066 & 0.139$\pm$0.0099 & 0.023$\pm$0.0003 & 0.027$\pm$0.0025 & 0.458$\pm$0.6574 & 0.022$\pm$0.0004 & 0.105$\pm$0.0059 & 0.025$\pm$0.0013 & 0.019$\pm$0.0004 & {\bf 0.019$\pm$0.0001} & 0.022$\pm$0.0001\\ USPS & 2.986$\pm$0.7332 & 1.275$\pm$0.1552 & 1.072$\pm$0.0786 & 0.027$\pm$0.0028 & 0.032$\pm$0.0016 & 1.065$\pm$0.0740 & 0.018$\pm$0.0021 & 4.048 $\pm$0.6308& 0.017$\pm$0.0028 & 0.012$\pm$0.0007 & {\bf 0.011$\pm$0.0005} & 0.021$\pm$0.0012\\ GLIOMA &0.226$\pm$0.0333 & 0.259$\pm$0.0301 & 0.347$\pm$0.0442 & {\bf 0.065$\pm$0.0083} & 0.072$\pm$0.0043 & 0.249$\pm$0.0193 & 0.067$\pm$0.0112 & 0.211$\pm$0.0650 & 0.068$\pm$0.0122 & 0.066$\pm$0.0106 & 0.069$\pm$0.0095 & 0.141$\pm$0.0321\\ leukemia &10.652$\pm$5.7118 & 8.504$\pm$1.4051 & 12.680$\pm$3.9499 & 7.777$\pm$2.4148 & $\backslash$ & 14.059$\pm$4.0052 & 6.299$\pm$1.2145 & 12.311$\pm$2.7244 & {\bf 6.142$\pm$1.0737} & 397.766$\pm$736.0781 & 7.011$\pm$0.9403 & 9.068$\pm$3.5562\\ pixraw10P &31.138$\pm$29.7659 & 0.554$\pm$0.1547 & 0.645$\pm$0.4003 & 0.006$\pm$0.0024 & $\backslash$ & 0.163$\pm$0.0407 & 0.003$\pm$0.0007 & 1.357$\pm$0.6999 & 0.009$\pm$0.0035 & 0.013$\pm$0.0106 & 0.005$\pm$0.0040 & {\bf 0.002$\pm$0.0005}\\ ProstateGE &1.361$\pm$0.4222 & 0.404$\pm$0.3099 & 3.910$\pm$2.0535 & 0.242$\pm$0.0944 & $\backslash$ & 3.002$\pm$3.3570 & 0.142$\pm$0.0387 & 0.273 $\pm$0.0933 & 0.146$\pm$0.0264 & 0.202$\pm$0.1372 & 0.144$\pm$0.0390 & {\bf 0.068$\pm$0.0158}\\ warpAR10P &1.277$\pm$0.6057 & 4.734$\pm$3.9503 & 0.597$\pm$0.1184 & 0.039$\pm$0.0072 & 0.086$\pm$0.0288 & 1.077$\pm$0.3250 & 0.036$\pm$0.0047 & 3.677$\pm$1.1488 & 0.045$\pm$0.0064 & 0.074$\pm$0.0267 & 0.040$\pm$0.0050 & {\bf 0.033$\pm$0.0047}\\ SMK$\_$CAN$\_$187 
&5.865$\pm$1.0775 & 0.127$\pm$0.0204 & $\backslash$ & 0.114$\pm$0.0217 & $\backslash$ & 6.236$\pm$1.0024 & 0.110$\pm$0.0163 & 6.843$\pm$2.5290 & 0.102$\pm$0.0121 & 0.100$\pm$0.0149 & 0.105$\pm$0.0186 & {\bf 0.097$\pm$0.0214}\\ arcene &0.410$\pm$0.2496 & 0.045$\pm$0.0006 & 1493.348$\pm$267.3695 & 0.055$\pm$0.0426 & $\backslash$ & 1.856$\pm$0.6150 & 0.035$\pm$0.0091 & 478.260$\pm$205.6201 & 0.030$\pm$0.0121 & 0.027$\pm$0.0005 & 0.025$\pm$0.0009 & {\bf 0.023$\pm$0.0010}\\ \bottomrule \end{tabular}}} \caption{Linear reconstruction error with selected features by different algorithms. The \lq\lq $\backslash$\rq\rq\, mark denotes the case with prohibitive running time, where the algorithm ran for more than one week without getting the result and thus was stopped. } \label{table2} \end{table*} \begin{table*}[!htbp]  \centering \resizebox{2.0\columnwidth}{!}{ {\setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular}{l!{\vrule width0.8pt}cccccccccc|cc} \toprule \multirow{2}*{{\Large Dataset}}&\multirow{2}*{{\Large LS}}&\multirow{2}*{{\Large SPEC}}&\multirow{2}*{{\Large NDFS}}&\multirow{2}*{{\Large AEFS}}&\multirow{2}*{{\Large UDFS}}&\multirow{2}*{{\Large MCFS}}&\multirow{2}*{{\Large PFA}}&\multirow{2}*{{\Large Inf-FS}}&\multirow{2}*{{\Large AgnoS-S}}&\multirow{2}*{{\Large CAE}}&\multicolumn{2}{c}{{\Large FAE}}\\ & & & & & & & & & & &Opt1 &Opt2\\ \midrule Mice Protein &0.173$\pm$0.0404 & 0.137$\pm$0.0364 & 0.156$\pm$0.1052 & 0.885$\pm$0.0395 & {\bf 0.951$\pm$0.0402} & 0.174$\pm$0.0590 & 0.928$\pm$0.0285 & 0.193$\pm$0.0685 & 0.632$\pm$0.3718 & 0.669$\pm$0.0483 & 0.878$\pm$0.0733 & 0.786$\pm$0.1823\\ COIL-20 & 0.160$\pm$0.0336 & 0.164$\pm$0.0261 & 0.168$\pm$0.0386 & 0.993$\pm$0.0022 & 0.976$\pm$0.0230 & 0.105$\pm$0.0257 & 0.994$\pm$0.0040 & 0.344$\pm$0.0904 & 0.835$\pm$0.1453 & 0.988$\pm$0.0047 & {\bf 0.996$\pm$0.0026} & 0.990$\pm$0.0097\\ Activity & 0.290$\pm$0.0143 & 0.203$\pm$0.0113 & 0.182$\pm$0.0115 & 0.887$\pm$0.0173 & 0.917$\pm$0.0151 & 
0.216$\pm$0.0462 & 0.888$\pm$0.0144 & 0.241$\pm$0.0504 & 0.749$\pm$0.1066 & {\bf 0.919$\pm$0.0103} & 0.914$\pm$0.0113 & 0.888$\pm$0.0125\\ ISOLET &0.111$\pm$0.0067 & 0.034$\pm$0.0086 & 0.084$\pm$0.0156 & 0.829$\pm$0.0206 & 0.739$\pm$0.0575 & 0.060$\pm$0.0124 & 0.865$\pm$0.0134 & 0.140$\pm$0.0190 & 0.362$\pm$0.1701 & 0.879$\pm$0.0064 & {\bf 0.890$\pm$0.0059} & 0.870$\pm$0.0131\\ MNIST & 0.124$\pm$0.0104 & 0.112$\pm$0.0103 & 0.105$\pm$0.0191 & 0.802$\pm$0.0256 & 0.881$\pm$0.0159 & 0.130$\pm$0.0411 & 0.885$\pm$0.0198 & 0.132$\pm$0.0132 & 0.435$\pm$0.1564 & 0.925$\pm$0.0043 & {\bf 0.929$\pm$0.0065} & 0.908$\pm$0.0087 \\ MNIST-Fashion & 0.171$\pm$0.0278 & 0.272$\pm$0.0174 & 0.152$\pm$0.0452 & 0.794$\pm$0.0149 & 0.793$\pm$0.0122 & 0.163$\pm$0.0085 & 0.803$\pm$0.0105 & 0.188$\pm$0.0378 & 0.784$\pm$0.0128 & 0.823$\pm$0.0101 & {\bf 0.825$\pm$0.0062} & 0.808$\pm$0.0090\\ USPS & 0.342$\pm$0.0481 & 0.362$\pm$0.0692 & 0.106$\pm$0.0183 & 0.949$\pm$0.0069 & 0.945$\pm$0.0034 & 0.123$\pm$0.0335 & 0.957$\pm$0.0030 & 0.182 $\pm$0.0423 & 0.956$\pm$0.0035 & 0.962$\pm$0.0039 & {\bf 0.963$\pm$0.0017} & 0.962$\pm$0.0034\\ GLIOMA &0.460$\pm$0.1744 & 0.220$\pm$0.1600 & 0.340$\pm$0.0800 & 0.680$\pm$0.1327 & 0.760$\pm$0.2059 & 0.460$\pm$0.0800 & 0.660$\pm$0.0800 & 0.420$\pm$0.1166 & 0.620$\pm$0.0748 & 0.700$\pm$0.1673 & {\bf 0.760$\pm$0.0800} & 0.720$\pm$0.1600\\ leukemia &0.520$\pm$0.1222 & 0.587$\pm$0.0884 & 0.573$\pm$0.1373 & 0.760$\pm$0.1236 & $\backslash$ & 0.547$\pm$0.1067 & 0.720$\pm$0.1485 & 0.560$\pm$0.0680 & 0.720$\pm$0.1485 & 0.653$\pm$0.1293 & 0.800$\pm$0.1116 & {\bf 0.800$\pm$0.0596}\\ pixraw10P &0.510$\pm$0.0860 & 0.090$\pm$0.0583 & 0.190$\pm$0.0800 & {\bf 1.000$\pm$0.0000} & $\backslash$ & 0.110$\pm$0.0583 & {\bf 1.000$\pm$0.0000} & 0.410$\pm$0.2354 & {\bf 1.000$\pm$0.0000} & 0.990$\pm$0.0200 & {\bf 1.000$\pm$0.0000} & {\bf 1.000$\pm$0.0000}\\ ProstateGE &0.438$\pm$0.0923 & 0.552$\pm$0.1367 & 0.562$\pm$0.1741 & 0.857$\pm$0.0852 & $\backslash$ & 0.467$\pm$0.1102 & 
0.810$\pm$0.0999 & 0.571 $\pm$0.1565 & 0.819$\pm$0.0467 & 0.781$\pm$0.0981 & 0.829$\pm$0.0571 & {\bf 0.857$\pm$0.0426}\\ warpAR10P &0.115$\pm$0.0544 & 0.077$\pm$0.0487 & 0.123$\pm$0.0705 & 0.769$\pm$0.0544 & 0.646$\pm$0.0890 & 0.154$\pm$0.0544 & {\bf 0.823$\pm$0.0671} & 0.108$\pm$0.0377 & 0.715$\pm$0.0713 & 0.700$\pm$0.1071 & 0.715$\pm$0.0308 & 0.638$\pm$0.0792\\ SMK$\_$CAN$\_$187 &0.553$\pm$0.0706 & 0.658$\pm$0.0686 & $\backslash$ & 0.611$\pm$0.0453 & $\backslash$ & 0.505$\pm$0.0421 & 0.700$\pm$0.0268 & 0.474$\pm$0.0288 & 0.716$\pm$0.0562 & 0.695$\pm$0.0825 & {\bf 0.721$\pm$0.0657} & 0.711$\pm$0.0849\\ arcene &0.595$\pm$0.0696 & 0.515$\pm$0.0982 & 0.575$\pm$0.0725 & 0.810$\pm$0.0515 & $\backslash$ & 0.565$\pm$0.0464 & 0.800$\pm$0.0570 & 0.595$\pm$0.0534 & 0.800$\pm$0.0447 & 0.745$\pm$0.0886 & {\bf 0.840$\pm$0.0663} & 0.800$\pm$0.0447\\ \bottomrule \end{tabular}}} \caption{Classification accuracy with selected features by different algorithms. The mark \lq\lq $\backslash$ \rq\rq\, is used similarly to Table~\ref{table2}.} \label{table3} \end{table*} \begin{table*}[!htbp]  \centering \resizebox{2.1\columnwidth}{!}{ {\setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular}{l!{\vrule width0.8pt}cccccccccc|cc} \toprule \multirow{2}*{{\Large Dataset}}&\multirow{2}*{{\Large LS}}&\multirow{2}*{{\Large SPEC}}&\multirow{2}*{{\Large NDFS}}&\multirow{2}*{{\Large AEFS}}&\multirow{2}*{{\Large UDFS}}&\multirow{2}*{{\Large MCFS}}&\multirow{2}*{{\Large PFA}}&\multirow{2}*{{\Large Inf-FS}}&\multirow{2}*{{\Large AgnoS-S}}&\multirow{2}*{{\Large CAE}}&\multicolumn{2}{c}{{\Large FAE}}\\ & & & & & & & & & & &Opt1 &Opt2\\ \midrule Mice Protein & .575$\pm$.118 & 1.324$\pm$1.777 & 1.686$\pm$0.975 & .020$\pm$.007 & {\bf .009$\pm$.006} & 16.51$\pm$30.36 & .028$\pm$.002 & .443$\pm$.040 & .038$\pm$.014 & .032$\pm$.001 & .014$\pm$.005 & .015$\pm$.005\\ COIL-20 & .225$\pm$.035 & .711$\pm$.626 & .144$\pm$.016 & {\bf .011$\pm$.000} & .015$\pm$.001 & 2.890$\pm$2.466 & 
.009$\pm$.000 & .134$\pm$.013 & .035$\pm$.009 & .011$\pm$.001 & .011$\pm$.001 & .013$\pm$.001 \\ Activity & 4166.5$\pm$776.3 & .153$\pm$.048 & 284.4$\pm$137.4 & .005$\pm$.000 & .004$\pm$.000 & 63.4$\pm$109.3 & .005$\pm$.000 & .207$\pm$.043 & .009$\pm$.001 & {\bf .004$\pm$.000} & .005$\pm$.001 & .005$\pm$.001\\ ISOLET &.304$\pm$.047 & .104$\pm$.007 & .144$\pm$.005 & .016$\pm$.000 & .019$\pm$.001 & .154$\pm$.032 & .015$\pm$.000 & .099$\pm$.015 & .035$\pm$.005 & {\bf .013$\pm$.000} & .015$\pm$.000 & .017$\pm$.000\\ MNIST & .305$\pm$.000 & .067$\pm$.001 & .134$\pm$.004 & .037$\pm$.002 & .029$\pm$.002 & .128$\pm$.003 & .028$\pm$.001 & .101$\pm$.003 & .055$\pm$0.005 & {\bf .019$\pm$.000} & .019$\pm$.000 & .025$\pm$.001\\ MNIST-Fashion & 1.1e1$\pm$2.3e1 & .109$\pm$.007 & .139$\pm$.010 & .023$\pm$.000 & .027$\pm$.003 & .458$\pm$.657 & .022$\pm$.000 & .105$\pm$.006 & .025$\pm$.001 & .019$\pm$.000 & {\bf .019$\pm$.000} & .022$\pm$.000\\ USPS & 2.986$\pm$.733 & 1.275$\pm$.155 & 1.072$\pm$.079 & .027$\pm$.003 & .032$\pm$.002 & 1.065$\pm$.074 & .018$\pm$.002 & 4.048 $\pm$.631& .017$\pm$.003 & .012$\pm$.001 & {\bf .011$\pm$.001} & .021$\pm$.001\\ GLIOMA &.226$\pm$.033 & .259$\pm$.030 & .347$\pm$.044 & {\bf .065$\pm$.008} & .072$\pm$.004 & .249$\pm$.019 & .067$\pm$.011 & .211$\pm$0.0650 & .068$\pm$.012 & .066$\pm$.011 & .069$\pm$.010 & .141$\pm$.032\\ leukemia &10.652$\pm$5.712 & 8.504$\pm$1.405 & 12.680$\pm$3.950 & 7.777$\pm$2.415 & $\backslash$ & 14.059$\pm$4.005 & 6.299$\pm$1.215 & 12.311$\pm$2.724 & {\bf 6.142$\pm$1.074} & 397.8$\pm$736.1 & 7.011$\pm$.940 & 9.068$\pm$3.556\\ pixraw10P & 31.1$\pm$29.8 & .554$\pm$.155 & .645$\pm$.400 & .006$\pm$.002 & $\backslash$ & .163$\pm$.041 & .003$\pm$.001 & 1.357$\pm$.700 & .009$\pm$.004 & .013$\pm$.011 & .005$\pm$.004 & {\bf .002$\pm$.001}\\ ProstateGE &1.361$\pm$.422 & .404$\pm$.310 & 3.910$\pm$2.054 & .242$\pm$.094 & $\backslash$ & 3.002$\pm$3.357 & .142$\pm$.039 & .273 $\pm$.093 & .146$\pm$.026 & .202$\pm$.137 & .144$\pm$.039 & {\bf 
.068$\pm$.016}\\ warpAR10P &1.277$\pm$.606 & 4.734$\pm$3.950 & .597$\pm$.118 & .039$\pm$.007 & .086$\pm$.029 & 1.077$\pm$.325 & .036$\pm$.005 & 3.677$\pm$1.149 & .045$\pm$.006 & .074$\pm$.027 & .040$\pm$.005 & {\bf .033$\pm$.005}\\ SMK$\_$CAN$\_$187 &5.865$\pm$1.078 & .127$\pm$.020 & $\backslash$ & .114$\pm$.022 & $\backslash$ & 6.236$\pm$1.002 & .110$\pm$.016 & 6.843$\pm$2.529 & .102$\pm$.012 & .100$\pm$.015 & .105$\pm$.019 & {\bf .097$\pm$.021}\\ arcene &.410$\pm$.250 & .045$\pm$.001 & 1493.3$\pm$267.4 & .055$\pm$.043 & $\backslash$ & 1.856$\pm$.615 & .035$\pm$.009 &478.3$\pm$205.6& .030$\pm$.012 & .027$\pm$.001 & .025$\pm$.001 & {\bf .023$\pm$.001}\\ \bottomrule \end{tabular}}} \caption{Linear reconstruction error with selected features by different algorithms. The \lq\lq $\backslash$\rq\rq\, mark denotes the case with prohibitive running time, where the algorithm ran for more than one week without getting the result and thus was stopped. } \label{table2} \end{table*} \fi \begin{table*}[!htbp]  \centering \resizebox{1.9\columnwidth}{!}{ {\setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular}{l!{\vrule width0.8pt}cccccccccc|cc} \toprule \multirow{2}*{{\Large Dataset}}&\multirow{2}*{{\Large LS}}&\multirow{2}*{{\Large SPEC}}&\multirow{2}*{{\Large NDFS}}&\multirow{2}*{{\Large AEFS}}&\multirow{2}*{{\Large UDFS}}&\multirow{2}*{{\Large MCFS}}&\multirow{2}*{{\Large PFA}}&\multirow{2}*{{\Large Inf-FS}}&\multirow{2}*{{\Large AgnoS-S}}&\multirow{2}*{{\Large CAE}}&\multicolumn{2}{c}{{\Large FAE}}\\ & & & & & & & & & & &Opt1 &Opt2\\ \midrule Mice Protein & .575$\pm$.118 & 1.32$\pm$1.78 & 1.69$\pm$0.98 & .020$\pm$.007 & {\bf .009$\pm$.006} & 16.5$\pm$30.4 & .028$\pm$.002 & .443$\pm$.040 & .038$\pm$.014 & .032$\pm$.001 & .014$\pm$.005 & .015$\pm$.005\\ COIL-20 & .225$\pm$.035 & .711$\pm$.626 & .144$\pm$.016 & .011$\pm$.0 & .015$\pm$.001 & 2.89$\pm$2.47 & {\bf .009$\pm$.0} & .134$\pm$.013 & .035$\pm$.009 & .011$\pm$.001 & .011$\pm$.001 & .013$\pm$.001 \\ 
Activity & 4166$\pm$776 & .153$\pm$.048 & 284$\pm$137 & .005$\pm$.0 & {\bf .004$\pm$.0} & 63.4$\pm$109.3 & .005$\pm$.0 & .207$\pm$.043 & .009$\pm$.001 & {\bf .004$\pm$.0} & .005$\pm$.001 & .005$\pm$.001\\ ISOLET &.304$\pm$.047 & .104$\pm$.007 & .144$\pm$.005 & .016$\pm$.0 & .019$\pm$.001 & .154$\pm$.032 & .015$\pm$.0 & .099$\pm$.015 & .035$\pm$.005 & {\bf .013$\pm$.0} & .015$\pm$.0 & .017$\pm$.0\\ MNIST & .305$\pm$.0 & .067$\pm$.001 & .134$\pm$.004 & .037$\pm$.002 & .029$\pm$.002 & .128$\pm$.003 & .028$\pm$.001 & .101$\pm$.003 & .055$\pm$.005 & {\bf .019$\pm$.0} & {\bf .019$\pm$.0} & .025$\pm$.001\\ MNIST-Fashion & 11.4$\pm$22.5 & .109$\pm$.007 & .139$\pm$.010 & .023$\pm$.0 & .027$\pm$.003 & .458$\pm$.657 & .022$\pm$.0 & .105$\pm$.006 & .025$\pm$.001 & {\bf .019$\pm$.0} & {\bf .019$\pm$.0} & .022$\pm$.0\\ USPS & 2.99$\pm$.73 & 1.28$\pm$.16 & 1.07$\pm$.08 & .027$\pm$.003 & .032$\pm$.002 & 1.07$\pm$.07 & .018$\pm$.002 & 4.05 $\pm$.63& .017$\pm$.003 & .012$\pm$.001 & {\bf .011$\pm$.001} & .021$\pm$.001\\ GLIOMA &.226$\pm$.033 & .259$\pm$.030 & .347$\pm$.044 & {\bf .065$\pm$.008} & .072$\pm$.004 & .249$\pm$.019 & .067$\pm$.011 & .211$\pm$.065 & .068$\pm$.012 & .066$\pm$.011 & .069$\pm$.010 & .141$\pm$.032\\ leukemia &10.7$\pm$5.7 & 8.50$\pm$1.41 & 12.7$\pm$4.0 & 7.78$\pm$2.42 & $\backslash$ & 14.1$\pm$4.0 & 6.30$\pm$1.22 & 12.3$\pm$2.7 & {\bf 6.14$\pm$1.07} & 398$\pm$736 & 7.01$\pm$.94 & 9.07$\pm$3.56\\ pixraw10P & 31.1$\pm$29.8 & .554$\pm$.155 & .645$\pm$.400 & .006$\pm$.002 & $\backslash$ & .163$\pm$.041 & .003$\pm$.001 & 1.357$\pm$.700 & .009$\pm$.004 & .013$\pm$.011 & .005$\pm$.004 & {\bf .002$\pm$.001}\\ ProstateGE &1.36$\pm$.42 & .404$\pm$.310 & 3.91$\pm$2.05 & .242$\pm$.094 & $\backslash$ & 3.00$\pm$3.36 & .142$\pm$.039 & .273 $\pm$.093 & .146$\pm$.026 & .202$\pm$.137 & .144$\pm$.039 & {\bf .068$\pm$.016}\\ warpAR10P &1.28$\pm$.61 & 4.73$\pm$3.95 & .597$\pm$.118 & .039$\pm$.007 & .086$\pm$.029 & 1.08$\pm$.33 & .036$\pm$.005 & 3.68$\pm$1.15 & .045$\pm$.006 & 
.074$\pm$.027 & .040$\pm$.005 & {\bf .033$\pm$.005}\\ SMK$\_$CAN$\_$187 &5.87$\pm$1.08 & .127$\pm$.020 & $3.52\pm.62$ & .114$\pm$.022 & $\backslash$ & 6.24$\pm$1.00 & .110$\pm$.016 & 6.84$\pm$2.53 & .102$\pm$.012 & .100$\pm$.015 & .105$\pm$.019 & {\bf .097$\pm$.021}\\ arcene &.410$\pm$.250 & .045$\pm$.001 & 1493$\pm$267 & .055$\pm$.043 & $\backslash$ & 1.86$\pm$.62 & .035$\pm$.009 &478$\pm$205& .030$\pm$.012 & .027$\pm$.001 & .025$\pm$.001 & {\bf .023$\pm$.001}\\ \bottomrule \end{tabular}}} \caption{Linear reconstruction error with selected features by different algorithms. The \lq\lq $\backslash$\rq\rq\, mark denotes the case with prohibitive running time, where the algorithm ran for more than one week without getting the result and thus was stopped. } \label{table2} \end{table*} \begin{table*}[!htbp]  \centering \resizebox{1.90\columnwidth}{!}{ {\setlength{\aboverulesep}{0pt} \setlength{\belowrulesep}{0pt} \begin{tabular}{l!{\vrule width0.8pt}cccccccccc|cc} \toprule \multirow{2}*{{\Large Dataset}}&\multirow{2}*{{\Large LS}}&\multirow{2}*{{\Large SPEC}}&\multirow{2}*{{\Large NDFS}}&\multirow{2}*{{\Large AEFS}}&\multirow{2}*{{\Large UDFS}}&\multirow{2}*{{\Large MCFS}}&\multirow{2}*{{\Large PFA}}&\multirow{2}*{{\Large Inf-FS}}&\multirow{2}*{{\Large AgnoS-S}}&\multirow{2}*{{\Large CAE}}&\multicolumn{2}{c}{{\Large FAE}}\\ & & & & & & & & & & &Opt1 &Opt2\\ \midrule Mice Protein &17.3$\pm$4.0 & 13.7$\pm$3.6 & 15.6$\pm$10.5 & 88.5$\pm$4.0 & {\bf 95.1$\pm$4.0} & 17.4$\pm$5.9 & 92.8$\pm$2.9 & 19.3$\pm$6.9 & 63.2$\pm$37.2 & 66.9$\pm$4.8 & 87.8$\pm$7.3 & 78.6$\pm$18.2\\ COIL-20 & 16.0$\pm$3.4 & 16.4$\pm$2.6 & 16.8$\pm$3.9 & 99.3$\pm$.2 & 97.6$\pm$2.3 & 10.5$\pm$2.6 & 99.4$\pm$.4 & 34.4$\pm$9.0 & 83.5$\pm$14.5 & 98.8$\pm$.5 & {\bf 99.6$\pm$.3} & 99.0$\pm$1.0\\ Activity & 29.0$\pm$1.4 & 20.3$\pm$1.1 & 18.2$\pm$1.2 & 88.7$\pm$1.7 & 91.7$\pm$1.5 & 21.6$\pm$4.6 & 88.8$\pm$1.4 & 24.1$\pm$5.0 & 74.9$\pm$10.7 & {\bf 91.9$\pm$1.0} & 91.4$\pm$1.1 & 88.8$\pm$1.3\\ ISOLET &11.1$\pm$.7 & 
3.4$\pm$.9 & 8.4$\pm$1.6 & 82.9$\pm$2.1 & 73.9$\pm$5.8 & 6.0$\pm$1.2 & 86.5$\pm$1.3 & 14.0$\pm$1.9 & 36.2$\pm$17.0 & 87.9$\pm$.6 & {\bf 89.0$\pm$.6} & 87.0$\pm$1.3\\ MNIST & 12.4$\pm$1.0 & 11.2$\pm$1.0 & 10.5$\pm$1.9 & 80.2$\pm$2.6 & 88.1$\pm$1.6 & 13.0$\pm$4.1 & 88.5$\pm$2.0 & 13.2$\pm$1.3 & 43.5$\pm$15.6 & 92.5$\pm$.4 & {\bf 92.9$\pm$.7} & 90.8$\pm$.9 \\ MNIST-Fashion & 17.1$\pm$2.8 & 27.2$\pm$1.7 & 15.2$\pm$4.5 & 79.4$\pm$1.5 & 79.3$\pm$1.2 & 16.3$\pm$.9 & 80.3$\pm$10.5 & 18.8$\pm$3.8 & 78.4$\pm$1.3 & 82.3$\pm$1.0 & {\bf 82.5$\pm$.6} & 80.8$\pm$.9\\ USPS & 34.2$\pm$4.8 & 36.2$\pm$6.9 & 10.6$\pm$1.8 & 94.9$\pm$.7 & 94.5$\pm$.3 & 12.3$\pm$3.4 & 95.7$\pm$.3 & 18.2$\pm$4.2 & 95.6$\pm$.4 & 96.2$\pm$.4 & {\bf 96.3$\pm$.2} & 96.2$\pm$.3\\ GLIOMA &46.0$\pm$17.4 & 22.0$\pm$16.0 & 34.0$\pm$8.0 & 68.0$\pm$13.3 & 76.0$\pm$20.6 & 46.0$\pm$8.0 & 66.0$\pm$8.0 & 42.0$\pm$11.7 & 62.0$\pm$7.5 & 70.0$\pm$16.7 & {\bf 76.0$\pm$8.0} & 72.0$\pm$16.0\\ leukemia &52.0$\pm$12.2 & 58.7$\pm$8.8 & 57.3$\pm$13.7 & 76.0$\pm$12.4 & $\backslash$ & 54.7$\pm$10.7 & 72.0$\pm$14.9 & 56.0$\pm$6.8 & 72.0$\pm$14.9 & 65.3$\pm$12.9 & 80.0$\pm$11.2 & {\bf 80.0$\pm$6.0}\\ pixraw10P &51.0$\pm$8.6 & 9.0$\pm$5.8 & 19.0$\pm$8.0 & {\bf 100.0$\pm$0.0} & $\backslash$ & 11.0$\pm$5.8 & {\bf 100.0$\pm$0.0} & 41.0$\pm$23.5 & {\bf 100.0$\pm$0.0} & 99.0$\pm$2.0 & {\bf 100.0$\pm$0.0} & {\bf 100.0$\pm$0.0}\\ ProstateGE &43.8$\pm$9.2 & 55.2$\pm$13.7 & 56.2$\pm$17.4 & 85.7$\pm$8.5 & $\backslash$ & 46.7$\pm$11.0 & 81.0$\pm$10.0 & 57.1 $\pm$15.7 & 81.9$\pm$4.7 & 78.1$\pm$9.8 & 82.9$\pm$5.7 & {\bf 85.7$\pm$4.2}\\ warpAR10P &11.5$\pm$5.4 & 7.7$\pm$4.9 & 12.3$\pm$7.1 & 76.9$\pm$5.4 & 64.6$\pm$8.9 & 15.4$\pm$5.4 & {\bf 82.3$\pm$6.7} & 10.8$\pm$3.8 & 71.5$\pm$7.1 & 70.0$\pm$10.7 & 71.5$\pm$3.1 & 63.8$\pm$7.9\\ SMK$\_$CAN$\_$187 &55.3$\pm$7.1 & 65.8$\pm$6.9 & $53.2\pm14.6$ & 61.1$\pm$4.5 & $\backslash$ & 50.5$\pm$4.2 & 70.0$\pm$2.7 & 47.4$\pm$2.9 & 71.6$\pm$5.6 & 69.5$\pm$8.3 & {\bf 72.1$\pm$6.6} & 71.1$\pm$8.5\\ arcene 
&59.5$\pm$7.0 & 51.5$\pm$9.8 & 57.5$\pm$7.3 & 81.0$\pm$5.2 & $\backslash$ & 56.5$\pm$4.6 & 80.0$\pm$5.7 & 59.5$\pm$5.3 & 80.0$\pm$4.5 & 74.5$\pm$8.9 & {\bf 84.0$\pm$6.6} & 80.0$\pm$4.5\\ \bottomrule \end{tabular}}} \caption{Classification accuracy (\%) with selected features by different algorithms. The mark \lq\lq $\backslash$ \rq\rq\, is used similarly to Table~\ref{table2}.} \label{table3} \end{table*} Further, we compare the behaviors of FAE with respect to $k$ with those of the baseline algorithms. By varying $k$ on ISOLET and arcene, we obtain the corresponding linear reconstruction errors and classification accuracies. We plot the results in Figure~\ref{fig:0712}{\footnote{For better visualization, we ignore the algorithms with large linear reconstruction errors.}}. The results show that FAE performs better and more stable than other algorithms in most cases. \iffalse \begin{figure} \caption{Reconstruction and classification results versus $k$ on ISOLET.} \label{fig:071} \end{figure} \begin{figure} \caption{Reconstruction and classification results versus $k$ on arcene.} \label{fig:072} \end{figure} \fi \begin{figure} \caption{Reconstruction and classification results versus $k$.} \label{fig:0712} \end{figure} \subsubsection{Feature Importance} To examine the importance of the features selected by FAE, we rank and partition them into two equal groups, that is, each group has $25$ features. The results are shown in Figure~\ref{fig:featurerank}. We can observe that, the classification accuracy of the first group is generally better than the second group. However, since FAE is unsupervised, some selected features that are essential for reconstruction might not be important for classification. \begin{figure}\label{fig:featurerank} \end{figure} \subsection{Computational Complexity} Experimentally, the computational time of our algorithm (\ref{FAE}) is about twice that of sparse AE. 
FAE has only an additional sub-NN compared to sparse AE and shares parameters with the global-NN. Also, the fitting error term of the sub-NN is quadratic and similar to sparse AE's fitting error term. Thus, the overall computational complexity of FAE is of the same order as sparse AE. \subsection{Analysis of L1000 Gene Expression} It is expensive to measure all gene expressions. To reduce the cost, researchers from the LINCS program{\footnote{See http://www.lincsproject.org/LINCS/}} have found that a carefully selected set of genes can capture most gene expression information of the entire human transcriptome because the expression of genes is usually correlated under different conditions. Based on the selected genes, a linear regression model was used to infer the gene expression values of the remaining genes \citep{Chen}. Recently, \citet{Abubakar} have used CAE to further reduce the number of genes to 750 while achieving a linear reconstruction error (about $0.3$) similar to that of the original $943$ landmark genes of L1000. \begin{figure} \caption{Gene selection by using FAE for GEO. (a) Reconstruction error by FAE; (b) Reconstruction error by using the linear regression model on L1000 landmark genes and FAE-selected genes.} \label{fig:genes} \end{figure} Now we apply FAE on the preprocessed GEO to select varying numbers of representative genes from $500$ to $943$. Figure~\ref{fig:genes} (a) shows that, by using $600$ genes, FAE achieves a reconstruction error better than that with $750$ selected genes by CAE. However, CAE uses a slightly different number of samples from ours. For consistency, we mainly compare FAE with L1000. We compute the reconstruction error by using the linear regression model on the genes selected by FAE and the landmark genes of L1000, and the results in MSE are depicted in Figure~\ref{fig:genes} (b). Evidently, using $800$ genes selected by FAE achieves a reconstruction error similar to that of L1000. Thus, FAE reduces the number of genes by about $15$\% compared to L1000.
The selected genes by FAE are displayed in Figure~\ref{FAEselectedgene}. It is observed that, with different numbers of selected genes, a few genes are sometimes selected and sometimes not, which may be attributed to the significant correlations among genes. In addition, when selecting the same number of 943 genes, only $90$ genes selected by CAE are among the landmark genes, while $121$ by FAE are among the landmark genes. These results indicate that L1000 landmark genes can be significantly enhanced in representation power. \begin{figure} \caption{Comparison of different numbers of selected genes with 943 landmark genes. The white lines denote those selected genes by FAE. The purple dashed line separates $943$ landmark genes (color coded in yellow) from the other genes. The purple numbers $72$, $72$, $84$, $108$, $140$, and $121$ denote respectively the numbers of overlapping genes between the landmark genes and those selected by FAE with $k$ being 500, 600, 700, 800, 900, and 943.} \label{FAEselectedgene} \end{figure} \section{An Application of FAE} FAE is applicable and easily extensible to different tasks. Here we show an application of exploiting multiple hierarchical subsets of the key features. \subsection{$h$-HFAE} In an image, many pixels are usually highly correlated with each other.
Thus, the subsets of key features might not be unique; indeed, there often exists more than one subset of informative features that can recover the original data well. Excavating these potential subsets of meaningful features is conducive to facilitating data compression~\citep{Sousa} and to better understanding the structure and inter-relationships of the features. Yet, almost all existing feature selection approaches have little ability to explore these potential subsets. To achieve such an ability, we develop an application in the framework of FAE, which selects multiple non-overlapping subsets of representative features. For clarity, we formalize it as follows: \begin{equation}{\label{HFAE}} \begin{array}{l} \displaystyle\min_{\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_{k,i}},\mathbf{W}_{\mathrm{E}},\mathbf{W}_{\mathrm{D}}}\|\mathbf{X}-((\mathbf{X}\mathbf{W}_{\mathrm{I}})\mathbf{W}_{\mathrm{E}})\mathbf{W}_{\mathrm{D}}\|_{\mathrm{F}}^2+\lambda_{0}\|\mathbf{W}_{\mathrm{I}}\|_{1}\\ +\sum_{i=1}^{h}\lambda_i\|\mathbf{X}-((\mathbf{X}\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_{k,i}})\mathbf{W}_{\mathrm{E}})\mathbf{W}_{\mathrm{D}}\|_{\mathrm{F}}^2,\,\, \mathrm{s.t.}\,\,\mathbf{W}_{\mathrm{I}}\geqslant 0, \end{array} \end{equation} where $\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_{k,1}}=$ Diag$(\mathbf{w}^{\mathrm{max}_{k,1}})$, $\mathbf{W}_{\mathrm{I}}^{\mathrm{max}_{k,i}}=$ Diag$((\mathbf{w}/\mathbf{w}^{\mathrm{max}_{k,{i-1}}})^{\mathrm{max}_{k,i}})$, $i=2,\ldots,h$, $h$ is the number of desired subsets of relevant features, $\lambda_i, i=0,\ldots,h$, are hyper-parameters, and $(\mathbf{w}/\mathbf{w}^{\mathrm{max}_{k,{i-1}}})^{\mathrm{max}_{k,i}}$ is an operation to retain the $i$-th group of $k$ largest entries from $\mathbf{w}$ while making zero all the other entries including the $(i-1)$ groups of $k$ largest entries of $\mathbf{w}^{\mathrm{max}_{k,1}},\mathbf{w}^{{\mathrm{max}_{k,2}}}, \ldots$, and $\mathbf{w}^{{\mathrm{max}_{k,{i-1}}}}$.
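As a concrete illustration of this operation (with arbitrarily chosen numbers, for exposition only): taking $k=2$ and $\mathbf{w}=(0.9,\,0.1,\,0.8,\,0.5)$, the first selection gives $\mathbf{w}^{\mathrm{max}_{2,1}}=(0.9,\,0,\,0.8,\,0)$, and the second gives $(\mathbf{w}/\mathbf{w}^{\mathrm{max}_{2,1}})^{\mathrm{max}_{2,2}}=(0,\,0.1,\,0,\,0.5)$; the two hierarchical subsets thus consist of features $\{1,3\}$ and $\{2,4\}$, respectively.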
In (\ref{HFAE}), the first two terms estimate the importance of each input feature globally; then, the remaining terms organize the top $kh$ features into $h$ hierarchical subsets in descending order of importance values, with each subset having $k$ features. These $h$ subsets are selected by using $h$ sub-NNs, which work together in an orchestrated way: The $(i+1)$-th sub-NN exploits the $(i+1)$-th hierarchical subset of features by leaving out the $i$ subsets of features found by the previous $i$ sub-NN(s). For notational convenience, we denote this application for identifying multiple hierarchical subsets of features by $h$-HFAE. To verify the effectiveness of $h$-HFAE, we set $h=3$ and apply it to MNIST. The reconstruction and classification results together with those of FAE are shown in Figure~\ref{fig:05}, where we set the hyper-parameters $\lambda_0$, $\lambda_1$, $\lambda_2$, and $\lambda_3$ in (\ref{HFAE}) to be $0.05$, $1.5$, $2$, and $3$, respectively. With 50 selected features per group, different hierarchies of $3$-HFAE achieve almost the same accuracy; with 36 selected features per group, the third group of features from $3$-HFAE-$\mathrm{H}_3^3$ has slightly worse accuracy than the other two groups. This result implies that 50 selected features per group are more stable for $3$-HFAE. For reconstruction error, the vanilla version of FAE is the best among all results. 
\begin{figure} \caption{Reconstruction and classification results of FAE and $3$-HFAE on MNIST.
$3$-HFAE-$\mathrm{H}_3^1$, $3$-HFAE-$\mathrm{H}_3^2$, and $3$-HFAE-$\mathrm{H}_3^3$ denote respectively the first three hierarchical subsets of selected features.} \label{fig:05} \end{figure} CAE~\citep{Abubakar} displays the relationships of the top $3$ selected features at each node of the concrete selector layer; however, the second and third top features might be insignificant due to the potentially trivial average probability ($\leqslant0.01$). Different from CAE, $h$-HFAE uses the weights to assign the features into different hierarchical subsets for selection and exploration. In the Supplementary Material, we demonstrate that for $3$-HFAE there exists a considerable degree of similarity between different hierarchical subsets of selected features. Thus, $h$-HFAE can reveal the redundancy or high correlations among features. \section{Conclusions} In this paper, we propose a new framework for unsupervised feature selection, which extends AE by adding a simple one-to-one layer and a sub-NN to achieve both global exploring of representative abilities of the features and local mining for their diversity. Extensive assessment of the new framework has been performed on real datasets. Experimental results demonstrate its superior performance over contemporary methods. Moreover, this new framework is applicable and easily extensible to other tasks and we will further extend it in our future work. \section{Acknowledgments} This work is supported in part by NSF OIA2040665, NIH R56NS117587, and R01HD101508. We sincerely thank the anonymous reviewers for their valuable comments. \end{document}
Calculus Related Rates Problem: As a snowball melts, how fast is its radius changing? A spherical snowball melts at the rate of $2 \pi$ cm$^3$/hr. It melts symmetrically such that it is always a sphere. How fast is its radius changing at the instant $r = 10$ cm? Hint: The volume of a sphere is related to its radius according to $V = \dfrac{4}{3} \pi r^3$. Calculus Solution Let's unpack the question statement: We're told that the snowball's volume V is changing at the rate of $\dfrac{dV}{dt} = -2 \pi$ cm$^3$/hr. (We must insert the negative sign "by hand" since we are told that the snowball is melting, and hence its volume is decreasing.) As a result, its radius is changing, at the rate $\dfrac{dr}{dt}$, which is the quantity we're after. The snowball always remains a sphere. Toward the end of our solution, we'll need to remember that the problem is asking us about $\dfrac{dr}{dt}$ at a particular instant, when $r = 10$ cm. To solve this problem, we will use our standard 4-step Related Rates Problem Solving Strategy. 1. Draw a picture of the physical situation. See the figure. 2. Write an equation that relates the quantities of interest. To develop your equation, you will probably use a simple geometric fact.
This is the hardest part of a Related Rates problem for most students initially: you have to know how to develop the equation you need, how to pull that "out of thin air." By working through these problems you'll develop this skill. The key is to recognize which of the few sub-types of problem it is; we've listed each on our Related Rates page. In this problem, the diagram above reminds us that the snowball always remains a sphere, which is a Big Clue. We need to develop a relationship between the rate we're given, $\dfrac{dV}{dt} = -2 \pi$ cm$^3$/hr, and the rate we're after, $\dfrac{dr}{dt}$. We thus first need to write down a relationship between the sphere's volume V and its radius r. But we know that relationship, since it's a simple geometric fact that you could look up, and here it was given in the hint: $$V = \frac{4}{3}\pi r^3$$ That's it: that's the key relationship we need to be able to proceed with our solution. 3. Take the derivative with respect to time of both sides of your equation. Remember the chain rule. \begin{align*} \frac{d}{dt}V & = \frac{d}{dt} \left( \frac{4}{3}\pi r^3\right) \\[12px] \frac{dV}{dt} &= \frac{4}{3}\pi \frac{d}{dt} \left( r^3\right) \\[12px] &= \frac{4}{3}\pi \left( 3r^2 \frac{dr}{dt} \right) \\[12px] &= 4 \pi\, r^2 \frac{dr}{dt} \end{align*} Are you wondering why that $\dfrac{dr}{dt}$ appears? The answer is the Chain Rule. While the derivative of $r^3$ with respect to r is $\dfrac{d}{dr}r^3 = 3r^2$, the derivative of $r^3$ with respect to time t is $\dfrac{d}{dt}r^3 = 3r^2\dfrac{dr}{dt}$. Remember that r is a function of time t: the radius changes as time passes and the snowball melts. We could have captured this time-dependence explicitly by writing our relation as $$V(t) = \frac{4}{3}\pi [r(t)]^3$$ to remind ourselves that both V and r are functions of time t.
Then when we take the derivative, \begin{align*} \frac{d}{dt}V(t) &= \frac{d}{dt}\left[ \frac{4}{3}\pi [r(t)]^3\right] \\ \\ \frac{dV(t)}{dt} &= \frac{4}{3} \pi\, 3[r(t)]^2 \left[\frac{d}{dt}r(t)\right]\\ \\ &= 4\pi [r(t)]^2 \left[\frac{dr(t)}{dt}\right] \end{align*} [Recall $\dfrac{dV}{dt} = -2 \pi$ cm$^3$/hr in this problem, and we're looking for $\dfrac{dr}{dt}$.] Most people find writing the explicit time-dependence V(t) and r(t) tedious, and so just write V and r instead. Regardless, you must remember that r depends on t, and so when you take the derivative with respect to time the Chain Rule applies and you have the $\dfrac{dr}{dt}$ term. 4. Solve for the quantity you're after. Solving the equation above for $\dfrac{dr}{dt}$: \begin{align*} \frac{dV}{dt} &= 4 \pi r^2 \frac{dr}{dt} \\[12px] \frac{dr}{dt} &= \frac{1}{4 \pi r^2} \frac{dV}{dt} \end{align*} Now we just have to substitute values. Recall $\dfrac{dV}{dt} = -2 \pi$ cm$^3$/hr, and the problem asks about when $r=10$ cm: \begin{align*} \frac{dr}{dt} &= \frac{1}{4 \pi r^2} \frac{dV}{dt} \\[12px] &= \frac{1}{4 \pi (10)^2} (-2 \pi) \\[12px] &= -\frac{1}{200} \text{ cm/hr} \quad \cmark \end{align*} That's the answer. The negative value indicates that the radius is decreasing as the snowball melts, as we expect. Caution: IF you are using a web-based homework system and the question asks, At what rate does the radius decrease? then the system has already accounted for the negative sign and so to be correct you must enter a POSITIVE VALUE: $\boxed{\dfrac{1}{200}} \, \dfrac{\text{cm}}{\text{hr}} \quad \checkmark$ Return to Related Rates Problems
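As an optional sanity check (not part of the original solution), the same computation can be carried out symbolically in Python with SymPy:

```python
import sympy as sp

t = sp.symbols('t')
r = sp.Function('r')(t)           # radius as a function of time

# V(t) = (4/3) * pi * r(t)^3
V = sp.Rational(4, 3) * sp.pi * r**3

# Differentiate with respect to t; SymPy applies the chain rule,
# producing 4*pi*r(t)**2 * dr/dt automatically
dVdt = sp.diff(V, t)

# Impose dV/dt = -2*pi and r = 10, then solve for dr/dt
drdt = sp.symbols('drdt')
eq = sp.Eq(dVdt.subs(sp.Derivative(r, t), drdt).subs(r, 10), -2 * sp.pi)
sol = sp.solve(eq, drdt)[0]
print(sol)  # -1/200, matching the hand calculation (cm/hr)
```

SymPy keeps everything exact, so the answer comes out as the rational number −1/200 rather than a decimal approximation.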
Data-driven discovery of the governing equations of dynamical systems via moving horizon optimization
Fernando Lejarza1 & Michael Baldea1,2
Scientific Reports volume 12, Article number: 11836 (2022)
Discovering the governing laws underpinning physical and chemical phenomena entirely from data is a key step towards understanding and ultimately controlling systems in science and engineering. Noisy measurements and complex, highly nonlinear underlying dynamics hinder the identification of such governing laws. In this work, we introduce a machine learning framework rooted in moving horizon nonlinear optimization for identifying governing equations in the form of ordinary differential equations from noisy experimental data sets.
Our approach evaluates sequential subsets of measurement data, and exploits statistical arguments to learn truly parsimonious governing equations from a large dictionary of basis functions. The proposed framework reduces gradient approximation errors by implicitly embedding an advanced numerical discretization scheme, which improves robustness to noise as well as to model stiffness. Canonical nonlinear dynamical system examples are used to demonstrate that our approach can accurately recover parsimonious governing laws under increasing levels of measurement noise, and outperform state of the art frameworks in the literature. Further, we consider a non-isothermal chemical reactor example to demonstrate that the proposed framework can cope with basis functions that have nonlinear (unknown) parameterizations. Differential equation models play a critical role in describing the governing behavior of a variety of systems encountered in science and engineering. As minimal order expressions describing the system behavior, governing models are generalizable, readily interpretable, and have good extrapolation capabilities. Historically, the discovery and formulation of fundamental governing equations has been a relatively lengthy process, supported by careful experimentation and data collection using prototype experimental systems. Nonetheless, the data sets that have, through the history of science, supported the discovery of such fundamental natural laws may seem "small" by today's standards. With decreasing costs of sensors, data storage systems, and computing hardware, immense quantities of data can be easily collected and efficiently stored. As a consequence, the applications of machine learning (ML) and artificial intelligence (AI) have witnessed meteoric growth in science and engineering. 
ML techniques can perform exceptionally well in regression and classification tasks, but the resulting models are most often black- or grey-box in nature, offering little physical insight, and extrapolate poorly to regimes beyond the range of the training data. Moreover, prediction accuracy typically comes at the cost of model complexity, which is at odds with the typically parsimonious nature of a system's governing dynamics derived e.g. via first principles analysis. Leveraging ML/AI frameworks to discover (as opposed to merely fit) the governing equations of physical systems from large amounts of data offers intriguing possibilities and remains an open field of research. Related recent efforts towards physically-constrained ML include physics-informed discovery strategies (of which a recent review can be found in1) combining first principles arguments with ML models, such as deep neural networks2. These models embed "informative priors", e.g., mass and energy conservation laws, within ML architectures in order to improve their interpretability and reduce extrapolation error. Physics-informed neural networks (PINNs)2 are an example of such frameworks and have recently attracted significant attention in the literature, witnessing numerous extensions (e.g.,3,4,5,6). Nevertheless, it should be noted that these methods are mostly geared towards finding high fidelity data-driven solutions to partial differential equations (PDEs), as well as solving associated inverse problems by means of knowledge embedding, generally without emphasizing knowledge discovery. Past work on automated data-driven discovery of governing equations is based on nonlinear regression strategies7. Initial efforts exploited symbolic regression8 and genetic programming algorithms9. 
However, the combinatorial nature of these approaches can render them computationally prohibitive, restricting their applicability to low-dimensional systems and to considering relatively small initial sets of candidate symbolic expressions (which inherently diminishes the success rate of identifying the true underlying system dynamics). Furthermore, symbolic regression strategies can be prone to overfitting, i.e., generating overly complex expressions in an attempt to decrease prediction error8. Later works10,11 combine deep learning architectures with established model fitting techniques (e.g., dimensional analysis, polynomial and symbolic regression), which result in improved performance relative to original symbolic regression schemes8,12. Despite their potential computational performance limitations, schemes based on genetic programming benefit from not requiring a complete library of basis functions (i.e., new terms can be generated through crossover and mutation steps13). In a different vein, sparse nonlinear regression techniques14,15 have been proposed to improve on the computational complexity of symbolic regression-based frameworks. Brunton et al.16 employed sequentially thresholded least-squares (STLSQ) and least absolute shrinkage and selection operator (LASSO), that is, \(\ell _1\) regularized regression, for sparse identification of nonlinear dynamics (SINDy). These algorithms recover governing equations by identifying a small number of relevant nonlinear functions from within an a priori specified large set of candidate basis functions using state measurement data sets. Theoretical convergence properties have been established17, and the framework has been implemented as an open-source code18. Numerous extensions to SINDy have since been proposed, addressing a variety of classes of systems and problem settings including, e.g., PDEs19,20, unknown coordinate systems21, biological networks22, model predictive control23, isothermal chemical reactions24. 
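To make the sequentially thresholded least-squares (STLSQ) step concrete, here is a minimal sketch in NumPy (our own illustrative code with an invented toy system, not the SINDy authors' implementation):

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, max_iter=10):
    """Sequentially thresholded least squares: the sparse regression
    step at the core of SINDy. theta is the library matrix Theta(X),
    dxdt holds the (estimated) time derivatives."""
    # Initial dense least-squares fit of dX/dt = Theta(X) @ Xi
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(max_iter):
        small = np.abs(xi) < threshold      # coefficients to prune
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):      # refit each state equation
            big = ~small[:, k]              # on the surviving terms only
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k],
                                             rcond=None)[0]
    return xi

# Toy system: dx1/dt = -2*x1 + 0.5*x2, dx2/dt = x1 - x2,
# with a spurious x1^2 term in the library
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 2))
Theta = np.column_stack([X[:, 0], X[:, 1], X[:, 0] ** 2])
true_xi = np.array([[-2.0, 1.0], [0.5, -1.0], [0.0, 0.0]])
dXdt = Theta @ true_xi
xi = stlsq(Theta, dXdt)   # recovers true_xi; the x1^2 row is pruned
```

With noisy derivative estimates, the threshold trades off sparsity against fit quality, which is one reason the statistical arguments and noise-robust formulations discussed above matter.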
A similar approach based on elastic net regression (i.e., a combination of both \(\ell _1\) and \(\ell _2\) regularizations) was introduced in25, but resulted in less parsimonious equations relative to e.g. LASSO regression. In a related effort26, the selection of basis functions was performed via mixed-integer nonlinear programming, with a view towards identifying low-order surrogate representations of nonlinear algebraic models. Recent efforts have focused on extending the SINDy concept to cope with corrupted data sets, as well as dynamical systems of higher complexity. For example, to overcome the instability associated with computing derivatives directly from noisy data, the integral (or weak) formulation of the system dynamics was used for systems of ODEs27,28 and (high-order) PDEs29. Similarly, Goyal et al.30 incorporated Runge-Kutta integration schemes within the dynamics discovery formulation to bypass direct estimation of derivatives from data and reduce gradient estimation errors relative to lower order methods. Extensions for automatically denoising measurement data, learning and parametrizing the associated noise distribution, and subsequently inferring the underlying parsimonious governing dynamical system were developed31. Similarly, Cao et al.32 proposed employing the Fourier transform of the original time series data to identify governing PDEs using the low frequency component of the signal, which significantly improves robustness to noise. Tran et al.33 proposed an \(\ell _1\) minimization formulation and an alternating solution algorithm to discover chaotic systems when the data are corrupted with a large percentage of outliers. A later work34 introduced a unified framework leveraging non-convex sparsity promoting regularization functions (e.g., \(\ell _0\) penalties) to detect and trim outliers while inferring the governing equations. 
The framework proposed in34 also allows for considering parametric dependencies in the basis functions, as well as physical constraints. Recently, Fasel et al.35 reported the use of bootstrap aggregation ("bagging") by identifying an ensemble of SINDy models, which substantially improves robustness to noise while also allowing for uncertainty quantification and probabilistic forecasts. A related effort36 combined a weak formulation of differential equations with ensemble symbolic regression to identify fluid flow dynamics from a pool of variables and candidate models derived from general physical principles. In this work, we propose a framework for discovering governing equations from noisy measurement data that is formulated via nonlinear dynamic optimization, a mathematical programming technique that offers substantially more flexibility than the previously cited examples. The proposed formulation aligns with ideas introduced in weak SINDy implementations27,28,29, in that it implicitly leverages advanced numerical integration (orthogonal collocation on finite elements37) to represent the underlying candidate governing equations, thus circumventing derivative estimation from data, reducing gradient approximation errors relative to lower order methods, and improving numerical stability. Further, we propose a method inspired by control theory, namely, moving horizon estimation38 and its counterpart, model predictive control39, to efficiently solve the aforementioned dynamic nonlinear program (DNLP). Moving horizon optimization strategies have been extensively used for reconstructing state values from noisy process measurements38,40, estimating model parameters from data41, or determining the optimal inputs for controlling dynamical systems whose equations have a known structure. 
Conversely, the methodology introduced here differs significantly from the moving horizon estimation canon in that the true structural form of the system dynamics is unknown, and sparsifying strategies are developed to recover parsimonious governing equations from a dictionary of candidate basis functions. The resulting sequence of discovered governing equations is aggregated (e.g., by taking the average of the estimated model coefficients and basis function parameters), which is expected to further improve robustness to noise and allow for statistical characterization of the coefficient estimates in a similar sense to ensemble SINDy35. Hence, relative to several of the previously cited works, the main contributions of the proposed framework are: (i) a general nonlinear programming-based optimization approach that can cope with parametric basis function libraries and can include domain knowledge-derived constraints, (ii) a moving horizon optimization framework that improves scalability and leverages statistical arguments to promote sparsity in the identified governing equations, and (iii) implicitly embedded discretization schemes that are stable and of high order, which improves robustness to noisy data and enables capturing multiscale or stiff dynamics.

Problem formulation

We consider dynamical systems governed by ordinary differential equations (ODEs) of the form: $$\begin{aligned} \frac{d\mathbf{x }(t)}{dt} = \mathbf{f }(\mathbf{x }(t), \mathbf{u }(t)) \end{aligned}$$ where \(\mathbf{x }(t) \in {\mathbb {R}}^{n_x}\) is the vector of states and \(\mathbf{u }(t) \in {\mathbb {R}}^{n_u}\) is a vector of control (manipulated) inputs at time t, and the map \(\mathbf{f }(\cdot ):{\mathbb {R}}^{n_x}\times {\mathbb {R}}^{n_u}\rightarrow {\mathbb {R}}^{n_x}\) represents the (nonlinear) dynamics of the system. The function \(\mathbf{f }\) is unknown and is precisely what we attempt to infer from a given set of time-resolved measurement data.
To that end, we collect a sequence of measurements \((\hat{\mathbf{x }}({\hat{t}}_j),\hat{\mathbf{u }}({\hat{t}}_j))\) of the state and input variables observed at sampling times \({\hat{t}}_0,\dots ,{\hat{t}}_m\), and assume that the derivative \(\hat{\dot{\mathbf{x }}}({\hat{t}}_j)\) cannot be directly measured. The data are assumed to be contaminated with (Gaussian, zero-mean) measurement noise, and smoothing techniques and statistical tests are used to perform pre-processing of the training data set (See Methods). The resulting pre-processed data are denoted by \(\tilde{\mathbf{x }}({\hat{t}}_j), \tilde{\mathbf{u }}({\hat{t}}_j) \; \forall j=0,\dots ,m\). The proposed framework in its present form is intended to be used for analyzing systems whose dynamics are governed by ordinary differential equations. As will be discussed at length subsequently, the ideas proposed herein can be readily extended to systems with spatial differential operators and to partial differential equations, an extension that will constitute a direction of future research. To discover the underlying governing equations, we consider a dictionary of \(n_\theta \) candidate symbolic nonlinear basis functions denoted as \(\Theta (\mathbf{x }^T,\mathbf{u }^T,\mathbf{c })\), where \( \Theta (\cdot ): {\mathbb {R}}^{1\times n_x}\times {\mathbb {R}}^{1\times n_u} \rightarrow {\mathbb {R}}^{1\times n_\theta }\) and \(\mathbf{c }\in {\mathbb {R}}^{n_c}\) is a vector that captures some unknown parametrization of the basis functions (e.g. let basis function i be \(\theta _i\) such that \( \theta _i(\mathbf{x }^T,\mathbf{u }^T) = \mathbf{u }^T\exp (-c_i\mathbf{x }^T)\)). The dictionary is defined a priori, potentially leveraging some domain insights regarding the underlying system25.
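As a concrete illustration of such a parametric dictionary, the sketch below evaluates \(\Theta (\mathbf{x }^T,\mathbf{u }^T,\mathbf{c })\) for a hypothetical system with two states and one input. The specific basis functions chosen are illustrative assumptions, not the library used in this work; only the last entry mirrors the \(\mathbf{u }^T\exp (-c_i\mathbf{x }^T)\) example above.

```python
import numpy as np

def theta(x, u, c):
    """Evaluate an illustrative parametric dictionary Theta(x^T, u^T, c)
    for n_x = 2 states and n_u = 1 input. The basis functions are
    hypothetical; the last one mirrors the u * exp(-c * x) example."""
    return np.array([
        1.0,                           # constant term
        x[0], x[1],                    # linear terms
        x[0] * x[1],                   # bilinear term
        u[0] * np.exp(-c[0] * x[0]),   # parametric basis function
    ])
```

Because \(\mathbf{c }\) enters the basis functions nonlinearly, a dictionary of this form cannot be reduced to a fixed feature matrix for linear regression, which is precisely what motivates the nonlinear programming formulation developed in this work.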
We assume that the governing equations can be expressed as a linear combination of the basis functions in this dictionary as: $$\begin{aligned} \frac{d}{dt}\mathbf{x }(t) = \mathbf{f }(\mathbf{x }(t),\mathbf{u }(t)) = \Xi ^T \Theta (\mathbf{x }(t)^T,\mathbf{u }(t)^T,\mathbf{c })^T \end{aligned}$$ where \(\Xi \in {\mathbb {R}}^{n_\theta \times n_x}\) is a matrix of coefficients whose columns are given by the sparse (i.e., most entries are zero) vectors \(\pmb {\xi }_1, \dots ,\pmb {\xi }_{n_x}\). This sparse model structure is illustrated in Fig. 1C using the well-known two-dimensional Lotka–Volterra predator–prey model as an example. Further, we distinguish between active coefficients (i.e., if \(\xi _{i,j} \ne 0\) then basis function j is active for state variable i in the true governing equations) and inactive coefficients (i.e., \(\xi _{i,j}=0\) in the true governing equations). Following the sparsity argument, the fundamental challenge of discovering the governing equations translates to identifying the (few) active coefficients associated with the basis functions that are truly present in \(\mathbf{f }(\mathbf{x }(t),\mathbf{u }(t))\). We note that the vast majority of existing frameworks concerned with using regression mechanisms to learn governing equations from data16,25,27,28,29,35 rely on solving a linear optimization problem to estimate the unknown coefficients \(\Xi \), and cannot cope with parameterized libraries of basis functions that require estimating coefficients \(\Xi \) and parameters \(\mathbf{c }\) in function \(\Theta (\cdot )\) in (2), simultaneously. Equivalently, the existing frameworks assume implicitly or explicitly that the dynamics in (1) can be expressed as \(\Xi ^T\Theta (\mathbf{x } (t)^T,\mathbf{u } (t)^T)\), i.e., a function that is linear with respect to all unknown parameters and coefficients.
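To make the sparse structure of (2) concrete, the sketch below assembles \(\Xi \) for a Lotka–Volterra model over a small polynomial dictionary and evaluates the right-hand side \(\Xi ^T \Theta (\mathbf{x }^T)^T\). The coefficient values are hypothetical, chosen only for illustration; they are not the values used in the experiments reported here.

```python
import numpy as np

# Polynomial dictionary Theta(x^T) = [1, x1, x2, x1*x2, x1^2, x2^2].
def theta_poly(x):
    return np.array([1.0, x[0], x[1], x[0] * x[1], x[0] ** 2, x[1] ** 2])

# Hypothetical Lotka-Volterra dynamics: dx1/dt = a*x1 - b*x1*x2,
# dx2/dt = d*x1*x2 - g*x2. Columns of Xi are the sparse vectors xi_1, xi_2.
a, b, d, g = 1.0, 0.5, 0.2, 0.8
Xi = np.zeros((6, 2))
Xi[1, 0] = a    # x1 term in dx1/dt
Xi[3, 0] = -b   # x1*x2 term in dx1/dt
Xi[3, 1] = d    # x1*x2 term in dx2/dt
Xi[2, 1] = -g   # x2 term in dx2/dt

def f(x):
    # Candidate governing law, eq. (2) without inputs: f(x) = Xi^T Theta(x^T)^T.
    return Xi.T @ theta_poly(x)
```

Only four of the twelve entries of \(\Xi \) are nonzero; recovering exactly this support from noisy data is the discovery task.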
The ability to handle the structural form in (2) represents a key advantage of the framework proposed here and one of the major contributions of this paper. In the aforementioned works, sparse regression techniques are used, whereby for each state variable \(i \in \{1,\dots ,n_x\}\) the sparse vector \(\pmb {\xi }_i\) is estimated as the solution of the following norm-regularized optimization problem: $$\begin{aligned} \pmb {\xi }_i,\mathbf{c }_i \in {{\,\mathrm{arg\,min}\,}}\frac{1}{2m} \sum _{j=0}^{m}||\Theta (\tilde{\mathbf{x }}({\hat{t}}_j)^T,\tilde{\mathbf{u }}({\hat{t}}_j)^T,\mathbf{c })\pmb {\xi }_i - \dot{\tilde{\mathbf{x }}}({\hat{t}}_j) ||_2^2 + \lambda \rho ||\pmb {\xi }_i||_1 + \frac{\lambda (1-\rho )}{2} ||\pmb {\xi }_i||_2^2 \end{aligned}$$ The goal is thus to minimize, in a norm sense, the difference between the value of the state derivatives predicted by the model, \(\Theta (\tilde{\mathbf{x }}({\hat{t}}_j)^T,\tilde{\mathbf{u }}({\hat{t}}_j)^T,\mathbf{c })\pmb {\xi }_i\), and the corresponding derivative values \(\dot{\tilde{\mathbf{x }}}({\hat{t}}_j)\) that are typically approximated numerically (via e.g. finite difference equations) from the m data samples at each sample time \({\hat{t}}_j\). The second part of the expression above is a regularization function composed of the \(\ell _1\) and \(\ell _2\) norms weighted by parameters \(\lambda \) and \(\rho \), thereby enforcing the sparsity of the solution by penalizing non-zero values of \(\pmb {\xi }_i\). Note that there is no regularization with respect to parameters \(\mathbf{c }\), as \(\Xi \) alone determines the sparsity of the recovered governing equations. An \(\ell _0\) regularization penalty was considered in34 for improved sparsity of the discovered dynamics, which was solved using the SR3 framework42 that allows for tackling nonconvex instances of the optimization problem (3).
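For reference, a problem of the form (3) with fixed basis-function parameters can be solved with a simple proximal-gradient (ISTA) iteration, where a soft-thresholding operator handles the \(\ell _1\) term. The sketch below is a minimal illustration of this sparse regression step, not the solvers used in the cited works.

```python
import numpy as np

def sparse_regression(Theta, dxdt, lam=0.1, rho=1.0, iters=5000, step=None):
    """Minimal ISTA solver for problem (3):
    min (1/2m)||Theta xi - dxdt||_2^2 + lam*rho*||xi||_1
        + (lam*(1-rho)/2)*||xi||_2^2.
    Theta is the m-by-n_theta feature matrix evaluated on the data."""
    m, n = Theta.shape
    if step is None:
        # 1 / Lipschitz constant of the smooth part of the objective.
        step = 1.0 / (np.linalg.norm(Theta, 2) ** 2 / m + lam * (1.0 - rho))
    xi = np.zeros(n)
    for _ in range(iters):
        grad = Theta.T @ (Theta @ xi - dxdt) / m + lam * (1.0 - rho) * xi
        z = xi - step * grad
        # Soft-thresholding (proximal operator of the l1 penalty).
        xi = np.sign(z) * np.maximum(np.abs(z) - step * lam * rho, 0.0)
    return xi
```

Note that this linear-regression view only applies once \(\mathbf{c }\) is fixed; estimating \(\Xi \) and \(\mathbf{c }\) jointly requires the nonlinear formulation introduced next.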
Nonlinear dynamic optimization

In contrast to what has been proposed in the literature, in order to simultaneously learn the sparse coefficient matrix \(\Xi \) and parameters \(\mathbf{c }\) in (2), we formulate a constrained dynamic nonlinear program (DNLP) minimizing a given error metric. A discretization scheme using collocation on finite elements37 is implemented to convert the candidate model from the continuous-time ODE form in (2) to a system of nonlinear algebraic equations, which is embedded in the constraints of the DNLP43. The proposed discretization scheme is A-stable, high-order and can handle nonsmooth events at the element boundaries (e.g., nonsmooth or discontinuous control profiles), an improvement relative to the numerical properties of prior works (e.g.30,34), thus making it particularly suitable for stiff differential equations. As an additional contribution, algebraic constraints reflecting lower and upper bounds on the coefficients \(\Xi \) and parameters \(\mathbf{c }\), derived via pre-processing and/or available domain knowledge, can be seamlessly incorporated in the problem formulation to improve the convergence of the DNLP to the most parsimonious and accurate version of the governing equations (See Methods for further details on how these constraints can be derived).
The DNLP is thus formulated in discrete time to minimize the mean \(\ell _2\)-error plus some regularization term as follows: $$\begin{aligned} \begin{aligned}{}&\min _{\Xi , \mathbf{c },\mathbf{x }}&\frac{1}{2NK} \sum _{i=0}^{N} \sum _{j=0}^{K}||\mathbf{x }(t_{ij})-\tilde{\mathbf{x }}(t_{ij})||_2^2 + \lambda \ell (\Xi )\\&\; \text {s.t.}&\mathbf{g }(\Theta (\mathbf{x }(t_{ij})^T, \tilde{\mathbf{u }}(t_{ij})^T,\mathbf{c}) ,\Xi ) = 0 \\&\;&\Xi \in [ \Xi ^{L}, \Xi ^{U} ], \;\;\;\mathbf{c } \in [ \mathbf{c }^{L}, \mathbf{c }^{U} ] \end{aligned} \end{aligned}$$ where \(\mathbf{x }(t_{ij})\) are the states predicted by the candidate governing law at time point \(t_{ij}\) (i.e., in finite element i and collocation point j), the algebraic constraints \(\mathbf{g }(\cdot )=0\) represent the discretized version of the nonlinear dynamics given in (2), \(\Xi ^{L}, \mathbf{c }^L\) and \(\Xi ^{U},\mathbf{c }^U\) are respectively the lower and upper bounds of the estimated coefficients and parameters, and K is the number of collocation points on each of the N finite elements. The collocation equations form a set of algebraic constraints given by: $$\begin{aligned} \begin{aligned}{}&\left. \frac{d\mathbf{x }(t)}{dt} \right| _{t_{ij}} = \frac{1}{h_i}\sum _{k=0}^{K}\mathbf{x }_{ik}\frac{d\ell _k(\tau _j)}{d\tau }, \;\; j \in \{1,\dots ,K\}, \; i\in \{ 1,\dots ,N\} \\&\left. \frac{d\mathbf{x }(t)}{dt} \right| _{t_{ij}} = \Xi ^T \Theta (\mathbf{x }(t_{ij})^T, \mathbf{u }(t_{ij})^T,\mathbf{c })^T, \;\; j \in \{1,\dots ,K\}, \; i\in \{ 1,\dots ,N\} \\&\mathbf{x }(t_{i+1,0}) = \sum ^{K}_{k=0} \ell _k(1)\mathbf{x }(t_{ik}), \;\; i\in \{1,\dots ,N-1\} \\ \end{aligned} \end{aligned}$$ where \(h_i\) is the length of finite element i, \(t_{i-1} = t_{ij} -\tau _jh_i \), and the state variable \(\mathbf{x }(t)\) is interpolated using Lagrange polynomials \(\ell _k\) as follows: $$\begin{aligned} \begin{aligned}{}&\mathbf{x }(t) = \sum ^{K}_{k=0}\ell _k(\tau )\mathbf{x }_{ik}, \;\; t\in [t_{i-1}, t_i], \; \tau \in [0,1] \\&\ell _k(\tau ) = \prod ^{K}_{j=0,\ne k} \frac{\tau -\tau _j}{\tau _k-\tau _j} \end{aligned} \end{aligned}$$ The optimal choice of the interpolation points \(\tau _j\) is derived in detail in37 (Theorem 10.1). Note that the data might need to be resampled (e.g., via interpolation) to estimate the measurements at the defined collocation points. In compact form, we denote (5) and (6) by \( \mathbf{g }(\Theta (\mathbf{x }(t_{ij})^T, \tilde{\mathbf{u }}(t_{ij})^T,\mathbf{c}) ,\Xi ) = 0\). It should be noted that in (4) the basis functions in the dictionary are not directly evaluated on the measurement data \(\tilde{\mathbf{x }}\), as is the case in (3). Estimating derivatives directly from data further propagates measurement noise, increasing the differentiation error, irrespective of the numerical scheme used. Rather, (4) represents a symbolic nonlinear function of the predicted states and does not directly require an approximation of \(\dot{\tilde{\mathbf{x }}}\) to derive the coefficients \(\Xi \) (as is done in (3)), thus reducing errors related to noise propagation due to numerical differentiation. The proposed approach is analogous to the weak formulations introduced in27,28,29, whereby the system dynamics are expressed in integral form so as not to directly use the estimated derivative in the regression calculation.
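The interpolation machinery in (6) is straightforward to verify numerically. The sketch below evaluates \(\ell _k(\tau )\) on \(\tau _0 = 0\) plus three Radau points on (0, 1] (node values rounded here; the exact roots follow from Theorem 10.1 in37) and exploits the fact that interpolation through four distinct nodes reproduces any cubic exactly.

```python
import numpy as np

def lagrange_basis(tau_pts, k, tau):
    """ell_k(tau) as in eq. (6): product over j != k of (tau - tau_j)/(tau_k - tau_j)."""
    val = 1.0
    for j, tj in enumerate(tau_pts):
        if j != k:
            val *= (tau - tj) / (tau_pts[k] - tj)
    return val

# tau_0 = 0 plus K = 3 Lagrange-Radau collocation points on (0, 1] (rounded).
tau_pts = [0.0, 0.155051, 0.644949, 1.0]

def interpolate(values, tau):
    """State interpolation within one finite element: sum_k ell_k(tau) * x_ik."""
    return sum(lagrange_basis(tau_pts, k, tau) * v for k, v in enumerate(values))
```

The basis satisfies \(\ell _k(\tau _j) = \delta _{kj}\), so the coefficients \(\mathbf{x }_{ik}\) in (6) are exactly the state values at the collocation points.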
While the approximation error of (5) is evidently nonzero, as with any discretization scheme, in this case the truncation error is \({\mathcal {O}}(h^{2K-1})\) for Lagrange-Radau collocation points37. Thus, the gradient approximation error is expected to be lower than in the case of approaches that directly compute \(\dot{\tilde{\mathbf{x }}}\)16,34, or that use lower order methods28,30, as long as a sufficiently high number of finite elements and collocation points is used.

Moving horizon optimization and thresholding

Evidently, the dimension of the DNLP (4) increases with the granularity of the discretization used, as indicated by K and N, and indirectly with the dimension of the data m (i.e., data for longer time horizons require more discretization points for an equivalent approximation error). Similarly, the dimension \(n_\theta \) of the dictionary of basis functions (and associated parameterization) is positively correlated with the number of decision variables in (4), as the coefficient matrix has dimensions \(\Xi \in {\mathbb {R}}^{n_\theta \times n_x}\). Nonetheless, increases in N, K, m, or \(n_\theta \) are all desirable, as they improve the likelihood of discovering the true governing equations. Thus, solving (4) while taking into consideration the entire available data set, as is generally done in existing discovery frameworks based on sparse regression16,25,30,34, is likely computationally expensive (particularly for granular discretizations and large dictionaries of basis functions). Noting that solving (4) is generally NP-hard, the actual computational effort and solution time cannot be predicted from the above problem dimensions. An additional fundamental challenge is related to imposing parsimony in the learned model. This sparsification entails eliminating the basis functions that are not part of the true model (1) by setting the corresponding coefficients \(\Xi \) to zero.
A particular difficulty arises when the true value of a coefficient is "small:" while the corresponding estimate may also be small, it is difficult to discern whether this outcome is correct or the non-zero estimated value is the result of overfitting (i.e., a spurious attempt to further decrease the value of the objective function in (4) for the training data by increasing model complexity and retaining a larger number of basis functions). A thresholding approach consisting of eliminating terms whose estimated coefficient magnitudes are below a specific value (determined via cross-validation) can in principle be employed16, but its performance is expected to degrade with increasing model stiffness and for systems having a broad range of coefficient magnitudes. The framework proposed here addresses both fundamental challenges described above. To deal with dimensionality, the DNLP in (4) is decomposed into a sequence of lower-dimensional problems defined on shorter time horizons (i.e., using smaller subsets of the available data), for which optimal solutions to (4) can be obtained with significantly lower computational effort. An illustration of the structure of data subsets is shown in Supporting Fig. S.2. After a solution of (4) is computed, a new data subset is selected (intuitively, but not necessarily, by shifting the time window forward by a smaller step than the window size), which is then used to solve (4) again. The repetition of this procedure allows for efficiently learning and refining a sequence of governing equation models, each with different coefficient estimates (further details are provided in Algorithm 2 in Methods). This temporal decomposition of the optimization problem is the premise of model predictive control39 and moving horizon state estimation38, by which significant computational speedup has been attained.
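The temporal decomposition can be sketched as follows. Here `fit_fn` is a stand-in for a full DNLP solve on one window of data (in the actual framework, each window triggers a solve of (4)), and the window and stride values are illustrative.

```python
import numpy as np

def moving_horizon_estimates(t, X, horizon, stride, fit_fn):
    """Split the data into overlapping windows of `horizon` samples, shifted
    forward by `stride` samples, and fit a model on each window. Returns one
    coefficient estimate per window."""
    estimates = []
    start = 0
    while start + horizon <= len(t):
        sl = slice(start, start + horizon)
        estimates.append(fit_fn(t[sl], X[sl]))
        start += stride
    return np.array(estimates)
```

Choosing the stride smaller than the horizon makes consecutive windows overlap, so each data point informs several of the lower-dimensional problems.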
In conjunction with this moving horizon strategy, the following thresholding claim is made:

Claim 1. The coefficients in \(\Xi \) corresponding to inactive basis functions typically contribute to overfitting. Small, non-zero values reflect the use of the corresponding functions to fit the noise in the training data. Hence, coefficient estimates derived from the sequence of problems described above that correspond to inactive basis functions are likely to have a relatively high variance.

The converse argument can be made for coefficients of active basis functions: the variance of a sequence of estimates is expected to be relatively low. These claims then support the use of dispersion metrics from statistics (e.g., the coefficient of variation) for the parameters obtained in a sequence of estimates based on subsets of the data, to infer whether a basis function belongs to the true dynamics or not (i.e., if it is active or inactive). In this way, the proposed framework allows for learning dynamics that are truly sparse, similarly to what would be expected from \(\ell _0\) regularization for coefficient selection. Once convergence is established (e.g., the structure of the discovered equations does not change after a certain number of iterations), the average or the median of the coefficients can then be used in the aggregated model. The proposed training mechanism is in a way similar to ensemble methods (e.g.,35,44) commonly used in machine learning, where a pool of models is trained on different subsets of the data (typically randomly sampled by means of bootstrapping), to reduce variance in the coefficient estimates and thus improve the performance of the final aggregated model prediction. A comprehensive illustration of our entire proposed framework is shown in Fig. 1 for the Lotka–Volterra predator–prey model, for which the dynamics have the form of \(\dot{\mathbf{x }} = \Xi ^T \Theta (\mathbf{x }^T)^T\).
That is, the governing equations do not involve control inputs \(\mathbf{u }\) or parametric basis functions dependent on \(\mathbf{c }\), as in the more general model in (2). We consider systems of this type first in order to demonstrate the performance of our approach and compare it against existing frameworks. Subsequently, we present a case study showing that the proposed framework can cope with dynamics of the form in (2).

Figure 1. Summary of the present framework using the Lotka–Volterra system as an illustrative example, and for simplicity without considering control inputs \(\mathbf{u }\) or parametric basis functions \(\Theta (\mathbf{x }^T, \mathbf{c })\). (A) Noisy data \(\hat{\mathbf{x }}\) are collected from physical experimentation or simulation. (B) The data are smoothed using filtering techniques and preliminary statistical analyses are performed to refine the library of basis functions and initialize the DNLP. (C) The dynamics are discretized and the DNLP is solved for the current window of data. (D) Every \(\omega \) iterations of the moving horizon algorithm, coefficient thresholding is performed to eliminate basis functions having a coefficient of variation greater than a given tolerance (exemplified by histograms with grey background). (E) Once convergence is established, the recovered equations are validated via simulation, by subsequently computing suitable regression error metrics, and by applying visualization techniques.

Validation using canonical nonlinear dynamical system examples

To validate our framework, we consider a series of canonical nonlinear dynamical systems shown in Fig. 2A. For each system, the true model in Fig. 2A was used to generate simulated measurement data which were artificially contaminated with white zero-mean uncorrelated noise of increasing standard deviation \(\sigma \). This approach and the amount of noise considered are comparable to prior works in the literature16,27,28,29,35.
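The data-generation protocol just described can be sketched as below: fixed-step fourth-order Runge-Kutta integration of a Lotka–Volterra model, followed by additive zero-mean Gaussian noise. The coefficients, step size, and noise level here are hypothetical placeholders, not the settings used in the experiments.

```python
import numpy as np

def lotka_volterra(x, a=1.0, b=0.5, d=0.2, g=0.8):
    # Hypothetical Lotka-Volterra coefficients, for illustration only.
    return np.array([a * x[0] - b * x[0] * x[1],
                     d * x[0] * x[1] - g * x[1]])

def simulate(x0, dt=0.01, steps=1000, sigma=0.05, seed=0):
    """Integrate with fixed-step RK4, then contaminate the trajectory with
    white zero-mean Gaussian noise of standard deviation sigma."""
    rng = np.random.default_rng(seed)
    X = np.empty((steps + 1, len(x0)))
    X[0] = x0
    for i in range(steps):
        k1 = lotka_volterra(X[i])
        k2 = lotka_volterra(X[i] + dt / 2 * k1)
        k3 = lotka_volterra(X[i] + dt / 2 * k2)
        k4 = lotka_volterra(X[i] + dt * k3)
        X[i + 1] = X[i] + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return X, X + sigma * rng.standard_normal(X.shape)
```

The clean trajectory is retained only to evaluate the discovered model afterwards; the discovery algorithm itself sees only the noisy copy.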
The results obtained and the validation of the discovered governing equations are summarized in Fig. 2B–E. The convergence of our method is shown in Fig. 2B, where even for increasing noise levels the number of terms in the discovered equations converges to the black dashed line indicating the number of terms or basis functions in the true dynamics. For data sets with increasing noise variance, the pre-processing step fails to reduce the size of the basis, which results in a greater number of iterations to converge to the discovered equations. These results indicate that the proposed moving horizon scheme can cope with larger libraries of basis functions at the expense of a greater number of iterations. The evolution of the coefficient of variation (CV) of the basis functions retained after pre-processing is shown in Fig. 2C, which is the statistical metric used for the proposed thresholding strategy. The fact that the CV of active coefficients is noticeably below the established threshold and is considerably lower than that of inactive coefficients supports and validates the arguments made in Claim 1. Interestingly, per the results in Fig. 2C, active and inactive coefficients can be quantitatively and discretely distinguished, which allows for discrete basis function selection, similarly to solving an \(\ell _0\) regularized regression problem, while at the same time circumventing the complexities associated with integer programming26. We note that for all dynamical systems and noise settings the variability tolerance was set to \(\psi = 1\) (a natural choice to distinguish between coefficients with high and low variability, independently of the underlying dynamical systems).
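The CV-based thresholding rule can be sketched as follows; this is a minimal illustration of the pruning logic, where `estimates` stacks the coefficient vectors obtained from successive horizon solves and \(\psi \) is the variability tolerance discussed above.

```python
import numpy as np

def cv_threshold(estimates, psi=1.0):
    """Prune basis functions whose coefficient of variation across the
    moving-horizon estimates exceeds psi. `estimates` has one row per
    window solve and one column per basis function."""
    mean = estimates.mean(axis=0)
    std = estimates.std(axis=0)
    with np.errstate(divide="ignore", invalid="ignore"):
        cv = np.where(np.abs(mean) > 0, std / np.abs(mean), np.inf)
    keep = cv <= psi                          # active: low relative variability
    aggregated = np.where(keep, mean, 0.0)    # aggregate kept terms by the mean
    return keep, aggregated
```

Pruned coefficients are set exactly to zero, so the aggregated model is truly sparse rather than merely having small inactive coefficients.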
Conversely, determining the optimal choice of hyperparameters for prior sparse regression approaches (e.g., determining the regularization penalty for LASSO regression or the coefficient cut-off threshold for sequentially thresholded least squares16) is typically highly dependent on and related to the underlying system dynamics and its associated coefficient magnitudes. Further demonstrating the performance of our framework, Supporting Figs. S.3 and S.4 show the performance of our approach with different values of hyperparameters including the number of finite elements (N) and collocation points (K), variability tolerance (\(\psi \)), and optimization horizon length (H). In particular, the results for varying H emphasize the scalability advantages associated with the moving horizon scheme employed (i.e., CPU time increases dramatically as H approaches the entire time horizon over which the data were sampled). Coarse discretizations lead to high approximation errors, while fine discretizations lead to optimization problems with more variables and constraints, that may be more challenging to solve. Figure S.6 shows the simultaneous effect of measurement noise and sampling frequency on the accuracy and success rate of the proposed discovery framework for the Lotka–Volterra model. The results suggest that the approach maintains good performance even for decreasing sampling frequency (i.e., increasing sparsity of measurement data) when the measurement noise is not substantial. Figure 2D shows the mean squared error (MSE) between the simulated discovered equations and the true system dynamics for each noise instance considered. While the structural form of the governing equations was correctly identified, increasing measurement noise results in relatively higher error and variance in the values of the estimated coefficients, which intuitively increases the MSE between the two trajectories.
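The trajectory-level MSE metric used above, together with simple empirical confidence intervals over a pool of post-convergence coefficient estimates, can be computed as in this sketch (the 95% level is an illustrative choice).

```python
import numpy as np

def mse(traj_true, traj_model):
    """Mean squared error between two simulated state trajectories of
    identical shape (time steps by states)."""
    return np.mean((traj_true - traj_model) ** 2)

def coefficient_intervals(estimates, level=0.95):
    """Empirical confidence intervals from a sequence of coefficient
    estimates (one row per moving-horizon iteration after convergence)."""
    lo = np.percentile(estimates, 100 * (1 - level) / 2, axis=0)
    hi = np.percentile(estimates, 100 * (1 + level) / 2, axis=0)
    return lo, hi
```

Sampling coefficients within such intervals is one simple way to drive the Monte Carlo validation mentioned below.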
Note that the associated coefficients were accurately estimated even for the highest noise instances (the average coefficient estimate error was 0.32%, 1.6%, 0.41%, 2.5% respectively for the Lotka–Volterra, van der Pol, Brusselator and Lorenz dynamical systems). Nonetheless, even for the highest noise setting for each system, Fig. 2E indicates that the discovered equations via DySMHO accurately capture the dynamics in the measured data and the overall behavior of the true system dynamics. Similar to ensemble methods35, the sequence of coefficient estimates corresponding to iterations where the structure of the discovered equations no longer changes (i.e., after convergence) can be used to quantify uncertainty and derive confidence intervals. This characterization then allows for using Monte Carlo sampling strategies to simulate and validate the discovered dynamics for a range of possible coefficient values45.

Figure 2. Numerical experiments on canonical nonlinear dynamical systems with governing equations of the form of \(\dot{\mathbf{x }} = \Xi ^T \Theta (\mathbf{x }^T)^T\). (A) System name and true governing equations. (B) Mean number of basis functions in the discovered model as a function of iterations over 10 random samples of the simulated measurement data for each noise level considered (shaded area corresponds to two standard deviations from the mean, dashed black line is used to indicate the number of terms in the true dynamics). Two-dimensional systems were initialized with 28 basis functions, and 3-dimensional systems with 66 (See Supplementary Information for further details). (C) Coefficient of variation for each basis function after each thresholding step (results correspond to the highest noise setting considered for each system). (D) Average mean squared error (MSE) between simulations of the true dynamics and the discovered dynamics.
(E) Comparison of discovered dynamics against true model and measured data (results correspond to the highest noise setting considered for each system).

Benchmarking against existing sparse regression-based discovery frameworks

To evaluate the performance of the present framework relative to state-of-the-art governing equation discovery schemes, we perform benchmarking experiments against several versions of the SINDy algorithm16, assessing different available options (mainly the choice of optimization algorithm and the value of the regularization coefficient) using the Python implementation PySINDy18. We consider three benchmarks corresponding to conventional SINDy, weak SINDy (W-SINDy)29, and ensemble SINDy (E-SINDy)35, the latter two being some of the latest developments in the area. All three aforementioned instances of SINDy were tested with three different optimization routines; specifically sequential thresholded least squares (STLSQ)16, LASSO regression14,16, and sparse relaxed regularized regression (SR3)34,42. For the SINDy and E-SINDy methods, which rely on direct state derivative estimation from measurement data, we used the smoothed finite difference option18, which employs the Savitzky-Golay filter and was observed to perform best among the differentiation methods available. Each combination of SINDy method and optimization solver was screened over a range of hyperparameters, for which further details can be found in the Supporting Information. For the present framework, the coefficient of variation thresholding tolerance \(\psi \) was the main hyperparameter evaluated, as it dictates the extent of regularization, and for which an intuitive choice for all systems is \(\psi = 1\). The benchmarking results for all canonical nonlinear dynamics under increasing noise variance are shown in Fig. 3, where the set of results for each method corresponds to different combinations of hyperparameters.
In order to compute the complexity for SINDy models, we set all "small" (but nonzero) coefficient estimates to zero if their absolute values were lower than 10% of the smallest coefficient in the true governing equations. While this alteration favors the complexity score of the models identified with SINDy (i.e., it yields a lower complexity than was actually identified), it highlights the inherent advantage of our proposed thresholding method, which results in truly sparse dynamics (i.e., the coefficients of basis functions that are deemed inactive are set exactly to zero). This property, together with the more intuitive selection of the thresholding tolerance \(\psi \) for the present work, results in discovering models that are generally sparser than those identified via SINDy methods, as can be noted from the results shown in Fig. 3. We note that, in particular, SINDy methods failed to eliminate trigonometric basis functions (\(\sin (\mathbf{x })\) and \(\cos (\mathbf{x })\)) and constant terms, which are likely low in magnitude relative to the measurement data and serve to reduce MSE by overfitting noise. The coefficients associated with these (inactive) basis functions exhibit considerable variability when estimated on sequential data sets, and thus are pruned in our framework, resulting in identified governing equations of relatively lower complexity. For low noise instances, our approach exhibits comparable performance (in terms of MSE and model complexity) to that of the different SINDy methods when the appropriate regularization parameter is used. We note that for the Lorenz oscillator dynamics, due to its chaotic nature, even very minor coefficient estimation errors can result in significant deviations between the simulated trajectories and the original model, which in turn results in a large MSE for the discovered model.
For example, for the present framework, the average coefficient error was 2.23% with a standard deviation of 0.14 for the highest noise instance corresponding to \(\sigma =0.5\). The relative performance of our proposed approach clearly improves for increasing measurement noise across all dynamical system examples. While in some instances the proposed framework fails to discover the correct model for large and small values of hyperparameter \(\psi \), it consistently identifies the true dynamics for \(\psi \approx 1\), which is an intuitive setting to distinguish between high and low variance coefficients (as was empirically demonstrated in Fig. 2C). Further, the high-order implicit discretization in the proposed framework appears to be advantageous for dynamical systems where the orders of magnitude of the coefficients are very different (e.g., Lotka–Volterra), and for systems with stiff dynamics (e.g., van der Pol and Brusselator).

Figure 3. Benchmarking experiments of the proposed framework against the SINDy family of methods. The results shown correspond to the discovered model's complexity (reflected in the number of terms) versus the model's predictive ability (reflected in the mean squared error). Each marker corresponds to a specific instance of a given method for a given set of hyperparameters (see Supporting Information for the associated values used). The dashed vertical line indicates the number of terms present in the true governing equation for each system.

Validation using a non-isothermal chemical reactor system with exogenous inputs

To further demonstrate the advantages of the proposed framework relative to the previously considered sparse regression frameworks16,18,29,35, we consider a case study corresponding to a controlled non-isothermal continuously stirred tank reactor (CSTR) system. A similar non-isothermal case study was reported previously in45, where further details regarding the system can be found.
We note that SINDy methods have been previously applied for inferring chemical kinetics22,24 but, unlike the current case, only isothermal reaction conditions were considered. In particular, we show that our moving horizon nonlinear optimization scheme can recover governing equations that are of the form of \(\Xi ^T\Theta (\mathbf{x }(t)^T,\mathbf{u }(t)^T, \mathbf{c })^T\), with exogenous inputs \(\mathbf{u }(t) \in {\mathbb {R}}^{n_u}\) (which can be used as an input signal to excite the dynamics or to drive the states of the system to a desired target value (setpoint)) and parametric basis functions. The governing dynamics of the CSTR are given by the coupled material and energy balance equations: $$\begin{aligned} \begin{aligned}{}&\frac{dC_A(t)}{dt} = \frac{q}{V}(C_{A,i} - C_A(t))-k_0 e^{-E_a/RT(t)}C_A(t) \\&\frac{dT(t)}{dt} = \frac{q}{V}(T_i -T(t)) + \frac{(-\Delta H_R)}{\rho C}k_0 e^{-E_a/RT(t)}C_A(t)+\frac{U{\mathscr {A}}}{V \rho C}(T_c(t)-T(t)) \end{aligned} \end{aligned}$$ where \(C_A(t)\) and T(t) are the system states \(\mathbf{x }(t)\) and represent, respectively, the concentration of species A and the temperature. Further, \(\rho \) is the density of the liquid, C is its heat capacity, \((\Delta H_R)\) is the heat of reaction, \(T_i\) is the temperature of the inlet stream, \(T_c\) is the temperature of the coolant, U is the overall heat transfer coefficient, \({\mathscr {A}}\) is the heat transfer area, \(k_0\) is the pre-exponential factor, \(E_a\) is the activation energy, and R is the universal gas constant. Differently from45, which considers q(t) as the manipulated input in the isothermal case, this example assumes \(T_c(t)\) to be the manipulated input in a non-isothermal setting in which the temperature dynamics in (7) must be accounted for. Further details regarding the case study parameters and hyperparameters used can be found in the Supporting Information.
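For concreteness, the balances in (7) can be coded directly in lumped-parameter form. The parameter values below are textbook-style placeholders chosen only for illustration, not the values used in the case study (those are listed in the Supporting Information).

```python
import numpy as np

# Hypothetical lumped CSTR parameters, for illustration only.
q_V = 1.0        # q/V, 1/min
CA_i = 1.0       # inlet concentration C_{A,i}, mol/L
Ti = 350.0       # inlet temperature T_i, K
k0 = 7.2e10      # pre-exponential factor, 1/min
EaR = 8750.0     # E_a/R, K
dH_rhoC = 209.0  # (-dH_R)/(rho*C), K L/mol
UA_VrhoC = 5.0   # U*A/(V*rho*C), 1/min

def cstr_rhs(x, Tc):
    """Right-hand side of the material/energy balances in eq. (7),
    with x = [C_A, T] and coolant temperature Tc as the input."""
    CA, T = x
    r = k0 * np.exp(-EaR / T) * CA            # Arrhenius reaction rate
    dCA = q_V * (CA_i - CA) - r
    dT = q_V * (Ti - T) + dH_rhoC * r + UA_VrhoC * (Tc - T)
    return np.array([dCA, dT])
```

The Arrhenius factor \(e^{-E_a/RT}\) is exactly the kind of parametric basis function that makes this system nonlinear in the unknown parameters, since \(E_a/R\) plays the role of an entry of \(\mathbf{c }\).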
Results for the discovery of the CSTR governing mass and energy balances are shown in Fig. 4, corresponding to ten randomly generated data instances for which the true functional forms of the equations in (7) were identified. The discovered equations correctly capture the dynamics for both composition and temperature as shown in Fig. 4C, accurately reproducing the slow and fast frequencies of the oscillations observed in the data. Nonetheless, similarly to chaotic systems (e.g., the Lorenz oscillator in Fig. 2), it should be noted that for stiff and multi-scale systems such as this CSTR example, even small parameter estimation errors such as the ones observed in Fig. 4D can result in significant deviations from the true simulated trajectory. The plots in Fig. 4E show that as the algorithm proceeds and an increasing number of basis functions is pruned from the dictionary, the predictive capability (in terms of MSE) of the discovered model remains relatively constant, which is indicative of convergence to the most parsimonious governing equation.

Numerical experiments on the CSTR case study with dynamics of the form \(\dot{\mathbf{x }} = \Xi ^T \Theta (\mathbf{x }^T, \mathbf{u }^T,\mathbf{c })^T\). (A) Schematic of the CSTR system showing states, inputs, and key parameters. (B) Control input signal used to excite the dynamics, given by \(T_c(t)=305\times (1+\sin (\pi t/5)/125)\). (C) State variable trajectories (composition (top) and temperature (bottom)) corresponding to: measurements with noise, true trajectory, and sample discovered trajectory. (D) Comparison of coefficient values of the true governing equation and those of the discovered dynamics (coefficient estimates were scaled to be of the same order of magnitude). (E) Average MSE as a function of the number of terms remaining in the dynamics throughout the iterations for ten randomly generated data sets (shaded area corresponds to two standard deviations from the mean).
Dashed vertical line indicates the number of basis functions present in the true governing equations.

Limitations and directions of future work

While the numerical experiments performed show that our proposed framework has multiple promising features and benefits relative to existing approaches, there are several future research questions to be addressed. For example, the open source code developed for the present framework is only capable of handling ODEs and in its current form cannot cope with systems governed by PDEs. Nonetheless, we note that the orthogonal collocation equations introduced in (5) and (6) readily generalize to differential operators other than time derivatives (e.g., spatial and higher order derivatives), such that the proposed scheme can be used to identify sparse PDEs. We believe that the implicit integration scheme embedded in our proposed method would be advantageous for high-order PDE systems, for which estimation of high-order derivatives from noisy measurement data is especially challenging29. Developing the associated code and performing validation experiments is a crucial next step. An interesting potential extension to the present work is an adaptive thresholding tolerance \(\psi \), whose value decreases with increasing number of iterations and basis functions pruned. This idea stems from the fact that, as fewer functions remain in the basis, the variability of the remaining inactive coefficients decreases, making the thresholding step more challenging. This property can be observed from the results presented in Fig. 2B, where the first few thresholding steps eliminate a considerably larger number of basis functions than the later steps before convergence is established. For high noise environments, even the coefficient estimates of active basis functions are likely to have high variability and could be erroneously pruned if \(\psi \) is initially set to a low value when the library of basis functions is large.
Furthermore, for higher dimensional systems it is important to employ efficient large-scale nonlinear programming strategies. Developing numerical solutions that exploit the (sparse) structure of the collocation equations and the resulting optimization problem is a key step in discovering governing equations for systems of industrially-relevant dimensions46. In particular, decomposition and parallelization schemes are critical for considering larger data sets while maintaining manageable computational complexity37. While for the present work local optimization algorithms were employed to solve (4), we expect that using scalable global optimization solvers could further improve the accuracy of the model coefficient estimates and the overall performance of the proposed framework.

Data-driven discovery of governing equations is a promising avenue for advancing our understanding of, and elucidating new phenomena across, a wide range of disciplines. This new fundamental knowledge can in turn be used to drive the development of new technologies to solve pressing scientific and societal challenges. In this paper, we introduced and validated a novel moving horizon-based, nonlinear dynamic optimization framework for learning governing equations from noise-contaminated state measurements over time. The proposed framework benefits from the properties of weak (or integral) discovery methods by incorporating high order and stable numerical integration schemes to represent the system dynamics, thereby minimizing gradient approximation errors and improving performance for stiff and multiscale nonlinear systems. The proposed moving horizon scheme not only improves the computational tractability of the underlying optimization problem, but also provides a systematic and statistically meaningful approach for thresholding basis functions, by which the identified equations are generally of lower complexity than those resulting from sparse regression approaches proposed by other authors.
The sequential coefficient estimates are aggregated once the structure of the discovered equations does not change, improving robustness to noisy measurements in a similar sense to previously reported ensemble approaches. Lastly, the nonlinear programming structure that lies at the core of the proposed methodology is capable of handling more complex dynamical systems by relaxing the assumption made in most prior works (i.e., that the dynamics are linear with respect to the unknown model parameters), as was demonstrated empirically for a chemical reactor case study.

Pre-processing: data smoothing

The Savitzky-Golay filter (SVGF) in the SciPy package (https://www.scipy.org/) was employed, which is based on local least-squares regression polynomial approximations applied to the data on moving windows of a given size. An iterative scheme was developed to automatically determine the appropriate smoothing window size, which consisted of smoothing the measured signal using increasing window sizes until the estimated noise component did not change significantly (given a specific tolerance). Further details are presented in Algorithm 1, and an illustration is shown in Supplementary Fig. S.1.

Pre-processing: statistical analysis

The smoothed state measurements are arranged in the form of a data matrix \(\tilde{\mathbf{X }}\):

$$\begin{aligned} \tilde{\mathbf{X }} =\begin{bmatrix} \tilde{\mathbf{x }}^{T}({\hat{t}}_1) \\ \vdots \\ \tilde{\mathbf{x }}^{T}({\hat{t}}_m) \end{bmatrix} = \begin{bmatrix} {\tilde{x}}_1 ({\hat{t}}_1) &{} \dots &{} {\tilde{x}}_n ({\hat{t}}_1) \\ \vdots &{} \ddots &{} \vdots \\ {\tilde{x}}_1 ({\hat{t}}_m) &{} \dots &{} {\tilde{x}}_n ({\hat{t}}_m) \\ \end{bmatrix} \end{aligned}$$

where \({\hat{t}}_1\), \({\hat{t}}_2\), \(\dots \), \({\hat{t}}_m\) are the time instants at which the measurements were collected.
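The window-growing idea of Algorithm 1 can be sketched in a few lines. This is a minimal illustration only: a centered moving average stands in for the Savitzky-Golay local polynomial fit (the actual implementation uses scipy.signal.savgol_filter), and the tolerance and window limit below are assumed values.

```python
import math
import random

def moving_average(y, window):
    """Centered moving average -- a simple stand-in for the
    Savitzky-Golay local polynomial fit used in the paper."""
    half, n = window // 2, len(y)
    return [sum(y[max(0, i - half):min(n, i + half + 1)]) /
            (min(n, i + half + 1) - max(0, i - half)) for i in range(n)]

def auto_smooth(y, tol=0.05, max_window=21):
    """Sketch of Algorithm 1: grow the smoothing window until the standard
    deviation of the estimated noise component (measurement minus smooth)
    changes by less than the relative tolerance 'tol'."""
    prev_std, window, smooth = None, 3, list(y)
    while window <= max_window:
        smooth = moving_average(y, window)
        noise = [yi - si for yi, si in zip(y, smooth)]
        mu = sum(noise) / len(noise)
        std = math.sqrt(sum((v - mu) ** 2 for v in noise) / len(noise))
        if prev_std and abs(std - prev_std) <= tol * prev_std:
            break
        prev_std, window = std, window + 2   # keep the window odd/centered
    return smooth, min(window, max_window)
```

On a sinusoid corrupted with broadband noise, the returned smooth is substantially closer to the underlying signal than the raw measurements, while the loop picks the window automatically.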
The dictionary of candidate basis functions is then evaluated for every point in the data matrix, resulting in a library matrix \(\Theta (\tilde{\mathbf{X }})\) whose column blocks collect the candidate functions evaluated at each time instant. For example, \(\tilde{\mathbf{X }}^{P_2}\) denotes the block of second order polynomials, which may include interaction terms, e.g.:

$$\begin{aligned} \tilde{\mathbf{X }}^{P_2} = \begin{bmatrix} \tilde{\mathbf{x }}^{P_2}({\hat{t}}_1) \\ \vdots \\ \tilde{\mathbf{x }}^{P_2}({\hat{t}}_m) \end{bmatrix} = \begin{bmatrix} {\tilde{x}}_1^2({\hat{t}}_1) &{} {\tilde{x}}_1({\hat{t}}_1) {\tilde{x}}_2({\hat{t}}_1) &{} \cdots &{} {\tilde{x}}_2^2({\hat{t}}_1) &{} \cdots &{} {\tilde{x}}_n^2({\hat{t}}_1)\\ {\tilde{x}}_1^2({\hat{t}}_2) &{} {\tilde{x}}_1({\hat{t}}_2) {\tilde{x}}_2({\hat{t}}_2) &{} \cdots &{} {\tilde{x}}_2^2({\hat{t}}_2) &{} \cdots &{} {\tilde{x}}_n^2({\hat{t}}_2) \\ \vdots &{} \vdots &{} \ddots &{} \vdots &{} \ddots &{} \vdots \\ {\tilde{x}}_1^2({\hat{t}}_m) &{} {\tilde{x}}_1({\hat{t}}_m) {\tilde{x}}_2({\hat{t}}_m) &{} \cdots &{} {\tilde{x}}_2^2({\hat{t}}_m) &{} \cdots &{} {\tilde{x}}_n^2({\hat{t}}_m) \end{bmatrix} \end{aligned}$$

Evidently, the form of this structure will depend on the choice of basis functions. Using this form, Granger causality tests (based on F- and chi-squared distributions) were used to determine potential causality between \(\Theta _i (\tilde{\mathbf{X }}_j({\hat{t}}_{k-1}))\) and \(\tilde{\mathbf{X }}_j({\hat{t}}_k)\) for all basis functions \(i\in \{1,\dots ,n_\theta \}\) and states \(j\in \{1,\dots ,n_x\}\). Basis functions i for which the null hypothesis (that \(\Theta _i(\tilde{\mathbf{X }}_j)\) does not Granger-cause \({\tilde{\mathbf{X }}}_j\)) could not be rejected with high significance were eliminated from the library. Stationarity of the time series was checked using the Dickey-Fuller test and enforced by differencing as needed. The statsmodels (https://www.statsmodels.org/stable/index.html) implementations of the Granger causality and Dickey-Fuller tests were used.
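The idea behind the causality screen can be sketched without the statsmodels machinery. The following is a minimal single-lag version of the underlying F-test: compare the residual sum of squares of an autoregression of a target series with and without a lagged candidate regressor. It is a conceptual sketch only; the paper relies on the statsmodels implementation, which additionally handles multiple lags, chi-squared variants, p-values, and stationarity checks, and the sequences in the test are illustrative.

```python
def _lstsq(X, y):
    """Solve the normal equations (X^T X) b = X^T y by Gaussian elimination
    with partial pivoting (small, well-conditioned problems only)."""
    k = len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(len(X))) for j in range(k)]
         for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * k
    for i in reversed(range(k)):
        coef[i] = (b[i] - sum(A[i][c] * coef[c] for c in range(i + 1, k))) / A[i][i]
    return coef

def _ssr(X, y):
    """Sum of squared residuals of the OLS fit of y on X."""
    coef = _lstsq(X, y)
    return sum((yi - sum(c * xi for c, xi in zip(coef, row))) ** 2
               for row, yi in zip(X, y))

def granger_f(y, x):
    """F statistic for 'x Granger-causes y' with a single lag: restricted
    model y_t ~ 1 + y_{t-1} versus unrestricted y_t ~ 1 + y_{t-1} + x_{t-1}."""
    rows_r = [[1.0, y[t - 1]] for t in range(1, len(y))]
    rows_u = [[1.0, y[t - 1], x[t - 1]] for t in range(1, len(y))]
    target = [y[t] for t in range(1, len(y))]
    ssr_r, ssr_u = _ssr(rows_r, target), _ssr(rows_u, target)
    n, k_u, q = len(target), 3, 1
    return ((ssr_r - ssr_u) / q) / (ssr_u / (n - k_u))
```

Large F values indicate that the lagged candidate meaningfully improves the one-step-ahead prediction of the target, mirroring the screening rule used to prune the library.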
Next, the derivative of each state variable is approximated from the data by using central finite differences as follows:

$$\begin{aligned} \dot{\tilde{\mathbf{x }}}({\hat{t}}_k) = \frac{\tilde{\mathbf{x }}({{\hat{t}}_{k+1}})-\tilde{\mathbf{x }}({{\hat{t}}_{k-1}})}{{\hat{t}}_{k+1}-{\hat{t}}_{k-1}}, \;\; k = 2,\dots ,m-1 \end{aligned}$$

and, similar to (9), a data derivative matrix \(\dot{\tilde{\mathbf{X }}}\) is formed. Other derivative approximation strategies (e.g.,47) can be used when there is significant noise in the data. Ordinary least squares (OLS) regression was used to obtain initial coefficient estimates for \(\Xi \) in (4), as well as the associated upper and lower bounds (\(\Xi ^L\) and \(\Xi ^U\)). The linear system solved is given by:

$$\begin{aligned} \dot{\tilde{\mathbf{X }}} = \Theta (\tilde{\mathbf{X }})\Xi ^{OLS} \end{aligned}$$

This allows for performing preliminary variable selection by computing the F-statistic and p-value associated with each coefficient estimate, as well as for using the resulting confidence intervals (for a specified confidence level) to serve as (\(\Xi ^L\) and \(\Xi ^U\)) in (4). The statsmodels implementation of OLS was used to solve the regression problem (12), as well as to compute the associated test statistics and confidence intervals.

Discretization of the candidate ordinary differential equations

The collocation equations in (5) and (6) are represented in compact form as \(\mathbf{g }(\Theta (\mathbf{x }(t_{ij})^T, \mathbf{u }(t_{ij})^T,\mathbf{c })^T,\Xi )=0\) in the DNLP in (4). We employ the pyomo.DAE (http://www.pyomo.org/) modeling extension, which enables automatic simultaneous discretization of ODEs and leverages Gauss-Legendre and Gauss-Radau collocation schemes to determine the interpolating points in (5) and (6).
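The two-step initialization (central differences, then OLS with confidence-interval bounds) can be sketched for a scalar system with a two-function library. Everything below is illustrative: the dynamics, step size, and noise are assumed, and the normal-approximation multiplier z = 1.96 stands in for the exact t-based intervals that statsmodels returns.

```python
import math

def central_diff(x, h):
    """Central finite differences, eq. (11): interior points only."""
    return [(x[k + 1] - x[k - 1]) / (2 * h) for k in range(1, len(x) - 1)]

def ols_with_bounds(theta, dxdt, z=1.96):
    """OLS fit of dxdt on a two-column library, with normal-approximation
    confidence bounds serving as the (Xi^L, Xi^U) used in (4)."""
    n = len(theta)
    s11 = sum(r[0] * r[0] for r in theta)
    s12 = sum(r[0] * r[1] for r in theta)
    s22 = sum(r[1] * r[1] for r in theta)
    b1 = sum(r[0] * y for r, y in zip(theta, dxdt))
    b2 = sum(r[1] * y for r, y in zip(theta, dxdt))
    det = s11 * s22 - s12 * s12              # explicit 2x2 normal equations
    coef = [(s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det]
    resid = [y - coef[0] * r[0] - coef[1] * r[1] for r, y in zip(theta, dxdt)]
    sigma2 = sum(e * e for e in resid) / (n - 2)
    se = [math.sqrt(sigma2 * s22 / det), math.sqrt(sigma2 * s11 / det)]
    lower = [c - z * s for c, s in zip(coef, se)]
    upper = [c + z * s for c, s in zip(coef, se)]
    return coef, lower, upper
```

For measurements of dx/dt = -0.7 x with a small deterministic perturbation, the interval for the active term brackets the true coefficient, while the spurious quadratic term is estimated near zero.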
It should be noted that the optimal choice of finite elements and collocation points may not align with the sample times \({\hat{t}}_1, \dots , {\hat{t}}_m\) at which the data were originally collected, thus spline interpolation is required to approximate the data at the relevant time instants. We employ the SciPy package in Python using a cubic spline to estimate the state values at the collocation points. Moving horizon optimization approach A detailed outline of the moving horizon algorithm is presented next in Algorithm 2 and an illustration is shown in Supplementary Fig. S.2. In brief, the algorithm consists of solving (4) and performing parameter thresholding every \(\omega \) iterations (the thresholding process is described separately in Algorithm 3). The algorithm terminates when the training data are exhausted or when convergence is established, that is when the number of basis functions remaining in the library, denoted as \(|\Theta |\), does not change after a number \(\Omega \) of thresholding steps. Evidently, the number of optimization problems to be solved depends directly on the choice of optimization horizon H. Nevertheless, the problems in the sequence are likely significantly more computationally tractable than solving (4) for the entire (large-scale) data set. It is worth mentioning that, to date, no analytical frameworks exist for determining the optimal choice of horizon H. The empirical consensus is that longer horizons yield better results (i.e., convergence of the estimates to the true values of the parameters), which intuitively comes at a computational cost38. For periodic systems, such as the Lotka–Volterra system shown in Supplementary Fig. S.2, an intuitive choice for H can be an integer multiple of the period corresponding to the fundamental oscillation frequency, which can be estimated from the data. 
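The interplay between the moving horizon loop (Algorithm 2) and the coefficient-of-variation thresholding detailed in the following subsection (Algorithm 3) can be sketched as below. This is a structural skeleton only: the `estimate` callable stands in for solving the DNLP (4) on one data window, and the mock estimator and basis labels in the usage are purely illustrative.

```python
import math

def coefficient_of_variation(vals):
    """CV = standard deviation / |mean| of a sequence of estimates."""
    mu = sum(vals) / len(vals)
    sd = math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))
    return sd / abs(mu) if mu != 0 else float("inf")

def moving_horizon_discovery(estimate, library, psi=1.0, omega=3, n_windows=12):
    """Skeleton of Algorithms 2-3: 'estimate(w, active)' stands in for
    solving (4) on window w and returns {basis: coefficient}; every omega
    windows, basis functions whose CV exceeds psi are pruned."""
    active = list(library)
    history = {b: [] for b in library}
    for w in range(n_windows):
        est = estimate(w, active)                 # solve (4) on window w
        for b in active:
            history[b].append(est[b])
        if (w + 1) % omega == 0:                  # thresholding step
            active = [b for b in active
                      if coefficient_of_variation(history[b]) <= psi]
    return active, history

def mock_estimate(w, active):
    """Illustrative stand-in: the active term 'x' has a low-variance
    coefficient; the inactive term 'x^2' jitters around zero."""
    est = {}
    if "x" in active:
        est["x"] = -0.7 + 0.02 * math.sin(w)
    if "x^2" in active:
        est["x^2"] = 0.3 * math.cos(2.0 * w)
    return est
```

Running the skeleton prunes the high-variance spurious term at the first thresholding step and retains the active one, which is exactly the behavior the CV criterion is designed to produce.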
Thresholding algorithm

The proposed thresholding approach is described in Algorithm 3, and is embedded within the moving horizon scheme introduced previously in Algorithm 2. The variability of the sequence of coefficient estimates can be quantified statistically by computing the coefficient of variation (CV), defined as the ratio between the standard deviation and the mean. In our framework, the coefficient of variation is computed for the coefficients of all basis functions \(\Theta _\theta \; \forall \theta \in \{1,\dots ,|\Theta |\}\), and for all state variables \(j\in \{1,\dots ,n_x\}\). If the coefficient of variation \(CV_{\theta ,j}\) is greater than the specified variability threshold \(\psi \), then the basis function is pruned and not considered in future iterations of the moving horizon scheme. Otherwise, the basis function \(\Theta _\theta \) remains in the dictionary. Basis functions such as \(\{\pmb {1}, \mathbf{x }\} \in \Theta \) are particularly prone to contributing to overfitting noise components in the data, as well as to the differentiation error introduced when the dynamics are discretized (particularly in high noise environments and when the initial function library is large). To prevent spurious thresholding for these basis functions, whose associated coefficients are likely to see greater variability across different data subsets, it is preferable to retain them in the basis for the first few thresholding steps regardless of their associated coefficients' observed CV. After these initial iterations, and when (potentially) some of the other inactive basis functions have been pruned, basis functions such as \(\{\pmb {1}, \mathbf{x } \}\) that are in fact active are expected to experience less variability in their respective coefficients and to remain in the basis when the algorithm converges.

All data and code used in this analysis can be found at: https://github.com/Baldea-Group/DySMHO.

Karniadakis, G. E. et al.
Physics-informed machine learning. Nat. Rev. Phys. 3, 422–440 (2021).
Raissi, M., Perdikaris, P. & Karniadakis, G. E. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. J. Comput. Phys. 378, 686–707 (2019).
Pang, G., Lu, L. & Karniadakis, G. E. fPINNs: Fractional physics-informed neural networks. SIAM J. Sci. Comput. 41, A2603–A2626 (2019).
Yang, L., Meng, X. & Karniadakis, G. E. B-PINNs: Bayesian physics-informed neural networks for forward and inverse PDE problems with noisy data. J. Comput. Phys. 425, 109913 (2021).
Meng, X., Li, Z., Zhang, D. & Karniadakis, G. E. PPINN: Parareal physics-informed neural network for time-dependent PDEs. Comput. Methods Appl. Mech. Eng. 370, 113250 (2020).
Pang, G., D'Elia, M., Parks, M. & Karniadakis, G. E. nPINNs: Nonlocal physics-informed neural networks for a parametrized nonlocal universal Laplacian operator. Algorithms and applications. J. Comput. Phys. 422, 109760 (2020).
James, G., Witten, D., Hastie, T. & Tibshirani, R. An Introduction to Statistical Learning, vol. 112 (Springer, 2013).
Schmidt, M. & Lipson, H. Distilling free-form natural laws from experimental data. Science 324, 81–85 (2009).
Koza, J. R. Genetic Programming: On the Programming of Computers by Means of Natural Selection, vol. 1 (MIT Press, 1992).
Udrescu, S.-M. & Tegmark, M. AI Feynman: A physics-inspired method for symbolic regression. Sci. Adv. 6, eaay2631 (2020).
Cranmer, M. et al. Discovering symbolic models from deep learning with inductive biases. arXiv preprint arXiv:2006.11287 (2020).
Dubčáková, R. Eureqa: Software review. Genet. Program. Evol. Mach. 12, 173–178 (2011).
Xu, H., Chang, H. & Zhang, D. DLGA-PDE: Discovery of PDEs with incomplete candidate library via combination of deep learning and genetic algorithm. J. Comput. Phys. 418, 109584 (2020).
Tibshirani, R.
Regression shrinkage and selection via the lasso. J. R. Stat. Soc. Ser. B Stat. Methodol. 58, 267–288 (1996).
Zou, H. & Hastie, T. Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B Stat. Methodol. 67, 301–320 (2005).
Brunton, S. L., Proctor, J. L. & Kutz, J. N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl. Acad. Sci. 113, 3932–3937 (2016).
Zhang, L. & Schaeffer, H. On the convergence of the SINDy algorithm. Multiscale Model. Simul. 17, 948–972 (2019).
de Silva, B. et al. PySINDy: A Python package for the sparse identification of nonlinear dynamical systems from data. J. Open Source Softw. 5, 2104. https://doi.org/10.21105/joss.02104 (2020).
Rudy, S. H., Brunton, S. L., Proctor, J. L. & Kutz, J. N. Data-driven discovery of partial differential equations. Sci. Adv. 3, e1602614 (2017).
Schaeffer, H. Learning partial differential equations via data discovery and sparse optimization. Proc. R. Soc. A Math. Phys. Eng. Sci. 473, 20160446 (2017).
Champion, K., Lusch, B., Kutz, J. N. & Brunton, S. L. Data-driven discovery of coordinates and governing equations. Proc. Natl. Acad. Sci. 116, 22445–22451 (2019).
Mangan, N. M., Brunton, S. L., Proctor, J. L. & Kutz, J. N. Inferring biological networks by sparse identification of nonlinear dynamics. IEEE Trans. Mol. Biol. Multi-Scale Commun. 2, 52–63 (2016).
Kaiser, E., Kutz, J. N. & Brunton, S. L. Sparse identification of nonlinear dynamics for model predictive control in the low-data limit. Proc. R. Soc. A Math. Phys. Eng. Sci. 474, 20180335 (2018).
Hoffmann, M., Fröhner, C. & Noé, F. Reactive SINDy: Discovering governing reactions from concentration data. J. Chem. Phys. 150, 025101 (2019).
Sun, W. & Braatz, R. D. ALVEN: Algebraic learning via elastic net for static and dynamic nonlinear model identification. Comput. Chem. Eng. 143, 107103 (2020).
Cozad, A., Sahinidis, N. V. & Miller, D. C. Learning surrogate models for simulation-based optimization. AIChE J. 60, 2211–2227 (2014).
Schaeffer, H. & McCalla, S. G. Sparse model selection via integral terms. Phys. Rev. E 96, 023302 (2017).
Messenger, D. A. & Bortz, D. M. Weak SINDy: Galerkin-based data-driven model selection. Multiscale Model. Simul. 19, 1474–1497 (2021).
Reinbold, P. A., Gurevich, D. R. & Grigoriev, R. O. Using noisy or incomplete data to discover models of spatiotemporal dynamics. Phys. Rev. E 101, 010203 (2020).
Goyal, P. & Benner, P. Discovery of nonlinear dynamical systems using a Runge-Kutta inspired dictionary-based sparse regression approach. arXiv preprint arXiv:2105.04869 (2021).
Kaheman, K., Brunton, S. L. & Kutz, J. N. Automatic differentiation to simultaneously identify nonlinear dynamics and extract noise probability distributions from data. arXiv preprint arXiv:2009.08810 (2020).
Cao, W. & Zhang, W. Machine learning of partial differential equations from noise data. arXiv preprint arXiv:2010.06507 (2020).
Tran, G. & Ward, R. Exact recovery of chaotic systems from highly corrupted data. Multiscale Model. Simul. 15, 1108–1129 (2017).
Champion, K., Zheng, P., Aravkin, A. Y., Brunton, S. L. & Kutz, J. N. A unified sparse optimization framework to learn parsimonious physics-informed models from data. IEEE Access 8, 169259–169271 (2020).
Fasel, U., Kutz, J. N., Brunton, B. W. & Brunton, S. L. Ensemble-SINDy: Robust sparse model discovery in the low-data, high-noise limit, with active learning and control. Proc. R. Soc. A 478, 20210904 (2022).
Reinbold, P. A., Kageorge, L. M., Schatz, M. F. & Grigoriev, R. O. Robust learning from noisy, incomplete, high-dimensional experimental data via physically constrained symbolic regression. Nat. Commun. 12, 1–8 (2021).
Biegler, L. T. Nonlinear Programming: Concepts, Algorithms, and Applications to Chemical Processes (SIAM, 2010).
Rao, C. V., Rawlings, J. B. & Mayne, D. Q.
Constrained state estimation for nonlinear discrete-time systems: Stability and moving horizon approximations. IEEE Trans. Autom. Control 48, 246–258 (2003).
Rawlings, J. B., Mayne, D. Q. & Diehl, M. Model Predictive Control: Theory, Computation, and Design Vol. 2 (Nob Hill Publishing, 2017).
Kandepu, R., Foss, B. & Imsland, L. Applying the unscented Kalman filter for nonlinear state estimation. J. Process Control 18, 753–768 (2008).
Kravaris, C., Hahn, J. & Chu, Y. Advances and selected recent developments in state and parameter estimation. Comput. Chem. Eng. 51, 111–123 (2013).
Zheng, P., Askham, T., Brunton, S. L., Kutz, J. N. & Aravkin, A. Y. A unified framework for sparse relaxed regularized regression: SR3. IEEE Access 7, 1404–1423 (2018).
Nicholson, B., Siirola, J. D., Watson, J.-P., Zavala, V. M. & Biegler, L. T. Pyomo.DAE: A modeling and automatic discretization framework for optimization with differential and algebraic equations. Math. Program. Comput. 10, 187–223 (2018).
Ho, T. K. Random decision forests. In Proceedings of 3rd International Conference on Document Analysis and Recognition, vol. 1, 278–282 (IEEE, 1995).
Lejarza, F. & Baldea, M. Discovering governing equations via moving horizon learning: The case of reacting systems. AIChE J. 66, e17567 (2021).
Kelley, M. T., Baldick, R. & Baldea, M. A direct transcription-based multiple shooting formulation for dynamic optimization. Comput. Chem. Eng. 140, 106846 (2020).
Chartrand, R. Numerical differentiation of noisy, nonsmooth data. Int. Sch. Res. Not. 2011, 66 (2011).

Support from the National Science Foundation, USA, through the CAREER Award 1454433 (recipient: M.B.) is acknowledged with gratitude. F.L. acknowledges support from The University of Texas at Austin through the Donald D. Harrington Dissertation Fellowship.
McKetta Department of Chemical Engineering, The University of Texas at Austin, 200 E Dean Keeton St, Austin, TX, 78712-1589, USA
Fernando Lejarza & Michael Baldea

Oden Institute for Computational Engineering and Sciences, The University of Texas at Austin, 201 E 24th St, Austin, TX, 78712-1229, USA
Michael Baldea

Conceptualization: F.L., M.B. Methodology: F.L., M.B. Investigation: F.L. Software development: F.L. Numerical experiments: F.L. Visualization: F.L. Funding acquisition: M.B. Supervision: M.B. Writing—original draft: F.L. Writing—review and editing: F.L., M.B.

Correspondence to Michael Baldea.

The authors declare no competing interests.

Supplementary Information.

Lejarza, F., Baldea, M. Data-driven discovery of the governing equations of dynamical systems via moving horizon optimization. Sci Rep 12, 11836 (2022). https://doi.org/10.1038/s41598-022-13644-w
MITA RAJARAM

Articles written in Journal of Earth System Science

Volume 91 Issue 1 March 1982 pp 79-83

Induction by long period geomagnetic variations in the Indian sub-continent

Mita Rajaram B P Singh S Y Waghmare

In the present paper, storm time variations and the 27-day geomagnetic periodicity have been analysed to estimate the depth of the substitute conductor, assuming an infinitely (super) conducting core model of the earth. The advantage of using data from a restricted longitude range is that the uncertainties arising from lateral contrasts in the upper mantle and contributions from Sq current systems are considerably reduced. The result of the present analysis, which has been done in the time domain, gives a value of 522 km for the depth of the substitute conductor in the case of storm time variations, which rises to 870 km for 27-day recurrent storms. A higher value of the depth for 27-day variations indicates that the rise in conductivity inside the earth is not a step function but rather a gradual one. The value of 522 km for storm time variations for the Indian region is smaller than the global average. This is natural to expect because the Indian sub-continent is known to be a tectonically active region.

Magsat studies over the Indian region

B P Singh Mita Rajaram

Data collected by Magsat have been extensively used by Indian scientists in studies of the crust beneath India. Results obtained by various workers have been summarized and the reasons for differences in findings have been discussed. It is concluded that methods that work well for higher latitudes do not give the best estimates of crustal field and magnetization in equatorial regions. A better estimate of the crustal component is obtained when the external current contribution is estimated using the symmetry properties of the associated X- and Z-fields. The inversion technique that provides stable crustal magnetization in midlatitudes becomes unstable near the equator.
Why such an instability arises and how it can be circumvented are discussed. That the Peninsular shield, the Ganga basin and the Himalayas are three different geotectonic blocks is clearly reflected in the magnetization distribution. A thick magnetic crust under Aravalli, Singhbum and Dharwar suggests that these areas are comparatively stable. In general, seismic, gravity and heat flow data agree characteristically well with the magnetization estimates.

Volume 109 Issue 3 September 2000 pp 381-391

Aeromagnetic study of peninsular India

P Harikumar Mita Rajaram T S Balakrishnan

The degree sheet aeromagnetic maps up to 17°N, acquired from the Geological Survey of India, have been manually redigitised at 6 minute intervals to study the long wavelength anomalies over peninsular India. These data have been collected at different survey altitudes, epochs, flight line directions, etc. Great care has been taken to correct the total field map and remove the contribution due to the core field, so as to prepare an accurate crustal anomaly map. For the first time, a regional map is presented depicting the NW-SE structural features north of the orthopyroxene isograd together with the essentially E-W features to the south of it, and revealing several well known structures. The analytical signal is calculated to delineate the source fields of these anomalies. It dramatically maps the charnockites and is able to delineate the orthopyroxene isograd. In the Dharwar region the magnetic signatures are associated with the intrusives/iron ore bodies. Thus, we find that the source rocks of the aeromagnetic anomalies are the host province of charnockites in the SGT and the intrusives/iron ore bodies in the Dharwar belt. Gravity residuals are calculated and a tectonic map of the region is presented from the combined geopotential data.
Volume 124 Issue 3 April 2015 pp 613-630

A relook into the crustal architecture of Laxmi Ridge, northeastern Arabian Sea from geopotential data

Nisha Nair S P Anand Mita Rajaram P Rama Rao

In this study, we undertake analysis of ship-borne gravity-magnetic and satellite-derived free-air gravity (FAG) data to derive the crustal structure of Laxmi Ridge and adjacent areas. 2D and 3D crustal modelling suggests that the high resolution FAG low associated with the ridge is due to underplating and that it is of continental nature. From energy spectral analysis, five depth horizons representing interfaces between different layers are demarcated, which match those derived from the 2D models. Magnetic sources from EMAG2 data, various filtered maps and the absence of underplating in the EW section suggest that the EW and NW–SE segments of the Laxmi Ridge are divided by the Girnar fracture zone and probably associated with different stages of evolution. From the derived inclination parameters, we infer that the region to the north of Laxmi Ridge, between the Laxmi and Gop Basins, is composed of volcanic/basaltic flows having Deccan affinity, which might have been emplaced in an already existing crust. The calculated inclination parameters derived from the best fit 2D model suggest that the rifting in the Gop Basin preceded the emplacement of the volcanics in the region between the Laxmi and Gop Basins. The emplacement of volcanic/basaltic flows may be associated with the passage of India over the Reunion hotspot.

Volume 128 Issue 8 December 2019 Article ID 0215 Research Article

Structural framework of the Wagad uplift and adjoining regions, Kutch rift basin, India, from aeromagnetic data

P R RADHIKA S P ANAND MITA RAJARAM P RAMA RAO

The Kutch sedimentary basin, formed during the Late Triassic breakup of Gondwanaland, is characterised by horst and graben structures consisting of several east–west trending uplifts surrounded by low-lying plains.
The eastern part of the basin has a diverse landscape comprising the Wagad uplift, Banni plain, Island Belt uplift and the Rann of Kutch. This area is bounded by major faults like the South Wagad Fault (SWF), Gedi fault and the Island Belt Fault. The lineaments/faults present in the region at different depth levels and the propagation of these features through the different sedimentary layers are studied using the semi-detailed aeromagnetic data collected over the basin. The aeromagnetic anomaly map depicts several major E–W, NE–SW and NW–SE oriented lineaments/faults, which probably represent structural trends associated with different stages of evolution of this rift basin. Power spectral analysis of the differential reduced to pole magnetic data indicates the presence of four magnetic interfaces. The slopes identified from the 1D power spectra were used for designing matched bandpass filters for isolating and enhancing the magnetic signatures present within those interfaces. Different edge detection techniques were used to delineate the magnetic contacts/faults/lineaments present in those interfaces. In addition, we have computed the radially averaged power spectrum of 121 subset grids each with a dimension of $\rm{20 km \times 20 km}$ from which three magnetic interfaces were delineated and compared with the stratigraphic sequence of the Wagad uplift and adjoining regions. A major NE–SW fault is delineated from this analysis and suggests that this fault has depth persistence as it dislocates the different magnetic interfaces. Integration with stratigraphic data suggests that this fault was formed prior to the deposition of Miocene Kharinadi formation. We have interpreted that this fault, forming the eastern limit of the Banni basin, might have formed during the passage of the Indian plate over the Reunion hotspot. Based on the results of the aeromagnetic data analysis and other published data, we propose a generalised evolutionary model for the study region. 
# Mathematical background: Fourier transform and convolution

The Fourier transform is a mathematical tool that allows us to analyze signals in the frequency domain. It is defined as the integral of a function multiplied by the complex exponential of a frequency variable. The Fourier transform is a fundamental concept in signal processing, as it allows us to analyze the frequency components of a signal.

The convolution of two functions is another important concept in signal processing. It is defined as the integral of the product of two functions, one of which is shifted and reflected. Convolution is a fundamental operation in signal processing, as it allows us to combine the properties of two functions.

The Gaussian function is a widely used function in signal processing. It is defined by the equation:

$$G(t; \sigma) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{t^2}{2\sigma^2}}$$

The Fourier transform of the Gaussian function is also a Gaussian:

$$\mathcal{F}\{G(t; \sigma)\} = e^{-\omega^2 \sigma^2 / 2}$$

The convolution of a signal with a Gaussian is equivalent to low-pass filtering of that signal. Similarly, convolving a signal with the *derivative* of a Gaussian combines differentiation with Gaussian low-pass filtering in a single, efficient operation: it computes the derivative of the lower-frequency signal content while attenuating higher-frequency noise.

Let's consider a signal $f(t)$ and a Gaussian function $G(t; \sigma)$. The convolution of $f(t)$ with $G(t; \sigma)$ is given by:

$$(f * G)(t) = \int_{-\infty}^{\infty} f(\tau) G(t - \tau; \sigma) d\tau$$

## Exercise

Compute the Fourier transform of the Gaussian function $G(t; \sigma)$.

# Gaussian filters and their properties

Gaussian filters are a class of filters that use the Gaussian function as their kernel. They are widely used in signal processing for tasks such as noise reduction, image processing, and data analysis.

The properties of Gaussian filters include:

- The filter is symmetric and normalized.
- The filter is separable in two dimensions.
- The filter is rotationally invariant.
- The filter is Gaussian in the frequency domain.

Let's consider a 2D Gaussian filter with standard deviation $\sigma$. The filter kernel is given by:

$$G(x, y; \sigma) = \frac{1}{2\pi\sigma^2} e^{-\frac{x^2 + y^2}{2\sigma^2}}$$

## Exercise

Compute the Fourier transform of the 2D Gaussian filter kernel.

# Effect of kernel size on the filter's performance

The size of the Gaussian filter kernel affects the filter's performance. A larger kernel size allows for more effective noise reduction, but it also increases the computational complexity.

The effect of kernel size on the filter's performance can be visualized using a plot of the filter's impulse response. As the kernel size increases, the impulse response becomes more spread out, giving stronger smoothing at the cost of more computation per output sample.

Let's consider a 1D discrete Gaussian filter of kernel size $n = 2m + 1$. The filter kernel is the sampled, renormalized Gaussian

$$G[i] = \frac{e^{-\frac{i^2}{2\sigma^2}}}{\sum_{j=-m}^{m} e^{-\frac{j^2}{2\sigma^2}}}, \qquad i = -m, \ldots, m$$

## Exercise

Compute the impulse response of a 1D Gaussian filter with kernel size $n$.

# Applications of Gaussian filters in signal processing: noise reduction, image processing, and data analysis

Gaussian filters are widely used in signal processing for noise reduction, image processing, and data analysis.

In noise reduction, Gaussian filters are used to suppress high-frequency noise while preserving the low-frequency signal. This is achieved by convolving the input signal with the Gaussian kernel: the smoothed output retains the low-frequency content while the high-frequency noise is averaged away.

In image processing, Gaussian filters are used for tasks such as smoothing, edge detection, and image segmentation. They are often used to remove noise from images before applying other processing techniques.

In data analysis, Gaussian filters are used to smooth time series data and reduce noise while preserving the underlying trends.
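To make the noise-reduction use concrete, here is a small NumPy sketch (an illustration added to this text, not from the original): it builds a sampled, normalized Gaussian kernel, convolves it with a noisy sine wave, and checks that the error against the clean signal drops.

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    """Sampled, renormalized Gaussian kernel on [-radius, radius]."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

kernel = gaussian_kernel(sigma=2.0, radius=10)

# Low-frequency signal plus high-frequency noise
t = np.linspace(0, 1, 500)
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 3 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)

smoothed = np.convolve(noisy, kernel, mode="same")

# Convolving with the Gaussian attenuates the noise:
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((smoothed - clean) ** 2)
print(err_after < err_before)  # True
```

Note that the kernel must sum to one; otherwise the filter would also rescale the signal, not just smooth it.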
Let's consider a 1D Gaussian filter with standard deviation $\sigma$ applied to a noisy signal $s(t)$. The smoothed (noise-reduced) signal is the convolution

$$s_{filtered}(t) = (G * s)(t)$$

## Exercise

Apply a 1D Gaussian filter with standard deviation $\sigma$ to a noisy signal $s(t)$.

# Comparison with other filters: box filter, median filter, and bilateral filter

Gaussian filters can be compared with other filters such as the box filter, the median filter, and the bilateral filter.

The box filter is a simple low-pass filter that averages the values within a rectangular region. It is less flexible than the Gaussian filter because it weights all samples equally, and its frequency response has large side lobes.

The median filter is a non-linear filter that replaces each pixel with the median value of the pixels in a rectangular region. It is more robust to outliers than the Gaussian filter, but it does not adapt to the local structure of the signal.

The bilateral filter is a non-linear filter that takes into account both the spatial and intensity variations of the signal. It is more flexible than the Gaussian filter, as it can adapt to the local structure of the signal.

Let's compare a Gaussian filter with standard deviation $\sigma$ to a box filter of size $n$. The two filtered signals are

$$s_G(t) = (G * s)(t), \qquad s_{box}(t) = \frac{1}{n} \sum_{i=0}^{n-1} s(t - i)$$

## Exercise

Compare the performance of a Gaussian filter with standard deviation $\sigma$ to a box filter with size $n$.

# Implementation and optimization of Gaussian filters in popular programming languages

Gaussian filters can be implemented and optimized in popular programming languages such as Python, MATLAB, and C++.

In Python, the SciPy library provides functions for computing the Fourier transform and applying Gaussian filters.

In MATLAB, the `fft` and `conv` functions can be used to compute the Fourier transform and apply Gaussian filters.
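The robustness contrast between the median and Gaussian filters described in the comparison above can be demonstrated on impulse noise. A small sketch using SciPy (assumed available, as in the Python implementation discussed here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import medfilt

clean = np.ones(200)
noisy = clean.copy()
noisy[[20, 60, 100, 140, 180]] = 10.0  # isolated impulse (salt) noise

gauss_out = gaussian_filter1d(noisy, sigma=2)
median_out = medfilt(noisy, kernel_size=5)

# The median filter removes isolated spikes exactly ...
print(np.max(np.abs(median_out - clean)) == 0.0)  # True
# ... while the Gaussian filter only spreads them out.
print(np.max(np.abs(gauss_out - clean)) > 0.5)    # True
```

This is the outlier-robustness trade-off in miniature: the median is non-linear and discards extreme values in each window, while the Gaussian is a weighted average to which every sample, including the outlier, contributes.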
In C++, the FFTW library can be used to compute the Fourier transform, and the OpenCV library provides functions for applying Gaussian filters.

Let's implement a Gaussian filter with standard deviation $\sigma$ in Python using the SciPy library:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

signal = np.random.rand(100)
filtered_signal = gaussian_filter(signal, sigma=5)
```

## Exercise

Implement a Gaussian filter with standard deviation $\sigma$ in MATLAB:

```matlab
signal = rand(1, 100);
filtered_signal = imfilter(signal, fspecial('gaussian', [1 5], 5), 'same');
```

# Examples and case studies

Let's consider a noisy signal $s(t)$ and a Gaussian filter with standard deviation $\sigma$. The smoothed signal is given by the convolution

$$s_{filtered}(t) = (G * s)(t)$$

## Exercise

Apply a Gaussian filter with standard deviation $\sigma$ to a noisy signal $s(t)$.

# Conclusion and future developments

In this textbook, we have covered the mathematical background of Gaussian filters, their properties, the effect of kernel size on the filter's performance, and their applications in signal processing.

Gaussian filters are a powerful tool in signal processing, as they can effectively suppress high-frequency noise while preserving the low-frequency signal. They are widely used in noise reduction, image processing, and data analysis.

Future developments in Gaussian filters may include adaptive kernel sizes, more efficient implementations, and the integration of machine learning techniques for better adaptation to the local structure of the signal.

## Exercise

Summarize the key concepts and applications of Gaussian filters in signal processing.
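As a closing cross-check (an addition to this text, not part of the original), the SciPy call used in the implementation section can be reproduced by explicit convolution with a sampled kernel, assuming SciPy's defaults of `truncate=4.0` and the `reflect` boundary mode:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def manual_gaussian_filter(signal, sigma, truncate=4.0):
    """Direct convolution with a sampled, normalized Gaussian kernel."""
    radius = int(truncate * sigma + 0.5)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # np.pad's 'symmetric' mode matches scipy.ndimage's default 'reflect' mode
    padded = np.pad(signal, radius, mode="symmetric")
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(0)
s = rng.random(100)
ours = manual_gaussian_filter(s, 2.0)
scipys = gaussian_filter1d(s, 2.0)
print(np.allclose(ours, scipys))  # True
```

Matching the boundary mode matters: a zero-padded version would agree with SciPy only away from the edges.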
\begin{document}
\title{Suppressing decoherence and improving entanglement by quantum-jump-based feedback control in two-level systems}
\author{S. C. Hou}
\author{X. L. Huang}
\author{X. X. Yi}
\affiliation{School of Physics and Optoelectronic Technology,\\ Dalian University of Technology, Dalian 116024 China}
\date{\today}
\begin{abstract}
We study the effect of quantum-jump-based feedback control on the entanglement shared between two qubits, one of which is subject to decoherence while the other is under the control. This situation is very relevant to a quantum system consisting of nuclear and electron spins in solid states. The possibility to prolong the coherence time of the dissipative qubit is also explored. Numerical simulations show that the quantum-jump-based feedback control can improve the entanglement between the qubits and prolong the coherence time of the qubit subject directly to decoherence.
\end{abstract}
\pacs{73.40.Gk, 03.65.Ud, 42.50.Pq}\maketitle
\section{Introduction}
Superposition of states and entanglement make quantum information processing much different from its classical counterpart. However, a quantum system unavoidably interacts with its environment, resulting in a degradation of coherence and entanglement. For example, spontaneous emission in atomic qubits \cite{Roos} spoils the coherence of quantum states and limits the entanglement time. Recent experimental advances have enabled individual systems to be monitored and manipulated at the quantum level \cite{Puppe}. This makes quantum feedback control realizable. Among the feedback controls, the homodyne-mediated feedback \cite{Wiseman,Wiseman2} and the quantum-jump-based feedback have been proposed to generate steady-state entanglement in a cavity \cite{Wang,Carvalho}. These two feedback schemes are Markovian, namely, feedback proportional to the quantum-jump detection signal is applied synchronously.
Besides, these control schemes can also be used to suppress decoherence \cite{Viola,Katz,Ganesan,Zhang}. Meanwhile, researchers are looking for suitable systems for the experimental implementation of quantum information processing. Among the various candidates, solid-state quantum devices based on superconductors \cite{Bertet} and lateral quantum dots \cite{Hayashi} are promising ones; however, the decoherence caused by intrinsic noise originating from two-level fluctuators is hard to engineer away \cite{Rebentrost}. For this reason, nuclear spins have attracted considerable attention \cite{Vandersypen} due to their long coherence times \cite{Ladd}. However, their weak interactions with other systems make their preparation, control, and detection difficult. Thanks to its intrinsic interaction with the nuclear spin, the electron spin can be used as an ancilla to access a single nuclear spin. This naturally leads us to raise the following question: can a feedback strategy be used to suppress decoherence and to prepare and protect entanglement between the nuclear and electron spins by controlling the electron spin?
In this paper, we study this problem by considering a nuclear spin (as one qubit) coupled to an electron spin (as the other qubit) that is exposed to its environment. We show that a Markovian feedback based on quantum jumps can be used to suppress decoherence, produce entanglement and protect it. The paper is organized as follows: In Sec.{\rm II}, we describe our model and present the dynamics in the absence of feedback. In Sec.{\rm III}, we introduce the quantum-jump-based feedback control and give the dynamical equation under the feedback control. The effect of feedback control on decoherence and entanglement is discussed in Sec.{\rm IV} and Sec.{\rm V}, respectively. Sec.{\rm VI} concludes the paper.
\section{Model}
Our system consists of a pair of two-level systems, called qubit 1 and qubit 2, where only qubit 2 interacts with its environment.
We present a scheme employing quantum-jump-based feedback control on qubit 2 to affect the decoherence of qubit 1 and to increase the entanglement between the two qubits. The Hamiltonian of the system reads
\begin{eqnarray}
H=\frac{1}{2}\hbar\omega_1\sigma_{1}^z+\frac{1}{2}\hbar\omega_2\sigma_{2}^z+\hbar g(\sigma_1^+\sigma_2^-+\sigma_1^-\sigma_2^+) .
\label{eqn:systemhamiltonian}
\end{eqnarray}
The first two terms represent the free Hamiltonian of the two qubits, and the last term describes their interaction under the rotating-wave approximation. $\omega_1$ and $\omega_2$ are the transition frequencies of the two qubits, respectively. $g$ is the coupling strength of the two qubits. $\sigma_z$ is the Pauli matrix, i.e., $\sigma_z=\ket{e}\bra{e}-\ket{g}\bra{g}$, and $\sigma^+=\ket{e}\bra{g}$, $\sigma^-=\ket{g}\bra{e}.$ The state of this quantum system can be described by the density operator $\rho$, which is obtained by tracing out the environment. The dynamics of open quantum systems can be described by quantum master equations. The most general form of the master equation for the density operator is \cite{quantumoptics,quantumnoise}
\begin{eqnarray}
\dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\mathcal{L}(\rho),
\label{eqn:masterequation}
\end{eqnarray}
where $H$ is the system Hamiltonian and $\mathcal{L}$ is a superoperator defined by $\mathcal{L}(\rho)=\sum_k\gamma_k(L_k\rho L_k^{\dag}-\frac{1}{2}L_k^\dag L_k\rho-\frac{1}{2}\rho L_k^\dag L_k),$ in which the index $k$ labels different dissipative channels. In our system, the first qubit is assumed to be isolated from the environment. The decoherence comes from the spontaneous emission of qubit 2 (the second qubit). This situation is of relevance to a system consisting of nuclear and electron spins in the aforementioned solid-state devices.
The dynamics of such a system takes the form
\begin{eqnarray}
\dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\gamma(\sigma_2^-\rho\sigma_2^+- \frac{1}{2}\sigma_2^+\sigma_2^-\rho-\frac{1}{2}\rho \sigma_2^+\sigma_2^-) .
\label{eqn:full}
\end{eqnarray}
Here $\sigma_2^{\pm}=I_1\otimes\sigma_2^{\pm}$. The second part of Eq.(\ref{eqn:full}) describes the dissipation of our system with $\gamma$ the decay rate. Though the first qubit is assumed to be isolated from the environment, it still loses coherence due to its coupling to the second qubit. The decoherence process is shown by the decay of the off-diagonal elements of the reduced density matrix of the first qubit. In order to investigate this decoherence, we calculate the evolution of the system density operator $\rho$ and then trace out the second qubit to get the reduced density matrix
\begin{eqnarray}
\rho_1=\text{Tr}_2(\rho)=\sum_{k=e,g}{ }_2\langle k|\rho|k\rangle_2= \left( \begin{array}{cc} \rho_{ee} & \rho_{eg}\\ \rho_{ge} & \rho_{gg}\\ \end{array} \right).
\end{eqnarray}
The diagonal elements are the populations in the excited and ground states of the first qubit, and the off-diagonal elements represent the coherence of qubit 1.
\section{Quantum-jump-based feedback control}
Quantum feedback controls play an increasingly important role in quantum information processing. They are widely used to create and stabilize entanglement as well as to combat decoherence \cite{Carvalho,Wang,Zhang,Katz}. In our model, the second qubit is used as an ancilla through which the feedback can affect the dynamics of the first qubit, i.e., by employing a feedback control on the second qubit, we control the first qubit. The goal is to suppress the decoherence of the first qubit and enhance the entanglement between the two qubits by a feedback control on the second qubit \cite{Carvalho}. Our feedback control strategy is based on quantum-jump detection. The master equation with feedback can be derived from the general measurement theory \cite{Wiseman2}.
In our paper, Eq.(\ref{eqn:full}) is equivalent to
\begin{eqnarray}
\rho(t+dt)=\sum_{\alpha=0,1}\Omega_{\alpha}(dt)\rho(t)\Omega_{\alpha}^{\dag}(dt),
\label{measure}
\end{eqnarray}
with
\begin{eqnarray}
\Omega_{1}(dt)=\sqrt{\gamma dt}\sigma_2^- \\\Omega_{0}(dt)=1-(\frac{i}{\hbar}H+\frac{1}{2}\gamma\sigma_2^+\sigma_2^-)dt.
\label{measure2}
\end{eqnarray}
When the measurement result is $\alpha=1$, a detection occurs, which causes a finite evolution in the system via $\Omega_1(dt)$. This is called a quantum jump. The unnormalized density matrix then becomes $\tilde{\rho}_{\alpha=1}=\gamma\sigma_2^-\rho(t)\sigma_2^+dt$. The feedback control is added by giving $\tilde{\rho}_{\alpha=1}$ a finite unitary evolution, after which $\tilde{\rho}_{\alpha=1}$ becomes $\tilde{\rho}_{\alpha=1}=\gamma F\sigma_2^-\rho(t)\sigma_2^+F^{\dag}dt$. In the limit that the feedback acts immediately after a detection and in a very short time (much smaller than the time scale of the system's evolution), the master equation is Markovian,
\begin{eqnarray}
\dot{\rho}=-\frac{i}{\hbar}[H,\rho]+\gamma(F\sigma_2^-\rho\sigma_2^+F^\dag- \frac{1}{2}\sigma_2^+\sigma_2^-\rho-\frac{1}{2}\rho \sigma_2^+\sigma_2^-).
\label{eqn:controlled}
\end{eqnarray}
Here $F=e^{iH_f}$ and $H_f=-\frac{1}{\hbar}H_f't_f.$ We see that the operator $H_f$ contains a relatively large operator $H_f'$ multiplied by a very short time $t_f$ (Markovian assumption), but the product represents a certain amount of evolution, so it is convenient to discuss $H_f$ instead of $H_f'$ and $t_f$. Here $H_f$ is a $2\times 2$ Hermitian operator which can be decomposed in terms of Pauli matrices, $H_f=A_x\sigma_x+A_y\sigma_y+A_z\sigma_z$ ($A_x,A_y,A_z$ are real numbers). So we have
\begin{eqnarray}
F=I_1\otimes e^{i\vec{A}\cdot\vec{\sigma}}=I_1\otimes(\cos|\vec{A}|+i\frac{\sin|\vec{A}|}{|\vec{A}|} \vec{A}\cdot\vec{\sigma}).
\label{eqn:feedback}
\end{eqnarray}
Here $\vec{\sigma}=(\sigma_{x},\sigma_{y},\sigma_{z})$, and $\vec{A}=(A_x,A_y,A_z)$ represents the amplitudes of the $\sigma_x$, $\sigma_y$ and $\sigma_z$ controls. In order to understand the physical meaning of the feedback operator $F$, we rewrite it as $F=I_1\otimes e^{-i\frac{\omega}{2}\vec{n}\cdot\vec{\sigma}}$, where $\vec{n}=(\sin{\theta}\cos{\phi},\sin{\theta}\sin{\phi}, \cos{\theta})$ and $\vec{\sigma}=(\sigma_x,\sigma_y,\sigma_z)$; this feedback operator is equivalent to a time evolution with evolution operator $F=I_1\otimes e^{iH_f}$. It is clear that the operator $F$ rotates the Bloch vector of the second qubit by the angle $\omega$ around the $\vec{n}$ axis. The relationship between the two forms of $F$ is $A_x=-\frac{\omega}{2}\sin{\theta}\cos{\phi}, A_y=-\frac{\omega}{2}\sin{\theta}\sin{\phi}, A_z=-\frac{\omega}{2}\cos{\theta}.$ So a $\sigma_x$ control ($A_y=0,A_z=0$) means rotating the Bloch vector by a certain angle around the $x$ axis of the Bloch sphere, and similarly for the $A_y$ and $A_z$ controls. Different $\vec{A}$ represent different feedback evolutions, i.e., rotations of the Bloch vector by a particular angle around a particular axis of the Bloch sphere. For simplicity, we discuss the $\sigma_x$, $\sigma_y$ and $\sigma_z$ controls one by one in the following. This control mechanism has the advantage of being simple to apply in practice, since it does not need real-time state estimation as Bayesian feedback control does \cite{Wiseman3}. The emission of the second qubit is measured by a photodetector, whose signal provides the information to design the control $F$. In this kind of monitoring, the absence of a signal dominates the dynamics, and the control is triggered only after a detection click, i.e., a quantum jump, occurs.
\section{Decoherence suppression}
Before investigating the influence of the feedback control, we first analyze the evolution of our system without control.
Assume that the two qubits are initially in the same pure superposition state, for example, $|\psi\rangle=\frac{1}{\sqrt{2}}(|e\rangle_1+|g\rangle_1) \otimes\frac{1}{\sqrt{2}}(|e\rangle_2+|g\rangle_2)$. The corresponding density matrix is
\begin{eqnarray}
\rho_0=|\psi\rangle\langle\psi|=\frac{1}{4} \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 \\ \end{array} \right).
\end{eqnarray}
We set the Planck constant $\hbar=1$, take $\omega_1=\omega_2=\omega$ in Eq.(\ref{eqn:systemhamiltonian}), and choose $g/\omega=1,\gamma/\omega=0.5$. After numerical calculation, we get the evolution of the density matrix for the first qubit without control. Since $\rho_{eg}=\rho_{ge}^*, \rho_{ee}+\rho_{gg}=1$, we only discuss the coherence $|\rho_{eg}|$ and the excited-state population $\rho_{ee}$ for simplicity. The evolution of $|\rho_{eg}|$ and $\rho_{ee}$ without control is depicted in Fig.\ref{FIG:rhovs} (a) and (b) (dashed lines). In Fig.\ref{FIG:rhovs} (a), a fast decay of $|\rho_{eg}|$ (dashed line) can be found. This demonstrates that the first qubit loses coherence due to the second qubit's spontaneous emission and their interaction. Meanwhile, the first qubit loses energy due to its coupling to the second qubit (Fig.\ref{FIG:rhovs} (b), dashed line). The results also show that the excited-state population decays away. This is because the first qubit exchanges energy with the second qubit, see Eq.(\ref{eqn:systemhamiltonian}).
\begin{figure}
\caption{(a) Time evolution of $|\rho_{eg}|$ with and without control. $t$ is in the unit of $\frac{1}{\omega}$. The different curves correspond to $A_x=1.2,A_y=A_z=0$ (solid line) and $A_x=A_y=A_z=0$ (dashed line), for $g/\omega=1,\gamma/\omega=0.5$. The feedback control strategy evidently prolongs the decoherence time.
(b) Evolution of the excited-state population $\rho_{ee}$ with and without control for the same parameters as in (a); the decay of the excited-state population is slower in the controlled scheme.}
\label{FIG:rhovs}
\end{figure}
Now we add the feedback control $F$ to our system; the master equation then becomes Eq.(\ref{eqn:controlled}). Our system is initially in the state $\rho_0$, and the other parameters remain unchanged.
\begin{figure}
\caption{The evolution of the absolute value of the first qubit's off-diagonal element with different control parameters, for $g/\omega=1,\gamma/\omega=0.5$ and $t$ in the unit of $\frac{1}{\omega}$. (a) The $\sigma_x$ control for $A_x=0\thicksim\pi,A_y=A_z=0$. (b) The $\sigma_y$ control for $A_y=0\thicksim\pi,A_x=A_z=0$. (c) The $\sigma_z$ control for $A_z=0\thicksim\pi,A_x=A_y=0$. When the feedback amplitude is chosen to be about 1.3 or 1.9 for either the $\sigma_x$ or the $\sigma_y$ control, the oscillation of the off-diagonal element is remarkably enhanced. The $\sigma_z$ control has no effect in our model. }
\label{FIG:rho3}
\end{figure}
We first analyze the $\sigma_x$ control by choosing the feedback amplitude $A_x=0\thicksim\pi, A_y=A_z=0$. Note that when $A_y=A_z=0$, the feedback amplitude $A_x$ influences the system's evolution with a period of $\pi$, which comes from the term $F\sigma_2^-\rho\sigma_2^+F^{\dag}$ in Eq.(\ref{eqn:controlled}). It can be analytically proved that $e^{iA_x\sigma_x}\sigma^-\rho_2\sigma^+ e^{-iA_x\sigma_x}=e^{i(A_x+\pi) \sigma_x}\sigma^-\rho_2\sigma^+e^{-i(A_x+\pi)\sigma_x}$ and $e^{iA_y\sigma_y}\sigma^-\rho_2\sigma^+e^{-iA_y\sigma_y}= e^{i(A_y+\pi)\sigma_y}\sigma^-\rho_2\sigma^+e^{-i(A_y+\pi)\sigma_y}$ for any $A_x$ and $A_y$. Here $\rho_2$ is the reduced density matrix of the second qubit. The absolute value of the first qubit's off-diagonal density matrix element evolves as shown in Fig.\ref{FIG:rho3} (a).
The figure indicates that, for an appropriate feedback amplitude, $A_x\approx1.3$ or $A_x\approx1.9$, the absolute value of the off-diagonal element can be evidently enhanced compared with the uncontrolled case ($A_x=0$). That means the decoherence is partially suppressed. The improvement of coherence caused by the feedback is shown explicitly in Fig.\ref{FIG:rhovs} (a). We plot $|\rho_{eg}|$, representing the coherence of the first qubit, as a function of time with $A_x=1.2,A_y=A_z=0$ (a selected controlled case). In comparison with the uncontrolled case, a stronger oscillation amplitude and a longer decoherence time appear. Meanwhile, $\rho_{ee}$ decays more slowly than in the uncontrolled case, as shown in Fig.\ref{FIG:rhovs} (b). Similarly, the $\sigma_y$ control is also able to slow down the decay of $|\rho_{eg}|$. We set $A_y=0\thicksim\pi, A_x=A_z=0$. The numerical results for $|\rho_{eg}|$ are shown in Fig.\ref{FIG:rho3} (b). Unlike the $\sigma_x$ and $\sigma_y$ controls, the $\sigma_z$ control ($A_z=0\thicksim\pi, A_x=A_y=0$) has no effect on the evolution of the system, as shown in Fig.\ref{FIG:rho3} (c). This is because $e^{iA_z\sigma_z}\sigma_-\rho_2\sigma_+ e^{-iA_z\sigma_z}=\sigma_-\rho_2\sigma_+$ for any $A_z$. The physics behind this result is that after emitting a photon, the controlled qubit must be in the ground state, with the Bloch vector pointing to the bottom of the Bloch sphere, so a rotation around the $z$ axis does not change the Bloch vector, i.e., the state of the qubit remains unchanged. The present results show that the decoherence of the first qubit can be suppressed by controlling its partner. The decoherence source in our system is the spontaneous emission of the second qubit. Once the detector detects a photon, i.e., a quantum jump of the second qubit happens, the feedback instantaneously acts on the second qubit, and the first qubit is then affected through the coupling between the two qubits.
The feedback control scheme can reduce the destructive effects of decoherence and slow down the dissipation of energy. The control effect depends on the coupling strength $g$. When $g$ is small, the first qubit is only weakly affected by the second qubit, so it is hard to prepare, measure and control the state of the first qubit. As the interaction becomes stronger, the effect of the feedback control becomes more evident. For the case discussed in Fig.\ref{FIG:rhovs}, the first qubit is dissipative. We found that when the control parameters are chosen as $A_x=\frac{\pi}{2},A_y=A_z=0$, or $A_y=\frac{\pi}{2},A_x=A_z=0$, with the two qubits initially prepared in the same state, the decoherence dynamics turns into a phase-damping type. The populations in the ground and excited states do not change, while the off-diagonal elements evolve in the same way as in the uncontrolled case. We show this on a Bloch sphere \cite{Altafini} in Fig.\ref{FIG:bloch}. Here the reduced density matrix of the first qubit can be written as $\rho_1=\frac{1}{2}(I+\vec{P}\cdot\vec{\sigma})$. We can get the polarization vector components $P_x=\text{Tr}(\sigma_x\rho_1)$, $P_y=\text{Tr}(\sigma_y\rho_1)$ and $P_z=\text{Tr}(\sigma_z\rho_1)$.
\begin{figure}\label{FIG:bloch} \end{figure}
\section{Entanglement control}
\begin{figure}
\caption{(a) Concurrence as a function of time and $A_y$. The system is initially in the state $|\psi\rangle=|g\rangle_1|e\rangle_2$, for the parameters $g/\omega=1,\gamma/\omega=0.5$. (b) A controlled evolution for $A_y=0.5\pi,A_x=A_z=0$ vs. the uncontrolled case. The entanglement is improved by choosing an appropriate feedback. $t$ is in the unit of $\frac{1}{\omega}$ for (a) and (b).}
\label{FIG:cge}
\end{figure}
Quantum feedback control has recently been used to improve the creation of steady-state entanglement in open quantum systems.
Highly entangled states of two qubits in a cavity can be produced with an appropriate selection of the feedback Hamiltonian and detection strategy \cite{Carvalho,Carvalho2}. We will show that the quantum-jump-based feedback scheme can produce and improve entanglement in our model. We choose the concurrence \cite{Wootters} as a measure of entanglement. For a mixed state represented by the density matrix $\rho$, the ``spin-flipped'' density operator reads
\begin{eqnarray}
\tilde{\rho}=(\sigma_y\otimes\sigma_y)\rho^*(\sigma_y\otimes\sigma_y),
\end{eqnarray}
where the $*$ denotes the complex conjugate of $\rho$ in the basis $\{|gg\rangle, |ge\rangle, |eg\rangle, |ee\rangle\}$, and $\sigma_y$ is the usual Pauli matrix. The concurrence of the density matrix $\rho$ is defined as
\begin{eqnarray}
C(\rho)=\max{(\sqrt{\lambda_1}-\sqrt{\lambda_2} -\sqrt{\lambda_3}-\sqrt{\lambda_4},0)},
\end{eqnarray}
where the $\lambda_i$ are the eigenvalues of the matrix $\rho\tilde{\rho}$, sorted in decreasing order $\lambda_1\geq\lambda_2\geq\lambda_3\geq\lambda_4$. The concurrence ranges from 0 to 1, and $C=1$ represents maximal entanglement. In the absence of spontaneous emission, i.e., $\gamma=0$, the system evolves without dissipation. We find that for the system initially in any separable state except $|\psi\rangle=|e\rangle_1|e\rangle_2$ or $|\psi\rangle=|g\rangle_1|g\rangle_2$ (eigenstates of the system Hamiltonian $H$), an entangled state can be generated due to the interaction between the two qubits. The amount of entanglement depends on the initial state of the system and the coupling strength $g$. But when the spontaneous emission is taken into account, the performance of entanglement preparation degrades considerably.
\begin{figure}
\caption{ (a) Concurrence as a function of time and $A_y$. The system is initially in the state $|\psi\rangle=|e\rangle_1|e\rangle_2$, for the parameters $g/\omega=1,\gamma/\omega=0.5$. (b) The controlled concurrence evolution for $A_y=1.2,A_x=A_z=0$ vs.
the uncontrolled case. $t$ is in the unit of $\frac{1}{\omega}$ for (a) and (b).}
\label{FIG:cee}
\end{figure}
Now we investigate whether our feedback control strategy can improve the entanglement preparation in the presence of the spontaneous emission of the second qubit. The master equation with control is Eq.(\ref{eqn:controlled}). The effect of the feedback control depends on the choice of the feedback parameters $A_x, A_y, A_z$, the coupling strength $g$, and the initial state. Here we present two typical results for two different initial states. Our first choice is the initial state $|\psi\rangle=|g\rangle_1|e\rangle_2$ with $\sigma_y$ control for $A_y=0\thicksim\pi, A_x=0, A_z=0$. The concurrence evolution is plotted as a function of time and feedback amplitude $A_y$ in Fig.\ref{FIG:cge} (a), and Fig.\ref{FIG:cge} (b) shows the concurrence evolution for a selected feedback amplitude compared with the uncontrolled case. We see that entangled states can be generated for any feedback parameters, but the entanglement decreases with time because of the dissipative effect. When an appropriate feedback amplitude $A_y\approx0.9$ is chosen, the concurrence amplitude is remarkably enhanced, and the entanglement lasts for a long time. For the system initially in the state $|\psi\rangle=|e\rangle_1|e\rangle_2$ with $\sigma_y$ control, the dynamics of the concurrence is shown in Fig.\ref{FIG:cee} (a). Note that in the absence of the spontaneous emission this would be a steady state of the system: the density matrix elements would not change with time. Fig.\ref{FIG:cee} (a) demonstrates that the dissipation and feedback together can produce entanglement. We show this explicitly in Fig.\ref{FIG:cee} (b) by choosing the feedback amplitude $A_y=1.2$. We can see that for a proper feedback amplitude, after an entanglement sudden death, a larger amount of entanglement is regenerated. The above results show that the feedback control strategy can be used to prepare and protect entanglement in our model.
The effect of the entanglement control strongly depends on the initial state. For a given initial state, we found that the $\sigma_x$ and $\sigma_y$ controls have similar effects, but the $\sigma_z$ control does not work.
\section{Conclusion and remarks}
In this paper, we studied the effect of quantum-jump-based feedback control on a system consisting of two qubits, where only one of them is subject to decoherence. By numerical simulation, we found that it is possible to suppress the decoherence of the first qubit by a local control on the second qubit. We observed that the decoherence time of the first qubit is increased remarkably. The control scheme can also be used to protect the entanglement between the two qubits. These features can be understood as follows: the feedback control changes the dissipative dynamics of the system through the quantum-jump operators. We would like to note that the Hamiltonian Eq.(\ref{eqn:systemhamiltonian}) does not describe the hyperfine interaction. However, with current technology we can simulate the Hamiltonian Eq.(\ref{eqn:systemhamiltonian}) in nuclear-electron spin systems; in this sense, the scheme presented here is applicable to nuclear-electron spin systems. On the other hand, using the hyperfine interaction Hamiltonian, our further simulations show that we can obtain results similar to those for the Hamiltonian Eq.(\ref{eqn:systemhamiltonian}).
\begin{references}
\bibitem{Roos} C. F. Roos, G. P. T. Lancaster, M. Riebe, H. H\"{a}ffner, W. H\"{a}nsel, S. Gulde, C. Becher, J. Eschner, F. Schmidt-Kaler, and R. Blatt, Phys. Rev. Lett. \textbf{92}, 220402 (2004).
\bibitem{Puppe} T. Puppe, I. Schuster, A. Grothe, A. Kubanek, K. Murr, P. W. H. Pinkse, and G. Rempe, Phys. Rev. Lett. \textbf{99}, 013002 (2007).
\bibitem{Wiseman} H. M. Wiseman and G. J. Milburn, Phys. Rev. Lett. \textbf{70}, 548 (1993).
\bibitem{Wiseman2} H. M. Wiseman, Phys. Rev. A \textbf{49}, 2133 (1994).
\bibitem{Wang} J. Wang, H. M. Wiseman, and G. J. Milburn, Phys. Rev.
A \textbf{71}, 042309 (2005).
\bibitem{Carvalho} A. R. R. Carvalho and J. J. Hope, Phys. Rev. A \textbf{76}, 010301 (2007).
\bibitem{Viola} L. Viola and S. Lloyd, Phys. Rev. A \textbf{58}, 2733 (1998).
\bibitem{Katz} G. Katz, M. A. Ratner, and R. Kosloff, Phys. Rev. Lett. \textbf{98}, 203006 (2007).
\bibitem{Ganesan} N. Ganesan and T. Tarn, Phys. Rev. A \textbf{75}, 032323 (2007).
\bibitem{Zhang} J. Zhang, C. Li, R. Wu, T. Tarn, and X. Liu, J. Phys. A \textbf{38}, 6587-6601 (2005).
\bibitem{Bertet} P. Bertet, I. Chiorescu, G. Burkard, K. Semba, C. J. P. M. Harmans, D. P. DiVincenzo, and J. E. Mooij, Phys. Rev. Lett. \textbf{95}, 257002 (2005).
\bibitem{Hayashi} T. Hayashi, T. Fujisawa, H. D. Cheong, Y. H. Jeong, and Y. Hirayama, Phys. Rev. Lett. \textbf{91}, 226804 (2003).
\bibitem{Rebentrost} P. Rebentrost, I. Serban, T. Schulte-Herbr\"{u}ggen, and F. K. Wilhelm, Phys. Rev. Lett. \textbf{102}, 090401 (2009).
\bibitem{Vandersypen} L. Vandersypen and I. Chuang, Rev. Mod. Phys. \textbf{76}, 1037 (2004).
\bibitem{quantumoptics} M. O. Scully and M. S. Zubairy, {\it Quantum Optics} (Cambridge University Press, Cambridge, 1997).
\bibitem{quantumnoise} C. W. Gardiner and P. Zoller, {\it Quantum Noise} (Springer-Verlag, Berlin, 1991).
\bibitem{Wiseman3} H. M. Wiseman, S. Mancini, and J. Wang, Phys. Rev. A \textbf{66}, 013807 (2002).
\bibitem{Altafini} C. Altafini, J. Math. Phys. \textbf{44}, 2357 (2003).
\bibitem{Carvalho2} A. R. R. Carvalho, A. J. S. Reid, and J. J. Hope, Phys. Rev. A \textbf{78}, 012334 (2008).
\bibitem{Wootters} W. K. Wootters, Phys. Rev. Lett. \textbf{80}, 2245 (1998).
\end{references}
\end{document}
Decay Formula A quantity is subject to exponential decay if it decreases at a rate proportional to its current value. The decay law gives the number of undecayed nuclei remaining in a given radioactive substance. Decay Formula – Formula for Half-Life in Exponential Decay – \[\large N(t)=N_{0}\left ( \frac{1}{2} \right )^{\frac{t}{t_{1/2}}}\] \[\large N(t)=N_{0}e^{-\frac{t}{r}}\] \[\large N(t)=N_{0}e^{-\lambda t}\] Here $N_{0}$ is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.), $N(t)$ is the quantity that still remains and has not yet decayed after a time $t$, $t_{1/2}$ is the half-life of the decaying quantity, $r$ is a positive number called the mean lifetime of the decaying quantity, and $\lambda$ is a positive number called the decay constant of the decaying quantity. The three parameters $t_{1/2}$, $r$, and $\lambda$ are all directly related: \[\large t_{1/2}=\frac{\ln (2)}{\lambda}= r \ln(2)\] The decay constant $\lambda$ gives the fraction of the radioactive atoms present that decay per unit time, so \[\LARGE \lambda=\frac{\ln(2)}{t_{1/2}}\approx\frac{0.693}{t_{1/2}}\] The decay law is used to find the decay rate of a radioactive element. Here are the half-lives of a few short-lived radioactive isotopes, in units of $10^{-24}$ seconds: Hydrogen-7: 23; Hydrogen-4: 139; Hydrogen-10: 200; Lithium-5: 324; Boron-7: 350; Helium-5: 760.
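As a quick sanity check, the three equivalent forms of the decay law can be evaluated numerically. This is a minimal sketch of our own; the function name `remaining` is illustrative, not from any particular library:

```python
import math

def remaining(n0, t, half_life):
    """Quantity left after time t, using N(t) = N0 * exp(-lambda * t)."""
    lam = math.log(2) / half_life  # decay constant from the half-life
    return n0 * math.exp(-lam * t)

# the three parametrizations (half-life, mean lifetime r, decay constant) agree
n0, t, t_half = 100.0, 25.0, 10.0
r = t_half / math.log(2)                     # mean lifetime
via_half_life = n0 * 0.5 ** (t / t_half)     # N0 * (1/2)^(t / t_half)
via_mean_life = n0 * math.exp(-t / r)        # N0 * exp(-t / r)
via_lambda = remaining(n0, t, t_half)        # N0 * exp(-lambda * t)
```

After exactly one half-life, `remaining(100, 10, 10)` returns 50, and the three expressions above coincide to within rounding.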
Abstract: Here we review some results of J.-H. Lee on the $N\times N$ Zakharov–Shabat system with a polynomial spectral parameter. We define a scattering transform following the set-up of Beals–Coifman. In the $2 \times 2$ cases, we modify the Kaup–Newell and Kuznetsov–Mikhailov systems to ensure normalization with respect to the spectral parameter. We are then able to apply the technique of Zakharov–Shabat for the solitons of NLS to our cases. We obtain the long-time behavior of the equations, which can be transformed into DNLS and MTM in laboratory coordinates, respectively.
\begin{document} \begin{abstract} We investigate composition-differentiation operators acting on the Dirichlet space of the unit disk. Specifically, we determine characterizations for bounded, compact, and Hilbert-Schmidt composition-differentiation operators. In addition, for particular classes of inducing maps, we derive an adjoint formula, compute the norm, and identify the spectrum. \end{abstract} \title{Composition-Differentiation Operators on the Dirichlet Space} \section{Introduction} For a space $\mathcal{X}$ of functions analytic on the unit disk and for a self-map $\varphi$ of the unit disk, we define the composition operator $C_\varphi$ on $\mathcal{X}$ by $C_\varphi(f)=f\circ\varphi$. The study of such operators has been ongoing for fifty years. Recently, a new variation of this operator has come under investigation. Defining $D$ to be the operator of differentiation, several researchers have undertaken a study of the two operators $DC_\varphi$ and $C_\varphi D$ and we point to \cite{FatehiHammond:2020}, \cite{HibschweilerPortnoy:2005}, \cite{Ohno:2006}, and \cite{Ohno:2009}. These studies have focused on the Hardy, Bergman, and Bloch spaces. Our goal is to extend parts of this study to the Dirichlet space of the unit disk. On many of the spaces mentioned above, the operator $D$ is unbounded and this behavior persists on the Dirichlet space. As a result of this, we adopt the convention from \cite{FatehiHammond:2020} and define the composition-differentiation operator $D_\varphi(f)=C_\varphi D(f)=f'\circ\varphi.$ Throughout the sections that follow, we will examine when these operators are bounded, compact, and Hilbert-Schmidt on the Dirichlet space. We then restrict our attention to two specific classes of symbols. For linear-fractional maps with $\|\varphi\|_{\infty}<1$, we examine the adjoint of $D_\varphi$. For maps induced by monomials, $\varphi(z)=az^M$ where $|a|<1$ and $M\in\mathbb{N}$, we determine the norm and spectrum of $D_\varphi$. 
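The distinction between $DC_\varphi$ and $C_\varphi D$ can be made concrete numerically. The following sketch is purely illustrative (the choices $\varphi(z)=z/2$ and $f(z)=z^3$ are ours, not from the literature cited above):

```python
# C_phi D differentiates first, then composes -- this is the operator D_phi;
# D C_phi composes first, then differentiates, which picks up a chain-rule factor.
phi = lambda z: z / 2          # illustrative self-map of the disk
f = lambda z: z ** 3           # illustrative analytic function
f_prime = lambda z: 3 * z ** 2

D_phi_f = lambda z: f_prime(phi(z))              # (C_phi D f)(z) = f'(phi(z))
DC_phi_f = lambda z: f_prime(phi(z)) * (1 / 2)   # (D C_phi f)(z) = f'(phi(z)) * phi'(z)
```

At $z=1$ the two operators disagree: $(C_\varphi D f)(1)=3/4$ while $(DC_\varphi f)(1)=3/8$, so the order of composition matters.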
We also point to instances where our methods will extend to other spaces of interest. In \cite{Ohno:2014}, the author studies composition operators from the Hardy space into the Dirichlet space and when such operators are bounded, compact, and Hilbert-Schmidt. One byproduct of our results extends this work to composition operators from the Bergman space into the Dirichlet space. \section{Preliminaries} Let $\mathbb{D}=\{z\in\mathbb{C}:|z|<1\}$ be the open unit disk in the complex plane and $H(\mathbb{D})$ be the set of all functions analytic on $\mathbb{D}$. A function $f\in H(\mathbb{D})$ is in the Dirichlet space $\mathcal{D}$ provided $f$ has a finite Dirichlet integral, i.e. \[\int_{\mathbb{D}}|f'(z)|^2\,dA(z)<\infty,\] where $dA$ is normalized Lebesgue area measure. Occasionally we will use the symbol $\|f\|_{\mathcal{D}_0}$ to denote the Dirichlet integral of $f$. If $f(z)=\sum_{n=0}^{\infty} a_nz^n$ is the power series representation for $f$, then \[\begin{aligned}\|f\|_{\mathcal{D}}^2&=|f(0)|^2+\int_\mathbb{D}|f'(z)|^2\,dA(z)\\ &=|a_0|^2+\sum_{n=1}^{\infty}n|a_n|^2.\\ \end{aligned}\] From this it is clear that we can view the Dirichlet space as a weighted Hardy space given by the weight sequence $\beta(0)=1$ and $\beta(n)=n^{1/2}$ for $n\geq 1$; see \cite[Chapter 2]{CowenMacCluer:1995} for more details. The generating function for this weight sequence is given by \[k(z)=\sum_{n=0}^{\infty}\frac{z^n}{\beta(n)^2}=1+\log\frac{1}{1-z}\] and hence the reproducing kernel function for the Dirichlet space (with norm and inner product induced by the given weight sequence) is \[K_w(z)=k(\overline{w}z)=1+\log\frac{1}{1-\overline{w}z}.\] With this, for each $f\in\mathcal{D}$, we have \[f(w)=\langle f,K_w\rangle_{\mathcal{D}},\] where \[\begin{aligned}\langle f,g\rangle_{\mathcal{D}}&=f(0)\overline{g(0)}+\int_{\mathbb{D}}f'(z)\overline{g'(z)}\,dA(z)\\ &=a_0\overline{b_0}+\sum_{n=1}^{\infty}na_n\overline{b_n}. 
\end{aligned}\] Furthermore, the kernel function for evaluation of the first derivative is given by \[K_w^{(1)}(z)=\frac{d}{d\overline{w}}k(\overline{w}z)=\frac{z}{1-\overline{w}z}\] and \[f'(w)=\langle f,K_w^{(1)}\rangle_{\mathcal{D}} \] for each $f\in \mathcal{D}$. Also recall the Bergman space of the disk $A^2$ is defined to be those functions analytic in the disk with $$\int_\mathbb{D}|f(z)|^2\,dA(z)<\infty.$$ Thus $f\in\mathcal{D}$ if and only if $f'\in A^2$. The Bergman space is a functional Hilbert space and thus point evaluations are bounded (\cite[Chapter 2]{CowenMacCluer:1995}). As we study composition-differentiation operators on $\mathcal{D}$, we will need to employ a specific counting function. If $\varphi$ is a self-map of $\mathbb{D}$, for $w\in\mathbb{D}$, we set $n_{\varphi}(w)$ to be the cardinality of the set $\{\varphi^{-1}(w)\}$; we note that $n_{\varphi}(w)=0$ for $w\not\in\varphi(\mathbb{D})$. By making a change of variables and using the fact that $D_\varphi(f)=f'\circ\varphi$ we see that \begin{equation}\label{eqn:countingfunctioninnorm}\begin{aligned}\|D_\varphi f\|_{\mathcal{D}}^2&=|f'(\varphi(0))|^2+\int_\mathbb{D}|(f'\circ\varphi)'(z)|^2\,dA(z)\\ &=|f'(\varphi(0))|^2+\int_\mathbb{D}|f''(\varphi(z))|^2|\varphi'(z)|^2\,dA(z)\\ &=|f'(\varphi(0))|^2+\int_\mathbb{D}|f''(w)|^2n_{\varphi}(w)\,dA(w).\\ \end{aligned}\end{equation} To characterize bounded and compact composition-differentiation operators, we will employ Carleson measures. For $0\leq\theta<2\pi$ and $0<h<1$, define the standard Carleson set $S(\theta,h)$ by \[S(\theta,h)=\{z\in\mathbb{D}: |z-e^{i\theta}|<h\}.\] Our results center around these sets, but one can verify equivalent results using Carleson rectangles or pseudohyperbolic disks.
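The power-series formula for the Dirichlet norm can be sanity-checked numerically against the area integral. The sketch below (our own illustration, using a plain midpoint rule in polar coordinates with the normalized area measure) is not part of the paper's argument:

```python
import math

def dirichlet_norm_sq(coeffs):
    # ||f||_D^2 = |a_0|^2 + sum_{n>=1} n |a_n|^2 for f(z) = sum a_n z^n
    return abs(coeffs[0]) ** 2 + sum(n * abs(a) ** 2 for n, a in enumerate(coeffs) if n >= 1)

def dirichlet_integral(coeffs, nr=300, nt=300):
    # (1/pi) * integral over the disk of |f'(z)|^2, midpoint rule in polar coordinates
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) / nr
        for j in range(nt):
            theta = 2 * math.pi * (j + 0.5) / nt
            z = complex(r * math.cos(theta), r * math.sin(theta))
            fp = sum(n * a * z ** (n - 1) for n, a in enumerate(coeffs) if n >= 1)
            total += abs(fp) ** 2 * r
    return total * (2 * math.pi / (nr * nt)) / math.pi

coeffs = [1.0, 0.5, -0.25]  # f(z) = 1 + z/2 - z^2/4
```

For this $f$, the series gives $|a_0|^2+\sum n|a_n|^2 = 1 + 0.25 + 2(0.0625) = 1.375$, and the quadrature of $|f'|^2$ reproduces the tail $0.375$ to within the discretization error.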
For $a\in\mathbb{D}$, recall that the involution automorphism $\varphi_a$ given by \[\varphi_a(z)=\frac{a-z}{1-\overline{a}z}\] satisfies \[|\varphi_a'(z)|=\frac{1-|a|^2}{|1-\overline{a}z|^2}.\] These maps are bijections of the disk onto itself and are self-inverses. More generally, we will consider linear fractional self-maps of the disk, i.e. maps of the form \begin{equation}\label{eqn:lfphi}\varphi(z)=\frac{az+b}{cz+d}\end{equation} that map the disk into itself. We point to \cite[Chapter 0]{Shapiro:1991} for more details. We will only be interested in non-constant maps of the disk into itself, and for maps of the form above, it is necessary that $|cz+d|$ be bounded away from zero throughout the disk. Additionally, we define the companion map \begin{equation}\label{eqn:lfsigma}\sigma_{\varphi}(z)=\sigma(z)=\frac{\overline{a}z-\overline{c}}{-\overline{b}z+\overline{d}}.\end{equation} It is simple to show that $\sigma$ is also a self-map of the disk whenever $\varphi$ is. This companion map occurs frequently when considering the adjoint of a composition operator with linear fractional symbol acting on a variety of spaces, and the interested reader is encouraged to consider \cite{Cowen:1988,Hurst:1997,GallardoRodriguez:2003,Heller:2012}. Finally, for two quantities $A(x)$ and $B(x)$, we say that $A(x)\cong B(x)$ to mean there exist positive constants $C_1$ and $C_2$ such that $C_1 A(x)\leq B(x)\leq C_2A(x)$ for all $x$ in some specified range. \section{Bounded Composition-Differentiation Operators on $\mathcal{D}$} We begin with a result that is often referenced without proof but we include a proof for the sake of completeness. The result for $T=C_\varphi$ appears as Exercise 1.1.1 in \cite{CowenMacCluer:1995}. \begin{proposition}\label{prop:boundedinto} Suppose $\mathcal{X}$ and $\mathcal{Y}$ are functional Banach spaces of analytic functions defined on $\mathbb{D}$. Let $\varphi$ be a self-map of $\mathbb{D}$ and let $T$ be either $C_\varphi$ or $D_\varphi$.
Then $T$ is bounded from $\mathcal{X}$ to $\mathcal{Y}$ if and only if $T$ maps $\mathcal{X}$ into $\mathcal{Y}$. \end{proposition} \begin{proof} The necessity is clear and we will verify sufficiency for $D_\varphi$. Suppose $D_\varphi$ maps $\mathcal{X}$ into $\mathcal{Y}$. To conclude that $D_\varphi$ is bounded, we will appeal to the Closed Graph Theorem. Let $(f_n)$ be a sequence in $\mathcal{X}$ converging in norm to $f\in\mathcal{X}$. Also suppose $(D_\varphi f_n)$ converges in norm to $g\in\mathcal{Y}$. For $x\in \mathbb{D}$, it is clear that \[(D_\varphi f_n)(x)=f_n'(\varphi(x))=K_{\varphi(x)}^{(1)}(f_n).\] Also, our assumptions on $(f_n)$ combined with the continuity of the evaluation functional guarantee that $\left(K_{\varphi(x)}^{(1)}(f_n)\right)$ converges to $K_{\varphi(x)}^{(1)}(f)$. Thus, for each $x\in \mathbb{D}$, we have that $((D_\varphi f_n)(x))$ converges to \[K_{\varphi(x)}^{(1)}(f)=f'(\varphi(x))=(D_\varphi f)(x).\] On the other hand, in a functional Banach space, norm convergence implies point-wise convergence and so, for $x\in\mathbb{D}$, we see that $((D_\varphi f_n)(x))$ converges to $g(x)$ and thus $(D_\varphi f)(x)=g(x)$ for every $x\in \mathbb{D}$ or $D_\varphi f=g$. Therefore we conclude that $D_\varphi$ is bounded by the Closed Graph Theorem. \end{proof} \begin{theorem} Let $\varphi$ be an analytic self-map of $\mathbb{D}$ and set $d\mu=n_{\varphi}dA$. The following are equivalent. \begin{enumerate} \item $D_\varphi:\mathcal{D}\to \mathcal{D}$ is bounded. \item $C_\varphi:A^2\to\mathcal{D}$ is bounded. \item There is a constant $C_1>0$ such that $\mu(S(\theta,h))\leq C_1h^4$ for all $0\leq\theta<2\pi$ and $0<h<1$. \item There is a constant $C_2>0$ such that \[\int_{\mathbb{D}}|\varphi_a'(w)|^4\,d\mu(w)\leq C_2\] for all $a\in\mathbb{D}$. 
\end{enumerate} \end{theorem} \begin{proof} The equivalence $\text{(i)} \Longleftrightarrow \text{(ii)}$ follows from Proposition \ref{prop:boundedinto} combined with the fact that $f\in\mathcal{D}$ if and only if $f'\in A^2$ and the observation that $D_\varphi(f)=f'\circ\varphi=C_\varphi(f')$. Next we verify the equivalence between (ii) and (iii). Let $f\in A^2$. Then, making a change of variables, we have \[\begin{aligned}\|C_\varphi f\|_{\mathcal{D}}^2&=|f(\varphi(0))|^2+\int_\mathbb{D}|(f\circ\varphi)'(z)|^2\,dA(z)\\ &=|f(\varphi(0))|^2+\int_\mathbb{D}|f'(\varphi(z))|^2|\varphi'(z)|^2\,dA(z)\\ &=|f(\varphi(0))|^2+\int_\mathbb{D}|f'(w)|^2n_{\varphi}(w)\,dA(w)\\ &=|f(\varphi(0))|^2+\int_\mathbb{D}|f'(w)|^2\,d\mu(w). \end{aligned}\] As point evaluations are bounded on $A^2$, it follows that $C_\varphi:A^2\to \mathcal{D}$ is bounded if and only if there is a constant $C>0$ such that \[\int_\mathbb{D}|f'(w)|^2\,d\mu(w)\leq C\|f\|_{A^2}^2.\] The desired result then follows from \cite[Theorem 2.2]{Luecking:1985} (and the remarks following the theorem) with $n=1$, $q=2$, $p=2$, and $\alpha=0$. The equivalence between (iii) and (iv) is found in \cite[Theorem 13]{ArazyFisherPeetre:1985}. \end{proof} For $\mu$ as defined in the theorem, we note that the integral in condition (iv) of the theorem can also be written as \begin{equation}\label{eqn:involutionintegral}\int_{\mathbb{D}}|\varphi_a'(w)|^4\,d\mu(w)=\int_{\mathbb{D}}\frac{|\varphi'(z)|^2}{(1-|\varphi(z)|^2)^4}\left(\frac{(1-|a|^2)(1-|\varphi(z)|^2)}{|1-\overline{a}\varphi(z)|^2}\right)^4\,dA(z)\end{equation} by reversing the change of variables. \begin{example}\label{exam:bounded} An obvious consequence of the previous theorem is that the identity map $\varphi(z)=z$ does not induce a bounded composition-differentiation operator on $\mathcal{D}$, nor does any automorphism of the disk. This is also true for any linear fractional self-map of the disk with $\|\varphi\|_{\infty}=1$.
However, a linear fractional map with $\|\varphi\|_{\infty}<1$ will induce a bounded composition-differentiation operator. The following example can be found in \cite{JovovicMacCluer:1997}. Let $\Gamma_1$ be a simply connected region in $\mathbb{D}$ touching $\partial\mathbb{D}$ only at 1 and such that the boundary curve in a neighborhood of 1 is a piece of the curve (in rectangular coordinates) $x^{1/2}+y^{1/2}=1$. Then let $\varphi$ be a univalent mapping of $\mathbb{D}$ onto $\Gamma_1$. In \cite{JovovicMacCluer:1997}, the authors show that such a $\varphi$ will induce a bounded and, in fact, compact composition operator on $\mathcal{D}$. However, the corresponding operator $D_\varphi$ is unbounded on $\mathcal{D}$. This can be verified by showing that $\mu(S(\theta,h))/h^4$ is unbounded as $h\rightarrow 0$ for $\theta=0$. We have \[\begin{aligned}\lim_{h\rightarrow 0}\frac{1}{h^4}\int_{S(0,h)}n_{\varphi}(w)\,dA(w)&=\lim_{h\rightarrow 0}\frac{1}{h^4}A(\Gamma_1\cap S(0,h))\\ &\cong\lim_{h\rightarrow 0}\frac{1}{h^4}\int_{1-h}^1\left(1-x^{1/2}\right)^2\,dx\\ &=\lim_{h\rightarrow 0}\frac{2h-h^2/2+(4/3)(1-h)^{3/2}-4/3}{h^4}\\ &=\infty.\end{aligned}\] We can adjust this example to produce a $\varphi$ such that $D_\varphi$ is bounded on $\mathcal{D}$. Here we take $\Gamma_2$ to be defined similarly to $\Gamma_1$ but we modify so that the boundary curve in a neighborhood of 1 is a piece of the curve $x^{1/3}+y^{1/3}=1.$ Then we take $\varphi$ to be a univalent map of $\mathbb{D}$ onto $\Gamma_2$. To show that $D_\varphi$ is bounded, it is sufficient to check that $\mu(S(0,h))/h^4$ is bounded for all $0<h<1$ since $\Gamma_2$ is contained in a non-tangential approach region at 1; see, for example, \cite[Lemma 6.2]{CowenMacCluer:1995}.
We have \[\begin{aligned}\frac{1}{h^4}\int_{S(0,h)}n_{\varphi}(w)\,dA(w)&=\frac{1}{h^4}A(\Gamma_2\cap S(0,h))\\ &\cong\frac{1}{h^4}\int_{1-h}^1\left(1-x^{1/3}\right)^3\,dx\\ &=\frac{h^2/2+(9/4)(1-h)^{4/3}-(9/5)(1-h)^{5/3}-9/20}{h^4}.\end{aligned}\] This last function is increasing on $(0,1)$ and so it is bounded above by the value at $h=1$, which is $1/20$. \end{example} \section{Compact and Hilbert-Schmidt Composition-Differentiation Operators on $\mathcal{D}$} To characterize the compact composition-differentiation operators, we first have a familiar reformulation of compactness. \begin{proposition}[{\cite[Lemma 3.7]{Tjani:2003}}]\label{prop:compactreformulation} Let $\varphi$ be a self-map of $\mathbb{D}$. Then $C_\varphi:A^2\to \mathcal{D}$ is compact if and only if whenever $(f_n)$ is a bounded sequence in $A^2$, with $(f_n)\to 0$ uniformly on compact subsets of $\mathbb{D}$, it follows that $(\|C_\varphi f_n\|_{\mathcal{D}})\to 0$. Similarly, $D_\varphi:\mathcal{D}\to\mathcal{D}$ is compact if and only if whenever $(f_n)$ is a bounded sequence in $\mathcal{D}$ with $(f_n)\to 0$ uniformly on compact subsets of $\mathbb{D}$, it follows that $(\|D_\varphi f_n\|_{\mathcal{D}})\to 0$. \end{proposition} \noindent As with boundedness, compactness of $D_\varphi:\mathcal{D}\to \mathcal{D}$ is connected to the compactness of $C_\varphi:A^2\to\mathcal{D}$. \begin{lemma} Let $\varphi$ be a self-map of $\mathbb{D}$. Then $D_\varphi:\mathcal{D}\to \mathcal{D}$ is compact if and only if $C_\varphi:A^2\to\mathcal{D}$ is compact. \end{lemma} \begin{proof} Suppose $D_\varphi:\mathcal{D}\to \mathcal{D}$ is compact. To show $C_\varphi:A^2\to\mathcal{D}$ is compact, we will appeal to Proposition \ref{prop:compactreformulation}. To that end, let $(f_n)$ be a bounded sequence in $A^2$ such that $(f_n)\to 0$ uniformly on compact subsets of $\mathbb{D}$. For $n\in \mathbb{N}$, let $g_n\in \mathcal{D}$ with $g_n'=f_n$, i.e.
take \[g_n(z)=\int_0^z f_n(w)\,dw.\] Then $(g_n)$ is a bounded sequence in $\mathcal{D}$ since $\|g_n\|_{\mathcal{D}}^2=|g_n(0)|^2+\|g_n'\|_{A^2}^2=\|f_n\|_{A^2}^2$. The hypothesis on $(f_n)$ guarantees that $(g_n)\to 0$ uniformly on compact subsets of $\mathbb{D}$. Thus $(\|D_\varphi g_n\|_{\mathcal{D}})\to 0$ by Proposition \ref{prop:compactreformulation}. Also, we have \[\|C_\varphi f_n\|_{\mathcal{D}}=\|C_\varphi g_n'\|_{\mathcal{D}}=\|g_n'\circ\varphi\|_{\mathcal{D}}=\|D_\varphi g_n\|_{\mathcal{D}}\] and thus $(\|C_\varphi f_n\|_{\mathcal{D}})\to 0$. Thus $C_\varphi:A^2\to \mathcal{D}$ is compact by Proposition \ref{prop:compactreformulation}. Conversely, suppose $C_\varphi:A^2\to \mathcal{D}$ is compact. To show $D_\varphi:\mathcal{D}\to \mathcal{D}$ is compact, let $(g_n)$ be a bounded sequence in $\mathcal{D}$ such that $(g_n)\to 0$ uniformly on compact subsets of $\mathbb{D}$. Then $(g_n')$ is a bounded sequence in $A^2$ since $\|g_n'\|_{A^2}=\|g_n\|_{\mathcal{D}_0}\leq \|g_n\|_{\mathcal{D}}$. Moreover $(g_n')\to 0$ uniformly on compact subsets of $\mathbb{D}$; see \cite[Theorem V.1.6]{Lang:1993}. By our hypothesis and Proposition \ref{prop:compactreformulation} we have $(\|C_\varphi g_n'\|_{\mathcal{D}})\to 0$. But \[\|D_\varphi g_n\|_{\mathcal{D}}=\|g_n'\circ \varphi\|_{\mathcal{D}}=\|C_\varphi g_n'\|_{\mathcal{D}}\] and hence $(\|D_\varphi g_n\|_{\mathcal{D}})\to 0$. Thus $D_\varphi:\mathcal{D}\to \mathcal{D}$ is compact. \end{proof} The lemma leads to the following characterization of compactness for $D_\varphi:\mathcal{D}\to\mathcal{D}$. \begin{theorem}\label{thm:compactness} Let $\varphi$ be a self-map of $\mathbb{D}$ so that $D_\varphi:\mathcal{D}\to \mathcal{D}$ is bounded and set $d\mu=n_{\varphi}dA$. The following are equivalent. \begin{enumerate} \item $D_\varphi:\mathcal{D}\to \mathcal{D}$ is compact. \item $C_\varphi:A^2\to\mathcal{D}$ is compact. \item $\displaystyle \lim_{h\to 0} \sup_{\theta\in[0,2\pi)}\frac{\mu(S(\theta,h))}{h^4}=0$.
\item $\displaystyle \lim_{|a|\to 1}\int_{\mathbb{D}}|\varphi_a'(w)|^4\,d\mu(w)=0$. \end{enumerate} \end{theorem} \begin{proof} By the lemma we see that (i) and (ii) are equivalent and we will show $\text{(ii)}\Longrightarrow\text{(iv)}\Longrightarrow\text{(iii)}\Longrightarrow\text{(ii)}$. Suppose $C_\varphi:A^2\to \mathcal{D}$ is compact. For $a\in\mathbb{D}$ let $k_a(z)=(1-|a|^2)/(1-\overline{a}z)^2$. Then $k_a\in A^2$ with $\|k_a\|_{A^2}=1$ and $k_a$ converges weakly to 0 as $|a|\to 1$ \cite[Theorem 2.17]{CowenMacCluer:1995}. Hence $(\|C_\varphi k_a\|_{\mathcal{D}})\to 0$ as $|a|\to 1$ since $C_\varphi:A^2\to \mathcal{D}$ is compact. Then we have \[\begin{aligned}\|C_\varphi k_a\|_{\mathcal{D}}^2\geq \|(k_a\circ \varphi)'\|_{A^2}^2&=\int_{\mathbb{D}}\frac{4|a|^2(1-|a|^2)^2|\varphi'(z)|^2}{|1-\overline{a}\varphi(z)|^6}\,dA(z)\\ &=4|a|^2\int_{\mathbb{D}}\frac{|\varphi'(z)|^2(1-|a|^2)^4(1-|\varphi(z)|^2)^4}{(1-|\varphi(z)|^2)^4|1-\overline{a}\varphi(z)|^8}\frac{|1-\overline{a}\varphi(z)|^2}{(1-|a|^2)^2}\,dA(z)\\ &\geq4|a|^2\int_{\mathbb{D}}\frac{|\varphi'(z)|^2(1-|a|^2)^4(1-|\varphi(z)|^2)^4}{(1-|\varphi(z)|^2)^4|1-\overline{a}\varphi(z)|^8}\frac{(1-|a|)^2}{(1-|a|^2)^2}\,dA(z)\\ &\geq |a|^2\int_{\mathbb{D}}\frac{|\varphi'(z)|^2(1-|a|^2)^4(1-|\varphi(z)|^2)^4}{(1-|\varphi(z)|^2)^4|1-\overline{a}\varphi(z)|^8}\,dA(z)\\ &=|a|^2\int_{\mathbb{D}}\frac{|\varphi'(z)|^2}{(1-|\varphi(z)|^2)^4}\left(\frac{(1-|a|^2)(1-|\varphi(z)|^2)}{|1-\overline{a}\varphi(z)|^2}\right)^4\,dA(z)\\ &=|a|^2\int_{\mathbb{D}}|\varphi_a'(w)|^4\,d\mu(w), \end{aligned}\] where the last equality follows from Eqn.(\ref{eqn:involutionintegral}). It follows that (iv) holds as $|a|\to 1$. That (iv) implies (iii) is shown in \cite[Proposition 3.4]{Tjani:2003}. Lastly, we will show that (iii) implies (ii). To show $C_\varphi:A^2\to \mathcal{D}$ is compact, we will again appeal to Proposition \ref{prop:compactreformulation}. 
Let $(f_n)$ be a bounded sequence in $A^2$ with $(f_n)\to 0$ uniformly on compact subsets of $\mathbb{D}$. Then we are left to show that $(\|C_\varphi f_n\|_{\mathcal{D}})\to 0$. By our assumptions on $(f_n)$ we know $(|f_n(\varphi(0))|^2)\to 0$ and thus it is sufficient to show that $(\|C_\varphi f_n\|_{\mathcal{D}_0})\to 0$. Let $w\in\mathbb{D}$, $r=(1-|w|)/2$, and $D(w,r)=\{z\in\mathbb{D}: |z-w|<r\}$. As $|f_n'|^2$ is subharmonic, for $w\in\mathbb{D}$, we have \[|f_n'(w)|^2\leq \frac{1}{r^2}\int_{D(w,r)}|f_n'(z)|^2\,dA(z)=\frac{4}{(1-|w|)^2}\int_{D(w,r)}|f_n'(z)|^2\,dA(z).\] Then, from our previous estimate and Fubini's theorem, \[\begin{aligned}\|C_\varphi f_n\|^2_{\mathcal{D}_0}&=\int_\mathbb{D}|f_n'(\varphi(z))|^2|\varphi'(z)|^2\,dA(z)=\int_\mathbb{D}|f_n'(w)|^2\,d\mu(w)\\ &\leq\int_\mathbb{D}\frac{4}{(1-|w|)^2}\left(\int_{D(w,r)}|f_n'(z)|^2\,dA(z)\right)\,d\mu(w)\\ &=4\int_\mathbb{D}\frac{1}{(1-|w|)^2}\left(\int_\mathbb{D}\mbox{\Large$\chi$}_{D(w,r)}(z)|f_n'(z)|^2\,dA(z)\right)\,d\mu(w)\\ &=4\int_\mathbb{D}|f_n'(z)|^2\left(\int_\mathbb{D}\frac{\mbox{\Large$\chi$}_{D(w,r)}(z)}{(1-|w|)^2}\,d\mu(w)\right)\,dA(z).\end{aligned}\] If $z,w\in \mathbb{D}$ with $|w-z|<(1-|w|)/2$, then $(1-|w|)/2<1-|z|$ and, for $z=|z|e^{i\theta}$, we have \[|w-e^{i\theta}|\leq |w-z|+|z-e^{i\theta}|\leq \frac{1-|w|}{2}+(1-|z|)\leq 2(1-|z|).\] Thus, for a fixed $z\in\mathbb{D}$, if $w$ satisfies $|w-z|<(1-|w|)/2$, we have $w\in S(\theta,2(1-|z|))$. Now define functions $F$ and $G$ on $\mathbb{D}\times \mathbb{D}$ by $F(z,w)=\mbox{\Large$\chi$}_{D(w,r)}(z)$, where $r=(1-|w|)/2$, and $G(z,w)=\mbox{\Large$\chi$}_{S(\theta,2(1-|z|))}(w)$, where $z=|z|e^{i\theta}$. If $z$ is fixed and $w$ satisfies $z\in D(w,r)$, then $F(z,w)=1=G(z,w)$. Otherwise $F(z,w)=0\leq G(z,w)$. Thus $F(z,w)\leq G(z,w)$ on $\mathbb{D}\times \mathbb{D}$. Additionally, if $|w-z|<(1-|w|)/2$, then $1/(1-|w|)\leq (3/2)1/(1-|z|)$. 
With these two estimates, we have \[\begin{aligned}\|C_\varphi f_n\|^2_{\mathcal{D}_0}&\leq 9\int_\mathbb{D}\frac{|f_n'(z)|^2}{(1-|z|)^2}\left(\int_\mathbb{D}\mbox{\Large$\chi$}_{S(\theta,2(1-|z|))}(w)\,d\mu(w)\right)\,dA(z)\\ &=9\int_\mathbb{D}\frac{|f_n'(z)|^2}{(1-|z|)^2}\left(\int_{S(\theta,2(1-|z|))}\,d\mu(w)\right)\,dA(z).\\ \end{aligned}\] Now, with (iii) as our assumption, let $\varepsilon>0$ and choose $\delta>0$ such that $\mu(S(\theta,h))<\varepsilon h^4$ for $0<h<\delta$ and $\theta\in[0,2\pi)$. Then we can split the last integral and we have \[\|C_\varphi f_n\|_{\mathcal{D}_0}^2\leq I_1 + I_2,\] where \[I_1=9\int_{|z|>1-\delta/2}\frac{|f_n'(z)|^2}{(1-|z|)^2}\left(\int_{S(\theta,2(1-|z|))}\,d\mu(w)\right)\,dA(z)\] and \[I_2=9\int_{|z|\leq1-\delta/2}\frac{|f_n'(z)|^2}{(1-|z|)^2}\left(\int_{S(\theta,2(1-|z|))}\,d\mu(w)\right)\,dA(z).\] Utilizing our choice of $\delta$, we see that there is a positive constant $C$ such that \[\begin{aligned}I_1&\leq 9\cdot 2^4\varepsilon\int_{|z|>1-\delta/2}\frac{|f_n'(z)|^2}{(1-|z|)^2}(1-|z|)^4\,dA(z)\\ &\leq 9\cdot 2^4\varepsilon\int_\mathbb{D}|f_n'(z)|^2(1-|z|^2)^2\,dA(z)\\ &\leq C\varepsilon, \end{aligned}\] where the last integral represents an equivalent (semi-)norm on $A^2$ and is therefore bounded as $n$ varies by our conditions on $(f_n)$. Also, \[\begin{aligned}I_2&\leq \frac{36}{\delta^2}\int_{|z|\leq1-\delta/2}|f_n'(z)|^2\left(\int_\mathbb{D}\,d\mu(w)\right)\,dA(z)\\ &=\frac{36}{\delta^2}\|\varphi\|_{\mathcal{D}_0}^2\int_{|z|\leq1-\delta/2}|f_n'(z)|^2\,dA(z).\\ \end{aligned}\] By our assumptions on $(f_n)$, we know that $(f_n')\to 0$ uniformly on compact subsets of $\mathbb{D}$ (see \cite[Theorem V.1.6]{Lang:1993}) and hence we can choose $n\in\mathbb{N}$ large enough so that this last integral is bounded above by a constant multiple of $\varepsilon$. Thus we conclude that $(\|C_\varphi f_n\|_{\mathcal{D}_0})\to 0$ and hence $C_\varphi:A^2\to \mathcal{D}$ is compact as desired.
\end{proof} The following corollary follows from either Proposition \ref{prop:compactreformulation} or Theorem \ref{thm:compactness}. \begin{corollary}\label{cor:compact} Let $\varphi$ be a self-map of $\mathbb{D}$ so that $D_\varphi:\mathcal{D}\to \mathcal{D}$ is bounded. If $\|\varphi\|_{\infty}<1$, then $D_\varphi:\mathcal{D}\to \mathcal{D}$ is compact. \end{corollary} \begin{example} To generate a compact $D_\varphi$ for a self-map with boundary contact, we can modify the last self-map from Example \ref{exam:bounded}. We take $\Gamma_3$ to be similar to $\Gamma_2$ from the aforementioned example but we use the curve $x^{1/4}+y^{1/4}=1$ as the boundary curve in a neighborhood of 1. We then let $\varphi$ be a univalent map of $\mathbb{D}$ onto $\Gamma_3$. Again, since $\Gamma_3$ is contained in a non-tangential approach region at 1, we only need to verify condition (iii) from Theorem \ref{thm:compactness} at $\theta=0$, i.e., we need $\lim_{h\rightarrow 0}\mu(S(0,h))/h^4=0$. The computation is similar to those in the previous example and we omit the details. \end{example} Characterizing which $D_\varphi$ induce Hilbert-Schmidt operators on $\mathcal{D}$ is a consequence of \cite[Theorem 3.23]{CowenMacCluer:1995}. As is the case for composition operators acting on $\mathcal{D}$, this result has connections to the hyperbolic derivative of the inducing map. \begin{theorem} Let $\varphi$ be a self-map of $\mathbb{D}$. Then $D_\varphi:\mathcal{D}\to \mathcal{D}$ is Hilbert-Schmidt if and only if \[\int_{\mathbb{D}}\frac{|\varphi'(z)|^2}{(1-|\varphi(z)|^2)^4}\,dA(z)<\infty.\] \end{theorem} \begin{proof} We know that the sequence of functions $(e_n)$ defined by $e_0(z)=1$ and $e_n(z)=z^n/\sqrt{n}$, for $n\geq 1$, is an orthonormal basis for $\mathcal{D}$.
It follows that $D_\varphi$ is Hilbert-Schmidt on $\mathcal{D}$ if and only if \[\sum_{n=0}^{\infty}\|D_\varphi e_n\|_{\mathcal{D}}^2<\infty.\] Considering \myeqref{eqn:countingfunctioninnorm}, we see further that $D_\varphi$ is Hilbert-Schmidt if and only if \[\sum_{n=2}^{\infty}\|D_\varphi e_n\|_{\mathcal{D}_0}^2=\sum_{n=0}^{\infty}\int_\mathbb{D}(n+2)(n+1)^2|\varphi(z)|^{2n}|\varphi'(z)|^2\,dA(z)<\infty.\] Differentiating the geometric series three times, we have \[\sum_{n=0}^{\infty}(n+3)(n+2)(n+1)z^n=\frac{3!}{(1-z)^4},\] for $z\in\mathbb{D}$. Also, for $n\geq 0$, we have $1/3\leq (n+1)/(n+3)\leq 1$ and it follows that $\sum_{n=2}^{\infty}\|D_\varphi e_n\|_{\mathcal{D}_0}^2$ is bounded above and below by multiples of \[\sum_{n=0}^{\infty}\int_\mathbb{D}(n+3)(n+2)(n+1)|\varphi(z)|^{2n}|\varphi'(z)|^2\,dA(z)=3!\int_\mathbb{D}\frac{|\varphi'(z)|^2}{(1-|\varphi(z)|^2)^4}\,dA(z).\] The conclusion now follows. \end{proof} \section{Adjoints} In this section we will consider the adjoint of a composition-differentiation operator induced by a linear fractional map $\varphi$ with $\|\varphi\|_{\infty}<1$; we assume $\varphi$ is non-constant and ask the reader to recall Eqs. (\ref{eqn:lfphi}) and (\ref{eqn:lfsigma}). Note any such map will induce a bounded (and, in fact, compact) composition-differentiation operator on $\mathcal{D}$. Moreover, the conditions on $\varphi$, and hence on $\sigma$, guarantee that both $|cz+d|$ and $|-\overline{b}z+\overline{d}|$ are bounded away from 0 in the disk. For $\psi$ analytic in $\mathbb{D}$, we define the multiplication operator $T_\psi$ by \[T_\psi(f)=\psi\cdot f.\] The following lemma gives a sufficient condition for $T_\psi$ to be bounded on the Dirichlet space. \begin{lemma}\label{lemma:boundedmultiplicationoperator} Let $\psi$ be analytic in $\mathbb{D}$ with both $\psi$ and $\psi'$ in $H^{\infty}(\mathbb{D})$. Then $T_{\psi}$ is a bounded multiplication operator on $\mathcal{D}$. 
\end{lemma} For a bounded multiplication operator acting on a reproducing kernel Hilbert space, the adjoint of the multiplication operator has predictable behavior when acting on a kernel function; in particular, $T_\psi^*(K_w)=\overline{\psi(w)}K_w$. There are similar results for the adjoint of a composition or a composition-differentiation operator: $C_\varphi^*(K_w)=K_{\varphi(w)}$ and $D_\varphi^*(K_w)=K_{\varphi(w)}^{(1)}$. We will utilize these facts to prove the following theorem, which is an analogue of \cite[Theorem 1]{FatehiHammond:2020}. \begin{theorem} Let $\varphi$ be a linear fractional self-map of $\mathbb{D}$ with $\|\varphi\|_{\infty}<1$. Then $D_{\varphi}^*T_{K_{\sigma(0)}^{(1)}}^*=T_{K_{\varphi(0)}^{(1)}}D_{\sigma}$. \end{theorem} \begin{proof} For $\varphi$ and $\sigma$ as given above, $\varphi(0)=b/d$ and $\sigma(0)=-\overline{c}/\overline{d}$, from which we have \[ K_{\varphi(0)}^{(1)}(z)=\frac{\overline{d}z}{-\overline{b}z+\overline{d}}\] and \[K_{\sigma(0)}^{(1)}(z)=\frac{dz}{cz+d}.\] Note that both of these functions are bounded in $\mathbb{D}$, as are their respective derivatives, by the remarks preceding Lemma \ref{lemma:boundedmultiplicationoperator}, and hence the induced multiplication operators are bounded on $\mathcal{D}$ by the aforementioned lemma.
It follows immediately that \[D_{\varphi}^*T_{K_{\sigma(0)}^{(1)}}^*(K_w)=\overline{K_{\sigma(0)}^{(1)}(w)}D_{\varphi}^*(K_w)=\frac{\overline{dw}}{\overline{cw}+\overline{d}}K_{\varphi(w)}^{(1)}.\] Next, observe that \[D_{\sigma}(K_w)(z)=(K_w'\circ\sigma)(z)=\frac{\overline{w}}{1-\overline{w}\sigma(z)}\] and so \[\begin{aligned}T_{K_{\varphi(0)}^{(1)}}D_{\sigma}(K_w)(z)&=\left(\frac{\overline{d}z}{-\overline{b}z+\overline{d}}\right)\left(\frac{\overline{w}}{1-\overline{w}\sigma(z)}\right)\\ &=\frac{\overline{dw}z}{\overline{cw}+\overline{d}-(\overline{aw}+\overline{b})z}\\ &=\left(\frac{\overline{dw}}{\overline{cw}+\overline{d}}\right)\left(\frac{z}{1-\overline{\varphi(w)}z}\right)\\ &=\frac{\overline{dw}}{\overline{cw}+\overline{d}}K_{\varphi(w)}^{(1)}(z). \end{aligned}\] The kernel functions span a dense set in $\mathcal{D}$ and hence we conclude that $D_{\varphi}^*T_{K_{\sigma(0)}^{(1)}}^*=T_{K_{\varphi(0)}^{(1)}}D_{\sigma}$ on $\mathcal{D}$. \end{proof} For composition operators induced by linear fractional symbol acting on the Dirichlet space, the adjoint has the form $C_\varphi^*=C_{\sigma}+K$, where $K$ is a specific rank 2 operator (see \cite[Theorem 3.3]{GallardoRodriguez:2003}), whereas on the Hardy space the adjoints of the composition and composition-differentiation operators take more similar forms. Specifically, we have $C_\varphi^*T_{K_\sigma(0)}^*=T_{K_\varphi(0)}C_{\sigma}$ and $D_{\varphi}^*T_{K_{\sigma(0)}^{(1)}}^*=T_{K_{\varphi(0)}^{(1)}}D_{\sigma}$, respectively, where the kernels appearing here are those for the Hardy space (see \cite[Theorem 2]{Cowen:1988} and \cite[Theorem 1]{FatehiHammond:2020}). \section{Special Symbols}\label{Section:SpecialSymbols} In this section we investigate composition-differentiation operators induced by a monomial symbol. Specifically, we explore the norm and spectrum of such an operator.
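The kernel computation in the proof above is a finite algebraic identity, so it can be spot-checked numerically. In the sketch below the coefficients $a,b,c,d$ are an arbitrary illustrative choice for which $\varphi$ maps the disk well inside itself; the function names are ours:

```python
a, b, c, d = 0.2 + 0j, 0.1 + 0j, 0.05 + 0j, 1.0 + 0j  # phi(z) = (az+b)/(cz+d)

def phi(z):
    return (a * z + b) / (c * z + d)

def sigma(z):  # companion map sigma(z) = (conj(a) z - conj(c)) / (-conj(b) z + conj(d))
    return (a.conjugate() * z - c.conjugate()) / (-b.conjugate() * z + d.conjugate())

def K1(w, z):  # derivative-evaluation kernel K_w^{(1)}(z) = z / (1 - conj(w) z)
    return z / (1 - w.conjugate() * z)

w, z = 0.3 + 0.2j, 0.5 - 0.1j
# left side: T_{K^{(1)}_{phi(0)}} D_sigma applied to K_w, evaluated at z
lhs = K1(phi(0j), z) * (w.conjugate() / (1 - w.conjugate() * sigma(z)))
# right side: (conj(d) conj(w) / (conj(c) conj(w) + conj(d))) * K^{(1)}_{phi(w)}(z)
rhs = (d.conjugate() * w.conjugate() / (c.conjugate() * w.conjugate() + d.conjugate())) * K1(phi(w), z)
```

Both sides agree to machine precision, matching the displayed chain of equalities.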
\subsection{Norm}\label{Subsection:Norm} We first consider the norm of the operator $D_\varphi$ acting on $\mathcal{D}$ when $\varphi(z) = az^M$ for $0 < |a| < 1$ and $M\in\mathbb{N}$. Note that any such $\varphi$ will induce a compact operator on $\mathcal{D}$ by Corollary \ref{cor:compact}. The authors in \cite{FatehiHammond:2020} obtain a similar result for $D_\varphi$ acting on the Hardy space $H^2$. In that setting, they utilize certain facts about composition and multiplication operators on $H^2$ that do not hold on the Dirichlet space (or on the standard weighted Bergman or Dirichlet spaces). Specifically, the authors there rely on the fact that $T_z^*T_z=I$ for $T_z$ acting on the Hardy space. Instead, we will exploit the fact that the operator $D_\varphi$, for $\varphi$ as above, preserves the orthogonality of the monomials. For $0<|a|<1$ and $M\in\mathbb{N}$, we define the constants \[\nu = \left\lfloor\frac{2}{1-|a|^2}\right\rfloor\] and \[\mathcal{N}_M = \max\left\{1, \sqrt{M}\sqrt{\nu(\nu-1)}|a|^{\nu-1}\right\}.\] \begin{theorem} If $\varphi(z) = az^M$ for $0 < |a| < 1$ and $M$ in $\mathbb{N}$, then $\|D_\varphi\| = \mathcal{N}_M$. \end{theorem} \begin{proof} For $\varphi$ as given, we will show $\|D_\varphi\| \geq \mathcal{N}_M$ and $\|D_\varphi\| \leq \mathcal{N}_M$. To obtain the lower bound we first consider the orthonormal basis functions $(e_n)_{n\geq 0}$ defined by $e_0(z) = 1$ and $e_n(z) = z^n/\sqrt{n}$, $n\geq 1$.
For $z\in \mathbb{D}$, observe $e_0'(\varphi(z))=0$, \[e_1'(\varphi(z)) = 1 = e_0(z),\] and, for $n \geq 2$, \[e_n'(\varphi(z)) = \frac{n\left(az^M\right)^{n-1}}{\sqrt{n}} = \sqrt{n}a^{n-1}z^{M(n-1)} = \sqrt{M}\sqrt{n(n-1)}a^{n-1}e_{M(n-1)}(z).\] Thus \[\|D_\varphi\| \geq \max\left\{1,\sup_{n \geq 2}\sqrt{M}\sqrt{n(n-1)}|a|^{n-1}\right\}.\] The lower bound will follow if we can verify that \[\sup_{n \geq 2}\sqrt{M}\sqrt{n(n-1)}|a|^{n-1} = \sqrt{M}\sqrt{\nu(\nu-1)}|a|^{\nu-1}.\] Note that the function $g(x) = \sqrt{M}\sqrt{x(x-1)}|a|^{x-1}$ has exactly one critical point in $(1,\infty)$. This point is a local maximum and hence also the absolute maximum of $g$ on $(1,\infty)$. Thus the supremum we wish to compute is a maximum and will occur at the greatest integer $n \geq 2$ such that \[\sqrt{(n-1)(n-2)}|a|^{n-2} \leq \sqrt{n(n-1)}|a|^{n-1};\] equivalently, \[n \leq \frac{2}{1-|a|^2}.\] Thus, \begin{equation}\label{eqn:supremum}\sup_{n \geq 2}\sqrt{M}\sqrt{n(n-1)}|a|^{n-1} = \sqrt{M}\sqrt{\nu(\nu-1)}|a|^{\nu-1},\end{equation} which implies $\|D_\varphi\| \geq \mathcal{N}_M$. For the upper estimate, let $f(z) = \sum_{n=0}^{\infty} b_nz^n= b_0e_0(z)+\sum_{n=1}^{\infty}b_n\sqrt{n}e_n(z)$ be in $\mathcal{D}$. Then \[\begin{aligned}(D_\varphi f)(z)&= \sum_{n=1}^{\infty} b_n\sqrt{n}e_n'(\varphi(z))\\ &=b_1+\sum_{n=2}^{\infty}b_n\sqrt{n}\left(\sqrt{M}\sqrt{n(n-1)}a^{n-1}e_{M(n-1)}(z)\right). \end{aligned}\] With this and \myeqref{eqn:supremum}, we have \[\begin{aligned} \|D_\varphi f\|_{\mathcal{D}}^2 &= |b_1|^2+\sum_{n=2}^{\infty}|b_n|^2n\left(\sqrt{M}\sqrt{n(n-1)}|a|^{n-1}\right)^2\\ &\leq |b_1|^2+\sum_{n=2}^{\infty}|b_n|^2n\left(\sqrt{M}\sqrt{\nu(\nu-1)}|a|^{\nu-1}\right)^2\\ &\leq \mathcal{N}_M^2\left(|b_1|^2+\sum_{n=2}^{\infty}|b_n|^2n\right) \\ &\leq \mathcal{N}_M^2 \|f\|_{\mathcal{D}}^2\\ \end{aligned}\] and thus $\|D_\varphi\|\leq \mathcal{N}_M$ as desired. 
\end{proof} Further investigation of the quantities in the theorem reveals how the norm changes with respect to $|a|$. When $M=1$, $\|D_\varphi\|=1$ when $0<|a|\leq6^{-1/4}$ and $\|D_\varphi\|>1$ for $6^{-1/4}<|a|<1$. For $M\geq 2$, $\|D_\varphi\|=1$ when $0<|a|\leq1/\sqrt{2M}$ and $\|D_\varphi\|>1$ for $1/\sqrt{2M}<|a|<1$. In both cases, the norm tends to infinity as $|a|\to1$; see Figure \ref{Figure:NormDphi} below. As alluded to before the statement of the theorem, our proof here does not rely on operator-theoretic properties of multiplication operators that are specific to a certain space, but rather on the fact that, for these symbols, the composition-differentiation operator preserves the orthogonality of the monomials. This method applies to the Hardy space as well as a variety of other spaces. \begin{figure} \caption{Norm of $D_\varphi$ for $\varphi(z) = az^M$, as a function of $|a|$, with $M=1,2,3$.} \label{Figure:NormDphi} \end{figure} \subsection{Spectrum}\label{Subsection:Spectrum} Since a self-map $\varphi$ of the disk with $\|\varphi\|_\infty < 1$ induces a compact operator $D_\varphi$ on $\mathcal{D}$ by Corollary \ref{cor:compact}, as it does on the Hardy space $H^2$, the proof of the following theorem follows exactly as presented in \cite[Proposition 3]{FatehiHammond:2020}. \begin{theorem}\label{Theorem:SpectrumLinear} If $\varphi(z) = az+b$, for $0 < |a| < 1-|b|$, then the spectrum of $D_\varphi$ is $\{0\}$. \end{theorem} We now turn our attention to the spectrum of $D_\varphi$ with symbols of the form $\varphi(z) = az^M$, for $0 < |a| < 1$ and $M\in\mathbb{N}$. The previous theorem deals with the case $M=1$ and we are left to investigate the spectrum when $M \geq 2$. By the Spectral Theorem for Compact Operators (see \cite{Conway:1990}), the spectrum is countable, contains 0, and any non-zero element is an eigenvalue.
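Before turning to eigenvalues, we note that the norm formula of the preceding subsection is easy to confirm numerically: since $D_\varphi$ sends the orthonormal basis $(e_n)$ to pairwise orthogonal vectors, $\|D_\varphi\|=\sup_n\|D_\varphi e_n\|$, and this supremum can be compared with the closed form $\mathcal{N}_M$. The sketch below is an illustration, not part of the paper; the parameter values are arbitrary test choices.

```python
from math import floor, sqrt

def norm_brute(a, M, nmax=5000):
    # ||D_phi e_1|| = 1 and ||D_phi e_n|| = sqrt(M n (n-1)) |a|^(n-1) for n >= 2
    return max([1.0] + [sqrt(M * n * (n - 1)) * abs(a) ** (n - 1)
                        for n in range(2, nmax)])

def norm_formula(a, M):
    # closed form N_M with nu = floor(2 / (1 - |a|^2))
    nu = floor(2.0 / (1.0 - abs(a) ** 2))
    return max(1.0, sqrt(M * nu * (nu - 1)) * abs(a) ** (nu - 1))

assert abs(norm_brute(0.8, 2) - norm_formula(0.8, 2)) < 1e-9
assert abs(norm_brute(0.95, 3) - norm_formula(0.95, 3)) < 1e-9
assert norm_formula(0.5, 1) == 1.0   # below the threshold 6**(-1/4)
```

The thresholds $6^{-1/4}$ (for $M=1$) and $1/\sqrt{2M}$ (for $M\geq2$) quoted above are exactly where the second argument of the maximum reaches $1$.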
If we assume the existence of a non-zero eigenvalue $\lambda$ of $D_\varphi$, then any associated eigenfunction $f$ satisfies \[\frac{d^n}{dz^n}\left[\lambda f(z)\right] = \frac{d^n}{dz^n}\left[f'(\varphi(z))\right]\] for all $z$ in $\mathbb{D}$ and $n \geq 0$. We will compute high-order derivatives of the function $f'\circ\varphi$ utilizing Fa\`a di Bruno's Formula (see \cite[Theorem C of Section 3.4]{Comtet:1974}) \begin{equation}\label{Equation:FaadiBruno}\frac{d^n}{dz^n} f'(\varphi(z)) = \sum_{k=1}^n f^{(k+1)}(\varphi(z))B_{n,k}\left(\varphi'(z),\dots,\varphi^{(n-k+1)}(z)\right),\end{equation} where $B_{n,k}(x_1,\dots,x_{n-k+1})$ is the Bell polynomial (see \cite[Theorem A of Section 3.3]{Comtet:1974}) defined as \begin{equation}\label{Equation:BellPolynomial}B_{n,k}(x_1,\dots,x_{n-k+1}) = \sum\frac{n!}{c_1!c_2!\cdots c_{n-k+1}!}\left(\frac{x_1}{1!}\right)^{c_1}\cdots\left(\frac{x_{n-k+1}}{(n-k+1)!}\right)^{c_{n-k+1}} \end{equation} The summation in \myeqref{Equation:BellPolynomial} is taken over all sequences $c_1, \dots, c_{n-k+1}$ of non-negative integers that satisfy \begin{equation}\label{Equation:SequenceRelations} \begin{cases} c_1 + c_2 + \cdots + c_{n-k+1} = k\\ 1c_1 + 2c_2 + \cdots + (n-k+1)c_{n-k+1} = n. \end{cases} \end{equation} Of particular interest in this section is the relation \[\frac{d^n}{dz^n}\left[\lambda f(z)\right]{\bigg\rvert}_{z = 0} = \frac{d^n}{dz^n}\left[f'(\varphi(z))\right]{\bigg\rvert}_{z = 0}.\] Combining this with \myeqref{Equation:FaadiBruno} and the fact that $\varphi(0)=0$ yields that an eigenfunction $f$ associated with a non-zero eigenvalue $\lambda$ must satisfy \begin{equation}\label{Equation:a0Relation} f(0) = \frac{1}{\lambda}f'(0) \end{equation} and \begin{equation}\label{Equation:1stanRelation} f^{(n)}(0) = \frac{1}{\lambda}\sum_{k=1}^n f^{(k+1)}(0)B_{n,k}\left(\varphi'(0),\dots,\varphi^{(n-k+1)}(0)\right) \end{equation} for all $n$ in $\mathbb{N}$. 
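The incomplete Bell polynomials entering \myeqref{Equation:FaadiBruno} can also be generated from the standard recurrence $B_{n,k}=\sum_{i=1}^{n-k+1}\binom{n-1}{i-1}x_iB_{n-i,k-1}$, with $B_{0,0}=1$ and $B_{n,0}=B_{0,k}=0$ for $n,k\geq1$, which is often more convenient than the partition sum \myeqref{Equation:BellPolynomial}. A short sketch (an illustration, not part of the paper):

```python
from math import comb

def bell(n, k, x):
    """Incomplete Bell polynomial B_{n,k}(x[0], ..., x[n-k]), computed
    via B_{n,k} = sum_{i=1}^{n-k+1} C(n-1, i-1) * x_i * B_{n-i, k-1}."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return sum(comb(n - 1, i - 1) * x[i - 1] * bell(n - i, k - 1, x)
               for i in range(1, n - k + 2))

assert bell(4, 2, [1, 1, 1]) == 7            # B_{4,2} = 3 x_2^2 + 4 x_1 x_3
assert bell(5, 1, [0, 0, 0, 0, 9]) == 9      # B_{n,1}(0,...,0,x_n) = x_n
assert bell(4, 3, [0, 7]) == 0               # B_{n,n-1}(0,x_2) = 0 for n != 2
assert bell(2, 1, [0, 7]) == 7               # but B_{2,1}(0,x_2) = x_2
```

The last three assertions reproduce special values of the kind collected in the lemma below.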
Since all but one derivative of $\varphi$ will be zero at $z=0$, many of the Bell polynomials present in \myeqref{Equation:1stanRelation} will have many, if not all, of the $x_j$ terms equal to 0. Those Bell polynomials used in this section are collected here for convenience. \begin{lemma}\label{Lemma:BellPolynomialValues} If $j,k,n$ are in $\mathbb{N}$ with $n > 1$, $k \leq n$, and $1 < j < n-k+1$, then \begin{enumerate} \item $B_{n,k}(0,\dots,0) = 0$, \item $B_{n,1}(0,\dots,0,x_n) = x_n$, \item $B_{n,1}(0,\dots,0,x_j,0,\dots,0) = 0$, \item $B_{n,n-1}(0,x_2) = 0$ for $n \neq 2$. \end{enumerate} \end{lemma} \begin{remark}\label{Remark:BellPolynomialReduction} Since the symbol is $\varphi(z) = az^M$ with $M \geq 2$, we have $\varphi^{(\ell)}(0) = 0$ for all $\ell \neq M$. In particular, $\varphi'(0) = 0$. From Lemma \ref{Lemma:BellPolynomialValues} we obtain \[B_{n,n}\left(\varphi'(0),\dots,\varphi^{(n-k+1)}(0)\right) = B_{n,n}(\varphi'(0)) = B_{n,n}(0) = 0.\] Thus, \myeqref{Equation:1stanRelation} reduces to \[f'(0) = \frac{1}{\lambda}f''(0)B_{1,1}(\varphi'(0)) = 0\] and \begin{equation}\label{Equation:anRelation} f^{(n)}(0) = \frac{1}{\lambda}\sum_{k=1}^{n-1} f^{(k+1)}(0)B_{n,k}\left(0,\varphi''(0),\dots,\varphi^{(n-k+1)}(0)\right) \end{equation} for $n > 1$. From \myeqref{Equation:a0Relation}, it must hold that $f(0) = 0$ as well. \end{remark} We present the main result of this section as Theorem \ref{Theorem:SpectralResults}. We see that all maps of the form $\varphi(z)=az^M$ induce a quasinilpotent operator $D_\varphi$ on $\mathcal{D}$, except for $az^2$. \begin{theorem}\label{Theorem:SpectralResults} If $\varphi(z) = az^M$, for $0 < |a| < 1$ and $M$ in $\mathbb{N}$, then the spectrum of $D_\varphi$ is given by \[\sigma(D_\varphi) = \begin{cases}\hfil \{0, 2a\} & \text{if $M = 2$}\\ \hfil\{0\} & \text{otherwise.} \end{cases} \] \end{theorem} \noindent For the proof of Theorem \ref{Theorem:SpectralResults}, we require the following lemmas.
These results determine the quantities $f^{(n)}(0)$ if $f$ is an eigenfunction associated with a non-zero eigenvalue $\lambda$. The cases of $n < M$, $n=M$, and $n > M$ are considered separately. \begin{lemma}\label{Lemma:CoefficientsLessThanM} Suppose $\varphi(z) = az^M$, for $0 < |a| < 1$ and $M \geq 2$. If $\lambda$ is a non-zero eigenvalue of the induced operator $D_\varphi$ with associated eigenfunction $f$, then $f^{(n)}(0) = 0$ for all $n < M$. \end{lemma} \begin{proof} Note that Remark \ref{Remark:BellPolynomialReduction} shows the lemma to be true for the case $M=2$. For $M > 2$, the same remark shows $f^{(n)}(0) = 0$ for $n \leq 1$. Thus we consider the situation that $M > 2$ and $1 < n < M$. Observe that for $k$ in $\mathbb{N}$ with $1 \leq k \leq n-1$ it follows that $2 \leq n-k+1 \leq n < M$. Thus $\varphi^{(\ell)}(0) = 0$ for all $1 \leq \ell \leq n-k+1$. So \myeqref{Equation:anRelation}, in conjunction with Lemma \ref{Lemma:BellPolynomialValues} yields \[\begin{aligned} f^{(n)}(0) &= \frac{1}{\lambda}\sum_{k=1}^{n-1}f^{(k+1)}(0)B_{n,k}\left(0,\varphi''(0),\dots,\varphi^{(n-k+1)}(0)\right)\\ &= \frac{1}{\lambda}\sum_{k=1}^{n-1}f^{(k+1)}(0)B_{n,k}(0,\dots,0)\\ &= 0, \end{aligned}\] as desired. \end{proof} \begin{lemma}\label{Lemma:CoefficientEqualToM} Suppose $\varphi(z) = az^M$, for $0 < |a| < 1$ and $M \geq 2$. Furthermore, suppose $\lambda$ is a non-zero eigenvalue of the induced operator $D_\varphi$ with associated eigenfunction $f$. Then $f^{(M)}(0) = 0$ unless $M=2$ and $\lambda=2a$. \end{lemma} \begin{proof} First note that $\varphi^{(\ell)}(0) = 0$ for all $\ell \neq M$ and $\varphi^{(M)}(0) = (M!)a$. Suppose $k$ in $\mathbb{N}$ such that $2 \leq k \leq M-1$. Observe that $2 \leq M-k+1 \leq M-1$. 
Thus \myeqref{Equation:anRelation} reduces to \begin{equation}\label{Equation:MDerivative} \begin{aligned} f^{(M)}(0) &= \frac{1}{\lambda}\sum_{k=1}^{M-1} f^{(k+1)}(0)B_{M,k}\left(0,\varphi''(0),\dots,\varphi^{(M-k+1)}(0)\right)\\ &= \frac{1}{\lambda}f''(0)B_{M,1}\left(0,\varphi''(0),\dots,\varphi^{(M)}(0)\right)\\ &\qquad + \frac{1}{\lambda}\sum_{k=2}^{M-1} f^{(k+1)}(0)B_{M,k}\left(0,\varphi''(0),\dots,\varphi^{(M-k+1)}(0)\right)\\ &= \frac{B_{M,1}\left(0,\dots,0,(M!)a\right)}{\lambda}f''(0) + \frac{1}{\lambda}\sum_{k=2}^{M-1} f^{(k+1)}(0)B_{M,k}(0,\dots,0)\\ &= \frac{(M!)a}{\lambda}f''(0). \end{aligned} \end{equation} Suppose $M = 2$. Then \myeqref{Equation:MDerivative} becomes $f''(0) = \frac{2a}{\lambda}f''(0)$. So it must be that $f''(0) = 0$ unless $\lambda = 2a$. Finally, suppose $M > 2$. Then $f''(0) = 0$ by Lemma \ref{Lemma:CoefficientsLessThanM}. So \myeqref{Equation:MDerivative} reduces to $f^{(M)}(0) = 0$. \end{proof} \begin{lemma}\label{Lemma:CoefficientsGreaterThanM} Suppose $\varphi(z) = az^M$ for $0 < |a| < 1$ and $M \geq 2$. If $\lambda$ is a non-zero eigenvalue of the induced operator $D_\varphi$ with associated eigenfunction $f$, then $f^{(n)}(0) = 0$ for all $n > M$. \end{lemma} \begin{proof} We will prove this result by induction on $n$. First, we will show $f^{(M+1)}(0) = 0$. We will consider the cases of $M=2$ and $M > 2$ separately. \begin{case}{1} Suppose $M=2$. We can write \myeqref{Equation:anRelation} as \[\begin{aligned} f'''(0) &= \frac{1}{\lambda}f''(0)B_{3,1}\left(0,\varphi''(0),\varphi'''(0)\right) + \frac{1}{\lambda}f'''(0)B_{3,2}\left(0,\varphi''(0)\right)\\ &= \frac{1}{\lambda}f''(0)B_{3,1}(0,2a,0) + \frac{1}{\lambda}f'''(0)B_{3,2}(0,2a). \end{aligned}\] From Lemma \ref{Lemma:BellPolynomialValues} it follows that $B_{3,1}(0,2a,0) = 0 = B_{3,2}(0,2a)$. Thus when $M=2$, $f^{(M+1)}(0) = 0$. Now, suppose $f^{(3)}(0) = \cdots = f^{(n-1)}(0) = 0$ for some $n \geq 4$. We will now show $f^{(n)}(0) = 0$. 
We write \myeqref{Equation:anRelation} as \[\begin{aligned} f^{(n)}(0) &= \frac{1}{\lambda}\sum_{k=1}^{n-1}f^{(k+1)}(0)B_{n,k}\left(0,\varphi''(0),\dots,\varphi^{(n-k+1)}(0)\right)\\ &= \frac{1}{\lambda}f''(0)B_{n,1}(0,2a,0,\dots,0) + \frac{1}{\lambda}f^{(n)}(0)B_{n,n-1}(0,2a)\\ &\qquad + \frac{1}{\lambda}\sum_{k=2}^{n-2}f^{(k+1)}(0)B_{n,k}(0,2a,0,\dots,0). \end{aligned}\] For $2 \leq k \leq n-2$, we have $3 \leq k+1 \leq n-1$. So $f^{(k+1)}(0) = 0$ by the inductive hypothesis. From Lemma \ref{Lemma:BellPolynomialValues}, $B_{n,1}(0,2a,0,\dots,0) = 0$ and $B_{n,n-1}(0,2a) = 0$ since $n \geq 4$. Thus $f^{(n)}(0) = 0$. \end{case} \begin{case}{2} Suppose $M>2$. We will first show $f^{(M+1)}(0) = 0$. In this case, \myeqref{Equation:anRelation} becomes \[\begin{aligned} f^{(M+1)}(0) &= \frac{1}{\lambda}\sum_{k=1}^{M-1}f^{(k+1)}(0)B_{M+1,k}\left(0,\varphi''(0),\dots,\varphi^{(M-k+2)}(0)\right) + \frac{1}{\lambda}f^{(M+1)}(0)B_{M+1,M}(0,0). \end{aligned}\] For $1\leq k\leq M-1$ it follows that $f^{(k+1)}(0) = 0$ by Lemmas \ref{Lemma:CoefficientsLessThanM} and \ref{Lemma:CoefficientEqualToM} since $2 \leq k+1 \leq M$. From Lemma \ref{Lemma:BellPolynomialValues} we have $B_{M+1,M}(0,0) = 0$. Thus $f^{(M+1)}(0) = 0$. Finally, suppose $f''(0) = \cdots = f^{(n-1)}(0) = 0$ for some $n \geq M+2$. We will show $f^{(n)}(0) = 0$. We write \myeqref{Equation:anRelation} as \[\begin{aligned} f^{(n)}(0) &= \frac{1}{\lambda}\sum_{k=1}^{n-1} f^{(k+1)}(0)B_{n,k}\left(0,\varphi''(0),\dots,\varphi^{(n-k+1)}(0)\right)\\ &= \frac{1}{\lambda}\sum_{k=1}^{n-2}f^{(k+1)}(0)B_{n,k}\left(0,\varphi''(0),\dots,\varphi^{(n-k+1)}(0)\right) + \frac{1}{\lambda}f^{(n)}(0)B_{n,n-1}(0,\varphi''(0))\\ &= \frac{1}{\lambda}f^{(n)}(0)B_{n,n-1}(0,0)\\ &= 0, \end{aligned}\] where the sum over $1 \leq k \leq n-2$ vanishes by the inductive hypothesis, $\varphi''(0) = 0$ since $M > 2$, and $B_{n,n-1}(0,0) = 0$ by Lemma \ref{Lemma:BellPolynomialValues}. \end{case} \noindent In either case, $f^{(n)}(0) = 0$ for all $n > M$. \end{proof} \noindent We can now prove Theorem \ref{Theorem:SpectralResults} as follows.
\begin{proof}[Proof of Theorem \ref{Theorem:SpectralResults}] As Theorem \ref{Theorem:SpectrumLinear} shows, the conclusion holds for the case of $M=1$. We proceed by considering the following two cases. \begin{case}{1} Suppose $M=2$. Note the function $g(z) = z^2$ is a non-zero function in $\mathcal{D}$. For each $z$ in $\mathbb{D}$, we see \[(D_\varphi g)(z) = g'(\varphi(z)) = 2\varphi(z) = 2(az^2) = 2ag(z).\] Thus $2a$ is an eigenvalue of $D_\varphi$ with associated eigenfunction $g$. So $\{0,2a\} \subseteq \sigma(D_\varphi)$. Finally, assume, for purposes of contradiction, that $\lambda$ is a non-zero eigenvalue of $D_\varphi$ that is not $2a$, with associated eigenfunction $f$. Then it follows from Lemmas \ref{Lemma:CoefficientsLessThanM}, \ref{Lemma:CoefficientEqualToM}, and \ref{Lemma:CoefficientsGreaterThanM} that $f^{(n)}(0) = 0$ for all $n \geq 0$. So $f$ is identically 0, a contradiction. Thus $\sigma(D_\varphi) = \{0,2a\}$. \end{case} \begin{case}{2} Suppose $M > 2$ and assume, for purposes of contradiction, that $\lambda$ is a non-zero eigenvalue of $D_\varphi$ with associated eigenfunction $f$. Then it follows directly from Lemmas \ref{Lemma:CoefficientsLessThanM}, \ref{Lemma:CoefficientEqualToM}, and \ref{Lemma:CoefficientsGreaterThanM} that $f^{(n)}(0) = 0$ for all $n \geq 0$. So $f$ is identically 0, a contradiction. Thus $\sigma(D_\varphi) = \{0\}$. \end{case} \noindent Therefore, the spectrum of $D_\varphi$ has been established, as desired. \end{proof} \begin{remark} The arguments made in this section apply to any functional Banach space $\mathcal{X}$ that contains the polynomials and on which $\varphi(z) = az^M$, $0 < |a| < 1$ and $M$ in $\mathbb{N}$, induces a compact operator $D_\varphi$. As an example, these results apply to $D_\varphi$ acting on $H^2(\mathbb{D})$ and thus add to the results in Section 2 of \cite{FatehiHammond:2020}. Additionally, these results hold for $D_\varphi$ acting on the Bloch space, adding to the results of \cite{Ohno:2009}.
\end{remark} \begin{remark} At the end of \cite{FatehiHammond:2020}, the authors pose the question: \centerline{``Is $D_\varphi$ [acting on $H^2$] quasinilpotent whenever $\varphi$ is univalent and $\|\varphi\|_\infty < 1$?''} \noindent While Theorem \ref{Theorem:SpectralResults} does not answer this question, in light of the previous remark it does show that the symbols of quasinilpotent $D_\varphi$ acting on $H^2$ or $\mathcal{D}$ need not be univalent. It also shows that $\|\varphi\|_{\infty}<1$ alone is not enough to guarantee that the operator is quasinilpotent. We believe this line of inquiry poses an interesting open question. \end{remark} \end{document}
\begin{document} \begin{frontmatter} \title{Wavepacket approach to particle diffraction by thin targets: \\ quantum trajectories and arrival times} \author[ceft]{C. Efthymiopoulos} \ead{[email protected]} \author[ndel]{N. Delis} \ead{[email protected]} \author[gcon]{G. Contopoulos} \ead{[email protected]} \address[ceft,ndel,gcon]{Research Center for Astronomy and Applied Mathematics, Academy of Athens} \address[ndel]{Department of Physics, University of Athens, Panepistimiopolis, 153 42 Athens, Greece} \begin{abstract} We develop a wavepacket approach to the diffraction of charged particles by a thin material target and we use the de Broglie-Bohm quantum trajectories to study various phenomena in this context. We construct a particle wave function model given as the sum of two terms $\psi=\psi_{ingoing}+\psi_{outgoing}$, each having a wavepacket form with longitudinal and transverse quantum coherence lengths both finite. We find the form of the separator, i.e., the limit between the domains of prevalence of the ingoing and outgoing quantum flow. The structure of the quantum-mechanical currents in the neighborhood of the separator implies the formation of an array of \emph{quantum vortices} (nodal point - X point complexes). The X point gives rise to stable and unstable manifolds, whose directions determine the scattering of the de Broglie - Bohm trajectories. We show how the deformation of the separator near Bragg angles explains the emergence of a diffraction pattern by the de Broglie - Bohm trajectories. We calculate the arrival time distributions for particles scattered at different angles. A main prediction is that the arrival time distributions have a dispersion proportional to $v_0^{-1}\times$ the largest of the longitudinal and transverse coherence lengths, where $v_0$ is the mean velocity of incident particles. We also calculate time-of-flight differences $\Delta T$ for particles scattered in different angles.
The predictions of the de Broglie - Bohm theory for $\Delta T$ turn out to be different from estimates of the same quantity using other theories on time observables like the sum-over-histories or the Kijowski approach. We propose an experimental setup aiming to test such predictions. Finally, we explore the semiclassical limit of short wavelength and short quantum coherence lengths, and demonstrate how, in this case, results with the de Broglie - Bohm trajectories are similar to the classical results of Rutherford scattering. \end{abstract} \begin{keyword} Particle diffraction; de Broglie - Bohm trajectories \end{keyword} \end{frontmatter} \section{Introduction} The {\it de Broglie - Bohm quantum trajectories} \cite{debro1928} \cite{bohm1952}\cite{bohmhil1993}\cite{hol1993} have been considered as an interpretational tool in a number of recent applications (see \cite{wya2005}\cite{durteu2009}\cite{cha2010} for reviews), since they can offer new insight into a variety of complex quantum phenomena. According to the de Broglie-Bohm theory, to any wavefunction $\psi(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N,t)$ describing an $N$-particle system, we can associate a set of `quantum trajectories'. One trajectory is defined by the initial conditions $(\mathbf{r}_1(0),\mathbf{r}_2(0),\ldots,\mathbf{r}_N(0))$ and by the `pilot wave' equations of motion \begin{equation}\label{sch2} {d\mathbf{r}_i\over dt}={\hbar\over m_i}Im({\nabla_i\psi\over \psi}),~~~i=1,\ldots N \end{equation} where $m_i$ are the particle masses and $\hbar$ is Planck's constant. The equations of motion (\ref{sch2}) imply the continuity equation for the probability density $\rho(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N,t) = |\psi(\mathbf{r}_1,\mathbf{r}_2,\ldots,\mathbf{r}_N,t)|^2$. In particular, in a one-particle system we can choose many different initial conditions corresponding to an initial density $\rho(\mathbf{r},0)=|\psi(\mathbf{r},0)|^2$.
Then, the pilot-wave equations guarantee the preservation of Born's rule ${\rho(\mathbf{r},t)}=|\psi(\mathbf{r},t)|^2$ at all subsequent times $t$. Furthermore, the de Broglie - Bohm trajectories are equivalent to the stream lines of the quantum probability current $\mathbf{j}=(\hbar/2mi) (\psi^*\nabla\psi-\psi\nabla\psi^*)$. Thus, the Bohmian approach yields practically equivalent results to Madelung's quantum hydrodynamics \cite{mad1926}. The de Broglie - Bohm theory has been discussed extensively from the point of view of its relevance as a consistent interpretation of quantum mechanics (e.g. \cite{hol1993}, \cite{bacval2009}; see \cite{tow2011} for an extended list of references). However, the employment of the de Broglie - Bohm trajectories has been proven useful also in many {\it practical} aspects of the study of quantum systems. Some modern applications are: i) Visualization of quantum processes: examples are barrier penetration or the quantum tunneling effect \cite{hiretal1974a} \cite{dewhil1982}\cite{skoetal1989}\cite{lopwya1999}, the (particle) two-slit experiment \cite{phietal1979}, ballistic transport through `quantum wires' \cite{beehou1991}\cite{beretal2001}, molecular dynamics \cite{gin2003}, dynamics in nonlinear systems with classical focal points or caustics \cite{zhamak2003}, and rotational or atom-surface scattering \cite{ginetal2002} \cite{sanzetal2004a}\cite{sanzetal2004b}. ii) Lagrangian solvers of Schr\"{o}dinger equation via swarms of evolving Bohmian trajectories (see \cite{wya2005} for a comprehensive review, as well as \cite{sanzetal2002} \cite{sanzetal2004a}\cite{sanzetal2004b}\cite{ori2007}). 
The interest of this method lies in the fact that, instead of solving Schr\"{o}dinger's equation first, one uses a step-by-step procedure to calculate the trajectories via Newton's second order equations of motion in a potential \begin{equation}\label{sch3} {U(\mathbf{r},t)}=V(\mathbf{r},t)+Q(\mathbf{r},t) \end{equation} where ${Q(\mathbf{r},t)}$ is the `quantum potential', generated by the wavefunction $\psi$: \begin{equation}\label{sch4} {Q(\mathbf{r},t)}=-{\hbar^2\over 2m}{\nabla^2|\psi|\over |\psi| }. \end{equation} Using the information of the initial value of the wavefunction as well as the evolution of the quantum trajectories, the wavefunction can then be determined at any subsequent time step. iii) Dynamical origin of the {\it quantum relaxation} \cite{valwes2005} \cite{eftcon2006}\cite{ben2010}\cite{colstru2010}\cite{towetal2011}. The de Broglie - Bohm theory offers a justification of Born's rule $\rho=|\psi|^2$, since it predicts that, under some conditions, the quantum trajectories lead to an asymptotic (in time) approach towards this rule even if it was initially allowed that $\rho_{initial}\neq |\psi_{initial}|^2$. It should be noted that not all choices of $\rho_{initial}$ are guaranteed to lead to quantum relaxation, and counter-examples can be found, for reasons explained in \cite{eftcon2006}. The arguments used in that paper to explain the suppression of the quantum relaxation effect in the two-slit experiment apply also to many other cases (see e.g. \cite{sanzetal2000}). In particular, a necessary condition for quantum relaxation to take place is that the trajectories should exhibit {\it chaotic} behavior (see \cite{valwes2005} \cite{eftcon2006}); however, even this condition is not sufficient (see \cite{conetal2011}).
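As a concrete illustration of how trajectories are obtained from the guidance equation (\ref{sch2}), consider the textbook case of a free one-dimensional Gaussian wavepacket with zero mean momentum (units $\hbar=m=1$, initial width $\sigma_0$). For this $\psi$ the guidance velocity reduces to $v(x,t)=xt/(4\sigma_0^4+t^2)$, and the trajectories are known exactly: $x(t)=x(0)\,\sigma(t)/\sigma_0$ with $\sigma(t)=\sigma_0\sqrt{1+t^2/(4\sigma_0^4)}$. The sketch below (an illustration, not part of the paper) integrates the guidance equation numerically and recovers this spreading law:

```python
SIGMA0 = 1.0  # initial packet width (hbar = m = 1)

def v(x, t):
    # guidance velocity (hbar/m) Im(psi'/psi) for the free Gaussian packet
    return x * t / (4.0 * SIGMA0 ** 4 + t * t)

def trajectory(x0, t_end, dt=1e-3):
    """Integrate the pilot-wave equation dx/dt = v(x,t) with classical RK4."""
    x, t = x0, 0.0
    for _ in range(int(round(t_end / dt))):
        k1 = v(x, t)
        k2 = v(x + 0.5 * dt * k1, t + 0.5 * dt)
        k3 = v(x + 0.5 * dt * k2, t + 0.5 * dt)
        k4 = v(x + dt * k3, t + dt)
        x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        t += dt
    return x

# trajectories expand with the packet width and never cross
assert abs(trajectory(1.0, 2.0) - 2 ** 0.5) < 1e-8
assert abs(trajectory(0.5, 2.0) - 0.5 * 2 ** 0.5) < 1e-8
assert trajectory(0.0, 2.0) == 0.0
```

Swarms of such trajectories, each weighted by $|\psi|^2$ at $t=0$, reproduce $|\psi(x,t)|^2$ at later times, which is the basis of the Lagrangian solvers mentioned above.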
The problem of chaos in the de Broglie - Bohm theory has been studied extensively (indicative references are \cite{duretal1992}\cite{faisch1995}\cite{parval1995}\cite{depol1996} \cite{dewmal1996}\cite{iacpet1996}\cite{fri1997}\cite{konmak1998} \cite{wuspru1999}\cite{maketal2000}\cite{cush2000}\cite{desalflo2003} \cite{falfon2003}\cite{wispuj2005}\cite{wisetal2007}\cite{schfor2008}). We have worked on this problem in \cite{eftcon2006}\cite{eftetal2007} \cite{coneft2008}\cite{eftetal2009}\cite{deletal2011}. Our main result was that chaos is due to the presence of {\it moving quantum vortices} forming `nodal point - X-point complexes' (\cite{eftetal2007}\cite{coneft2008}\cite{eftetal2009}; see also \cite{wispuj2005}\cite{wisetal2007}). Quantitative studies of chaos and of the effects of vortices are presented in \cite{fri1997}\cite{schfor2008}\cite{eftetal2009}. In particular, in \cite{eftetal2009} we made a theoretical analysis of the dependence of Lyapunov exponents of the quantum trajectories on the size and speed of the quantum vortices, thus explaining numerical results found in \cite{eftetal2007} and \cite{coneft2008}. Furthermore, in \cite{eftcon2006} we gave examples of systems which do or do not exhibit quantum relaxation, depending on whether or not their underlying trajectories are chaotic. It should be emphasized that, besides chaos, the quantum vortices play a key role in a variety of quantum dynamical phenomena (e.g. \cite{mccwya1971}\cite{hiretal1974a} \cite{hiretal1974b}\cite{sanzetal2004a}\cite{sanzetal2004b}). iv) {\it Arrival times and times of flight}. In the traditional formulation of quantum mechanics time is only a parameter in Schr\"{o}dinger's equation, since by a theorem of Pauli \cite{pau1926} no definition of a self-adjoint time-operator consistent with all axioms of quantum mechanics can be given in a system with energy spectrum bounded from below. Time, however, is an experimental observable. 
Various approaches in the literature, reviewed in \cite{muglea2000}\cite{mugetal2002}, have addressed the question of a consistent definition of quantum probability distributions for time observables. Besides the Bohmian approach, two other approaches are: a) the `sum-over-histories' approach \cite{hartle1988}\cite{yamtak1993} based on Feynman paths, and b) the approach of Kijowski \cite{kij1974}, based on the definition of quantum states acted upon by the so-called 'Bohm-Aharonov operator' (see \cite{muglea2000}). On the other hand, the de Broglie - Bohm approach gives a straightforward answer to this problem, since the time needed to connect any two points along a quantum trajectory is a well defined quantity (see \cite{lea1990a} \cite{lea1990b}). Regarding this latter point, a key remark that will concern us in the sequel is that a consistent definition of the arrival times, that would allow in principle for a comparison of the various approaches in specific quantum systems, is only possible provided that the initial wavefunction is {\it localized in space}, i.e. it is described by a wavepacket model. Being motivated by the latter remark, in the present paper we present a study of the de Broglie - Bohm trajectories in a wavepacket model referring to a quantum phenomenon that has played a fundamental role in the development of quantum mechanics, namely the {\it diffraction} of charged particles (e.g. electrons or ions) by a thin material target. A theoretical study on the quantum scattering problem in the framework of the de Broglie - Bohm approach has been presented in the series of works \cite{dau1996}\cite{dauetal1996}\cite{dauetal1997} \cite{durretal2000}\cite{durretal2006}. These studies refer to the establishment of the rules of scattering probabilities using the `flux across surfaces' theorem adapted to the concept of quantum trajectories. 
However, they do not deal with the form of the quantum trajectories or the emergence of diffraction patterns under specific scattering potentials. A numerical simulation of Rutherford scattering by a single nucleus has been presented in \cite{mei2006}, while in \cite{sanzetal2002} the phenomena of atom-surface scattering as well as neutron diffraction by slits are considered, which share some common features, but also important differences, with our problem. In the present paper we make a detailed study of the de Broglie - Bohm trajectories in the context of a wavepacket model of charged particle diffraction, by first investigating the form of the {\it quantum currents} corresponding to various cases of this model. These cases are diversified one from the other by the different quantitative relations characterizing the so-called {\it quantum coherence lengths} in the longitudinal and transverse directions of the charged particle beam. This is necessary in order to be able to compare the results corresponding to possibly different experimental realizations of a charged particle beam, as e.g. in the case of electrons produced either by thermionic or by a cold-field emission processes. Our present study completes in a substantial way the study initiated in a previous paper of ours \cite{deletal2011}, in which we implemented the de Broglie - Bohm approach in the case of electron diffraction through a thin crystal. In that study, however, we assumed a planar wave model for the propagation of the electron wavefunction in the longitudinal direction. In contrast, in the present paper we assume instead a finite longitudinal quantum coherence length. This assumption leads to a number of crucial new elements with respect to \cite{deletal2011}. In fact, in order to achieve our goal we derive a wavefunction model by a refined implementation of basic scattering theory, so as to account for a fully-localized in space description of scattering. 
The derivation of this model is of interest in its own right, and it is presented in detail in section 2. \begin{figure} \caption{The basic setup of the problem under study. A source (S) emits charged particles described by an `ingoing' wavefunction having the form of a wavepacket with dispersions $l$ in the longitudinal direction (z-axis $\equiv$ direction of incidence to a thin material target (C) placed at the center O of the coordinate system), and $D$ in the transverse direction. After scattering, some particles arrive at detectors $D_i$ placed at equal distances from O and various angles $\theta_i$. The wavefunction is assumed to have axial symmetry (around the z-axis), thus the figure corresponds to any meridian plane. Various other symbols are explained in the text.} \label{setup} \end{figure} The structure of the paper is as follows: after the derivation of the basic wavefunction model in section 2, we pass to a study of the quantum trajectories in section 3. Here the emphasis is on the influence upon the trajectories of quantum vortices, whose appearance and role in this problem are explicitly discussed. In fact, we show that the quantum vortices appear in the transition zone from a domain of predominance of the ingoing wavefunction to a domain of predominance of the outgoing wavefunction. Inside this zone we can define a locus called {\it separator}, which plays a key role in the interpretation of the scattering process via the quantum trajectories. In section 4 we study the arrival times of diffracted particles at detectors placed at various scattering angles. A main outcome of this study is that it is possible to propose a feasible experimental test probing the predictions of the Bohmian theory about the particles' arrival times. In section 5 we discuss separately the `semi-classical' case of particles with a large mass and a small de Broglie wavelength, applicable e.g.
to $\alpha-$particle or ion scattering, since this case exhibits some special features in comparison to the case of electron diffraction. Finally, section 6 summarizes the main conclusions of the present study. \section{Modelling of the wavefunction} We consider a cylindrical beam of particles of mass $m$ and charge $Z_1q_e$ incident on a thin material target. We set the center of the target as the origin of our coordinate system of reference, and use both cylindrical coordinates $(z,R,\phi)$ and spherical coordinates $(r,\theta,\phi)$. The $z-$axis is the beam's main axis, $R$ denotes cylindrical radius transversally to $z$, $\phi$ is the azimuth, $r=(z^2+R^2)^{1/2}$ and $\theta=\tan^{-1}(R/z)$ (see Figure 1, schematic). A basic form of diffraction theory for charged particles, reviewed e.g. in \cite{peng2005}, assumes that the incident waves are planar. As explained in the introduction, here instead we are interested in a wavepacket approach. Focusing only on elastic scattering phenomena, the latter approach can be obtained by a refinement of the basic theory as follows: The potential felt by a charged particle approaching the target can be considered as the sum of the individual potential terms generated by every atom in the target: \begin{equation}\label{pot} V(\mathbf{r})=\sum_{j=1}^N U(\mathbf{r}-\mathbf{r}_j)~~. \end{equation} where $\mathbf{r}_j$ denotes the position of j-th atom in the lattice of the target (this position exhibits some statistical fluctuations due to thermal oscillations etc; the effect of these fluctuations is discussed later in this section). 
As a model for the function $U$, we can adopt a screened Coulomb potential \begin{equation}\label{potatom} U(\mathbf{r-r_j})={1\over 4\pi\epsilon_0} {Z_1Zq_e^2\exp(-|\mathbf{r-r_j}|/r_0)\over|\mathbf{r-r_j}|} \end{equation} ($\epsilon_0$ = vacuum dielectric constant), where $Z$ is the nuclear charge, and $r_0$ is a constant representing a charge screening range within the atoms, whose value is of the order of the atomic size.\\ Particles being scattered by the target can be described by a wavefunction given as a superposition of eigenfunctions \begin{equation}\label{psiall} \psi(\mathbf{r},t)={1\over (2\pi)^{3/2}} \int d^3\mathbf{k}~\tilde{c}(\mathbf{k})\phi_{\mathbf{k}} (\mathbf{r})e^{{-i\hbar k^2t/2m}} \end{equation} where $\tilde{c}(\mathbf{k})$ are Fourier coefficients, and $\phi_{\mathbf{k}}(\mathbf{r})$ are scattering eigenfunctions, i.e. solutions of the time-independent Schr\"{o}dinger equation \begin{equation}\label{sch1} -{\hbar^2\over 2m}\nabla^2 \phi + V(\mathbf{r})\phi = E\phi \end{equation} with $V$ chosen as in (\ref{pot}) and $E>0$. The different solutions $\phi\equiv\phi_\mathbf{k}$ are labeled by their wavevectors $\mathbf{k}$ of modulus $k\equiv \mid \mathbf{k}\mid=(2mE)^{1/2}/\hbar$, where $E>0$ is the energy associated with one eigenstate. Born's approximation can be used to obtain an approximate formula for $\phi_{\mathbf{k}}$. We thus write \begin{equation}\label{bornser} \phi_\mathbf{k}= \phi_{0,\mathbf{k}}+\phi_{1,\mathbf{k}}+\phi_{2,\mathbf{k}} +\ldots \end{equation} where $\phi_{0,\mathbf{k}}=e^{i\mathbf{k r}}=O(1)$ is the solution of Eq.(\ref{sch1}) for the free particle problem $( V(\mathbf{r})=0)$, while $\phi_{1,\mathbf{k}}=O(V)$, $\phi_{2,\mathbf{k}}=O(V^2)$ etc (assuming that $V$ is small compared to the particles' energies). The above series are meaningful at all points of space excluding a set of balls of radius a few times $r_0$ around every one of the atoms in the target. Spherical harmonic expansions (see e.g.
\cite{mes1961}) provide a more accurate representation of the solution inside such balls, but their use is cumbersome while practically unnecessary in the context of the present study. A step-by-step determination of the series terms in (\ref{bornser}) can be obtained via the recursive formula \begin{equation}\label{sch1ret} -{\hbar^2\over 2m}\nabla^2 \phi_{n,\mathbf{k}} + V\phi_{n-1,\mathbf{k}}= E\phi_{n,\mathbf{k}}={\hbar^2k^2\over 2m}\phi_{n,\mathbf{k}}~~. \end{equation} All essential phenomena discussed below are present already in the solutions including just the first two terms $\phi_{\mathbf{k}}\simeq \phi_{0,\mathbf{k}}+\phi_{1,\mathbf{k}}$. From Eq.(\ref{sch1ret}) for $n=1$ we find: \begin{equation}\label{phi1} \phi_{1,\mathbf{k}}(\mathbf{r})=-{m\over 2\pi\hbar^2} \int_{\mbox{all space}} d^3\mathbf{r'} {e^{ik|\mathbf{r-r'}|}\over|\mathbf{r-r'}|} \left(e^{i\mathbf{k\cdot r'}}\sum_{j=1}^N{1\over 4\pi\epsilon_0} {Z_1Zq_e^2e^{-|\mathbf{r'-r_j}|/r_0}\over|\mathbf{r'-r_j}|} \right)~~. \end{equation} The integral in (\ref{phi1}) can be estimated using standard approximations of scattering theory. We then find \begin{equation}\label{phiall} \phi_{\mathbf{k}}(\mathbf{r})\simeq e^{i\mathbf{k\cdot r}} -{Z_1Zq_e^2\over 4\pi\epsilon_0}{m\over \hbar^2} \left(\sum_{j=1}^N{e^{ik\mid\mathbf{r-r_j\mid}}e^{i\mathbf{k\cdot r_j}} \over \mid\mathbf{r-r_j\mid}(2k^2\sin^2(\Delta\theta_j/2)+1/2r_0^2)}\right) \end{equation} where $\Delta\theta_j$ denotes the angle between the vectors $\mathbf{k}$ and $\mathbf{r}-\mathbf{r_j}$.
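The estimate underlying Eq.(\ref{phiall}) is the standard Fourier transform of the screened Coulomb (Yukawa) factor, $\int d^3\mathbf{r'}\,e^{i\mathbf{q\cdot r'}}e^{-r'/r_0}/r'=4\pi/(q^2+1/r_0^2)$, which, with momentum transfer $q=2k\sin(\Delta\theta_j/2)$, reproduces the denominator $2k^2\sin^2(\Delta\theta_j/2)+1/2r_0^2=(q^2+1/r_0^2)/2$. The identity can be checked by a short numerical quadrature (a sketch in arbitrary units; the values of $q$ and $r_0$ are illustrative):

```python
import math

def yukawa_ft_numeric(q, r0, rmax=60.0, n=200000):
    """Numerical (4*pi/q) * int_0^rmax sin(q*r) exp(-r/r0) dr  (midpoint rule)."""
    h = rmax/n
    acc = 0.0
    for i in range(n):
        r = (i + 0.5)*h
        acc += math.sin(q*r)*math.exp(-r/r0)
    return 4.0*math.pi*acc*h/q

q, r0 = 2.0, 1.5                           # arbitrary illustrative values
exact = 4.0*math.pi/(q*q + 1.0/r0**2)      # analytic Yukawa transform
print(f"numeric: {yukawa_ft_numeric(q, r0):.6f}   exact: {exact:.6f}")
```

The agreement confirms the angular dependence of the scattered amplitude used throughout this section.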
Substituting Eq.(\ref{phiall}) into Eq.(\ref{psiall}) we have \begin{eqnarray}\label{psiall2} \psi(\mathbf{r},t)&\simeq&{1\over (2\pi)^{3/2}}\Bigg\{ \int d^3\mathbf{k}~\tilde{c}(\mathbf{k})e^{i\mathbf{k r}}e^{{-i\hbar k^2t/2m}}\\ &-&{Z_1Zq_e^2\over4\pi\epsilon_0}{m\over \hbar^2} \int d^3\mathbf{k}~\tilde{c}(\mathbf{k}) \left(\sum_{j=1}^N{e^{ik\mid\mathbf{r-r_j\mid}}e^{i\mathbf{k\cdot r_j}} \over \mid\mathbf{r-r_j\mid}(2k^2\sin^2(\Delta\theta_j/2)+1/2r_0^2)}\right) e^{{-i\hbar k^2t/2m}}\Bigg\}\nonumber \end{eqnarray} The problem of defining $\psi(\mathbf{r},t)$ is now restricted to making an appropriate choice for the coefficients $\tilde{c}(\mathbf{k})$. The latter are determined by the Fourier transform of the initial wavefunction $\psi(\mathbf{r},t=0)$. In the wavepacket approach, the initial wavefunction is localized around the source, i.e. far from the target. Hence we can set $\psi(\mathbf{r},t=0)\simeq\psi_{ingoing}(\mathbf{r},t=0)$, where $\psi_{ingoing}(\mathbf{r},t=0)$ represents a wavepacket moving in the z-direction towards the target with some velocity $v_0$. A Gaussian wavepacket of this form corresponds (in momentum space) to the choice \begin{equation}\label{psimom} \tilde{c}(\mathbf{k})={1\over \pi^{1/2}\sigma_\perp} {1\over \pi^{1/4}\sigma_\parallel^{1/2}} \exp\left(-{k_x^2+k_y^2\over 2\sigma_\perp^2}\right) \exp\left(-{(k_z-k_0)^2\over 2\sigma_\parallel^2}-ik_zz_0\right)~~. \end{equation} In (\ref{psimom}), $(k_x,k_y,k_z)$ are the Cartesian components of $\mathbf{k}$, $z_0=-l_0$ is the initial position of the center of the wavepacket along the z-axis, and $k_0=m v_0/\hbar$. The quantities $\sigma_\parallel$, $\sigma_\perp$ are the longitudinal and transverse dispersions of the wavepacket in momentum space. These correspond to dispersions in position space given by $l=\sigma_\parallel^{-1}$ and $D=\sigma_\perp^{-1}$. The quantities $l$ and $D$ are hereafter called the longitudinal and transverse quantum coherence length respectively.
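As a consistency check, the amplitude (\ref{psimom}) is normalized, $\int|\tilde{c}(\mathbf{k})|^2 d^3\mathbf{k}=1$. A crude midpoint-rule quadrature confirms this; the numerical values of $\sigma_\perp$, $\sigma_\parallel$, $k_0$ (those of the simulations presented later) are an assumption of this sketch:

```python
import math

def c_abs2(kx, ky, kz, sp, sl, k0):
    """|c(k)|^2 for the Gaussian amplitude of Eq. (psimom); the phase -k_z z_0 drops out."""
    return (1.0/(math.pi*sp*sp))*(1.0/(math.sqrt(math.pi)*sl)) \
        * math.exp(-(kx*kx + ky*ky)/sp**2 - (kz - k0)**2/sl**2)

# sigma_perp = 1/D, sigma_par = 1/l with D = 1000 nm, l = 10000 nm, k0 = 887.7 nm^-1
sp, sl, k0 = 1.0e-3, 1.0e-4, 887.7
n = 40
hp, hl = 12.0*sp/n, 12.0*sl/n            # midpoint rule over a box of +-6 sigma
norm = 0.0
for i in range(n):
    kx = -6.0*sp + (i + 0.5)*hp
    for j in range(n):
        ky = -6.0*sp + (j + 0.5)*hp
        for mm in range(n):
            kz = k0 - 6.0*sl + (mm + 0.5)*hl
            norm += c_abs2(kx, ky, kz, sp, sl, k0)*hp*hp*hl
print(f"int |c|^2 d^3k = {norm:.5f}")
```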
Eq.(\ref{psiall2}) now takes the form \begin{equation}\label{psiinout} \psi(\mathbf{r},t)=\psi_{ingoing}(\mathbf{r},t) +\psi_{outgoing}(\mathbf{r},t) \end{equation} where \begin{equation}\label{psiin} \psi_{ingoing}= B(t) \exp\left(-{R^2\over 2(D^2+{i\hbar t\over m})} -{(z+l_0-{\hbar k_0\over m}t)^2\over 2(l^2+{i\hbar t\over m})}+ik_0z\right) \end{equation} with $$ B(t)={1\over \pi^{3/4}}\left({D\over D^2+i\hbar t/m}\right) \left({l\over l^2+i\hbar t/m}\right)^{1/2} \exp\left(ik_0l_0-{i\hbar k_0^2\over 2m}t\right) ~~. $$ The function $\psi_{outgoing}$ corresponds to the second integral in (\ref{psiall2}). An explicit expression for this function can only be found by adopting some further approximations. First, we consider fast-moving wavepackets, for which $k_0\gg \max(\sigma_\perp,\sigma_\parallel)$ as well as $k_0\gg 1/r_0$. Then, in the denominator of the second integrand in (\ref{psiall2}): i) the term $1/2r_0^2$ can be ignored, and ii) we use the approximation $1/k^2\simeq 1/k_0^2$. Second, at all distances $r\gg r_j$ we have that the angles $\Delta\theta_j$ are approximately equal to one another and to the angle $\theta$ (which is equal to the angle between the vectors $\mathbf{r}$ and $\mathbf{k}_0=(0,0,k_0)$). Finally, we set $\mid\mathbf{r-r_j}\mid\approx r-\mathbf{r_j\cdot n}+ r_j^2/(2r)$ in the exponential argument of (\ref{psiall2}), where $\mathbf{n}=(\sin\theta\cos\phi,\sin\theta\sin\phi,\cos\theta)$ (this is necessary in order to retain all terms whose phase has a substantially non-zero value), while we set $\mid\mathbf{r-r_j}\mid\approx r$ in the denominator of the integrands in (\ref{psiall2}).
Using these approximations, we find \begin{eqnarray}\label{psiout} \psi_{outgoing}&\approx& B(t){Z_1Zq_e^2\over 4\pi\epsilon_0} {m\over \hbar^2}{1\over 2k_0^2\sin^2(\theta/2)}{\exp(ik_0r)\over r} \nonumber\\ &\times&\sum_{j=1}^N\Bigg[ \exp\left(ik_0[-\mathbf{r_j\cdot n}+z_j+r_j^2/(2r)]\right)\\ &~&~~~~~~~~\exp\left(-{R_j^2\over 2(D^2+{i\hbar t\over m})} -{(r+l_0-v_0t-\mathbf{r_j\cdot n}+z_j+{r_j^2\over 2r})^2 \over 2(l^2+{i\hbar t\over m})}\right)\Bigg]\nonumber \end{eqnarray} where $v_0=\hbar k_0/m$ represents the mean velocity of a particle with wavenumber $k_0$. At distances $r$ closer to the target than the maximum of the two coherence lengths $D,l$, the prefactor $f(r,\theta)=1/(2k_0^2\sin^2(\theta/2)r)$ in Eq.(\ref{psiout}) is no longer accurate. In order to be able to perform some numerical calculations of de Broglie - Bohm trajectories, after numerically simulating the sums appearing in (\ref{psiall2}) we found, by trial and error, a fitting model that represents reasonably well the modifications of $f(r,\theta)$ close to the target. This reads: \begin{equation}\label{fit} f(r,\theta)=k_0^{-2}\left[c_3D\sin\theta+(c_3^2D^2\sin^2\theta +r^2-2rc_4D\sin\theta+c_4^2D^2)^{1/2}-r\cos\theta\right]^{-1} \end{equation} where $c_3$ and $c_4$ are fitting constants determined by comparison of Eq.(\ref{fit}) to the results of the numerical simulation of $f(r,\theta)$ (in all simulations below we set $c_3=0.3$, $c_4=0.8$). It is to be stressed that Eq.(\ref{fit}) correctly recovers the asymptotic form $f\sim 1/(2k_0^2\sin^2(\theta/2)r)$ when $r$ is large. The outgoing wavefunction now takes the form \begin{equation}\label{psioutgen} \psi_{outgoing}\approx \ {B(t)Z_1Zq_e^2m\over 4\pi\epsilon_0\hbar^2} e^{ik_0r} f(r,\theta) S_{eff}(k_0,\mathbf{r},t) \end{equation} where the quantity $S_{eff}(k_0,\mathbf{r},t)$ is called hereafter the `effective Fraunhofer function' (in analogy with the `far field' diffraction limit in wave optics, see \cite{ers2007}).
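The asymptotic claim about Eq.(\ref{fit}) can be verified directly: for $r\gg D$ the square root behaves as $r-c_4D\sin\theta$, so the bracket tends to $r(1-\cos\theta)=2r\sin^2(\theta/2)$, and $f\rightarrow 1/(2k_0^2\sin^2(\theta/2)r)$ independently of $c_3$, $c_4$. A small numerical sketch (parameter values borrowed from the later simulations):

```python
import math

def f_fit(r, theta, k0, D, c3=0.3, c4=0.8):
    """Fitted prefactor of Eq. (fit)."""
    s = math.sin(theta)
    return 1.0/(k0**2*(c3*D*s + math.sqrt((c3*D*s)**2 + r*r - 2.0*r*c4*D*s
                                          + (c4*D)**2) - r*math.cos(theta)))

def f_far(r, theta, k0):
    """Far-field limit 1/(2 k0^2 sin^2(theta/2) r)."""
    return 1.0/(2.0*k0**2*math.sin(0.5*theta)**2*r)

k0, D = 887.7, 1000.0              # nm^-1, nm
theta = math.radians(54.0)
for r in (1.0e4, 1.0e6, 1.0e8):    # nm; the deviation shrinks as r grows
    rel = abs(f_fit(r, theta, k0, D)/f_far(r, theta, k0) - 1.0)
    print(f"r = {r:.0e} nm:  relative deviation = {rel:.2e}")
```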
This is given by \begin{eqnarray}\label{seff} S_{eff}(k_0,\mathbf{r},t)&=& \sum_{j=1}^N\Bigg[ \exp\left(ik_0(-\mathbf{r_j\cdot n}+z_j+r_j^2/(2r))\right)\\ &~&~~~~~~~~\exp\left(-{R_j^2\over 2(D^2+{i\hbar t\over m})} -{(r+l_0-v_0t-\mathbf{r_j\cdot n}+z_j+{r_j^2\over 2r})^2 \over 2(l^2+{i\hbar t\over m})}\right)\Bigg]~~.\nonumber \end{eqnarray} The physical significance of the function $S_{eff}(k_0,\mathbf{r},t)$ is that it sums the contributions of all the atoms in the target, which act as sources of partial outgoing waves whose superposition forms $\psi_{outgoing}$. Furthermore, the function $S_{eff}$ accounts for the formation of a diffraction pattern, which, for given $\mathbf{n}$, $r$, arises by the coherent contributions of all atoms in the target whose phasors $\exp[ik_0(-\mathbf{r_j\cdot n}+z_j+r_j^2/(2r))]$ are nearly parallel to one another. As in \cite{deletal2011}, we consider the simplest example of a cubic lattice structure of the target \begin{eqnarray}\label{lattice} &~&\mathbf{r}_j=(n_x,n_y,n_z)a + \Delta a\mathbf{u}_j(t), \nonumber\\ &~&\\ (n_x,n_y,n_z)&\in& (-{N_\perp\over 2},{N_\perp\over 2})\times (-{N_\perp\over 2},{N_\perp\over 2})\times (-{N_z\over 2},{N_z\over 2})\nonumber \end{eqnarray} where i) $a$ is the lattice constant (equal to the length of one side of the primitive cell), ii) $\Delta a$ is the amplitude of some random oscillations (due to thermal or recoil motions; $\Delta a$ is taken equal to a small fraction of $a$) and $\mathbf{u_j}\equiv(u_{j,x},u_{j,y},u_{j,z})$ are random variables with a uniform distribution in the interval $[-0.5,0.5]$ (the random oscillations introduce a so-called Debye-Waller effect, analyzed in \cite{deletal2011}; here, for simplicity, we ignore modifications of the wavefunction due to this effect).
iii) The number of atoms $N_z$ in the z-direction is $N_z=d/a$, where $d$ is the target thickness, and iv) the value of $N_\perp$ is of order $N_\perp=O(D/a)$, due to the Gaussian factor $\exp(-R_j^2/(2(D^2+i\hbar t/m)))$ in Eq.(\ref{seff}) which can be approximated by $\approx 1$ for all $|n_x|<N_\perp/2$ and $|n_y|<N_\perp/2$, and by $0$ for $|n_x|>N_\perp/2$ or $|n_y|>N_\perp/2$ (for typical magnitudes of $D$ the inequality $D^2>>\hbar t/m$ holds for all times $t$ of interest in our study, see below). We now distinguish the following cases: \subsection{$l>>D>>a$} When the longitudinal coherence length $l$ is larger than the transverse coherence length $D$, a simple modeling of the sum in Eq.(\ref{seff}) becomes possible at all distances $r>D$. Ignoring first the random fluctuations in (\ref{lattice}) (i.e. setting $\Delta a=0$), we have $r>>|-\mathbf{r_j\cdot n}+z_j+{r_j^2\over 2r}|$ whereby it follows that \begin{eqnarray}\label{sefflong} &~&S_{eff}(k_0,\mathbf{r},t)\approx \exp\left(-{(r+l_0-v_0t)^2\over 2(l^2+i\hbar t/m)}\right)\nonumber\\ &\times&\sum_{n_x=-N_\perp/2}^{N_\perp/2} \sum_{n_y=-N_\perp/2}^{N_\perp/2} \exp\left(ik_0[-an_x\sin\theta\cos\phi-an_y\sin\theta\sin\phi +{(n_x^2+n_y^2)a^2\over 2r}]\right)\\ &\times&\sum_{n_z=-N_z/2}^{N_z/2} \exp\left(ik_0[(1-\cos\theta)n_za+{n_z^2a^2\over 2r}]\right)\nonumber \end{eqnarray} For a random choice of $k_0,\theta,\phi$, the total number of contributing atoms in the sums of Eq.(\ref{sefflong}) is of the order $N\sim N_\perp^2N_z=D^2d/a^3$. Furthermore, the $N$ phasors have an effectively random phase. Thus, the total sum is of order $N^{1/2}$, and we are led to the simple estimate $$ S_{eff}\sim Dd^{1/2}/a^{3/2}\exp[-(r+l_0-v_0 t)^2/(2(l^2+i\hbar t/m))] $$ called, hereafter, the `diffuse term' of the effective Fraunhofer function.
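The $N^{1/2}$ scaling of a sum of $N$ phasors with effectively random phases, used in the above estimate, is easy to illustrate with a Monte Carlo sketch (the values of $N$ and the number of trials are arbitrary choices):

```python
import cmath, math, random

random.seed(12345)

def rms_phasor_sum(N, trials):
    """Root-mean-square modulus of a sum of N unit phasors with random phases."""
    total = 0.0
    for _ in range(trials):
        s = sum(cmath.exp(1j*random.uniform(0.0, 2.0*math.pi)) for _ in range(N))
        total += abs(s)**2
    return math.sqrt(total/trials)

N = 2000
rms = rms_phasor_sum(N, trials=400)
print(f"rms |sum of {N} phasors| = {rms:.1f},  sqrt(N) = {math.sqrt(N):.1f}")
```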
Using this term we have \begin{equation}\label{psioutl2} \psi_{outgoing}\simeq \ {B(t)Z_1Zq_e^2mDd^{1/2}\rho^{1/2}\over 4\pi\epsilon_0\hbar^2} \exp\left(-{(r+l_0-v_0 t)^2\over{2(l^2+{i\hbar t\over m})}}\right) f(r,\theta)e^{ik_0r} \end{equation} where $\rho=a^{-3}$ is the number density of the atoms in the target. The model (\ref{psioutl2}) is used in a number of numerical simulations below. It describes a {\it radial pulse} propagating outwards with speed $v_0$, which emerges from the center at the time $l_0/v_0$. It should be stressed, however, that Eq.(\ref{psioutl2}) requires modifications close to particular angles where the phasors in the sums of (\ref{sefflong}) are added coherently. This will be examined in subsection 3.3. Further modifications are required when $l$ and $D$ become comparable to or smaller than the inter-atomic distance $a$ in the target. This case is examined in section 5. \subsection{$D>>l>>a$} The modeling of $S_{eff}$ is a more subtle problem if the transverse coherence length is larger than the longitudinal coherence length. Without loss of generality, we can consider a fixed meridian plane defined e.g. by the angle $\phi=0$. We set $\xi=r+l_0-v_0t$ and we make the change of variables $u=-x\sin\theta+(x^2+y^2)/(2r)$, $R=(x^2+y^2)^{1/2}$. Considering now a random choice of $k_0,\theta$, and ignoring the quantity $i\hbar t/m$, for a given value of $\xi$ the sum over the variables $n_x,n_y$ in Eq.(\ref{seff}) can be approximated by a sum over a domain of values of $(n_x,n_y)$ such that $u(n_x,n_y)$ belongs to a ball of radius $l$ around $\xi$. The total number of contributing atoms is then of order $N\sim l^2d/a^3$.
If i) we consider the sum over the phasors as a sum of random numbers (yielding a total magnitude $\sim N^{1/2}$), and ii) we substitute the second exponential in (\ref{seff}) by a delta function around $\xi$, we find $S_{eff}\approx (ld^{1/2}/a^{3/2}) I$, where \begin{equation} I=\int\int J(u,R) e^{-R^2/2D^2}\delta(u+\xi)dudR \end{equation} and $J(u,R)$ is the determinant of the Jacobian matrix of the transformation $(x,y)\rightarrow (u,R)$. We find \begin{equation}\label{iseff} I=\int_{R_{min}}^{R_{max}} {e^{-R^2/2D^2}\over \sin\theta \sqrt{1-{1\over \sin^2\theta}\left({R\over 2r}+{\xi\over R}\right)^2}} dR \end{equation} where $R_{min}=|r(\sin\theta-\sqrt{\sin^2\theta-2\xi/r})|$, $R_{max}=r(\sin\theta+\sqrt{\sin^2\theta-2\xi/r})$. Eq.(\ref{iseff}) yields non-zero values of the integral $I$ {\it below} a cut-off radius $r<(v_0t-l_0)/(1-{1\over 2}\sin^2\theta)$. However, the asymptotic behavior of $I$ when $r$ is large is found by noticing that $R_{min}\approx \xi/\sin\theta$ and $R_{max}\rightarrow\infty$ in this limit. We then find \begin{equation}\label{isefflim} I\sim \exp\left(-{(r+l_0-v_0 t)^2\over 2\sin^2\theta D^2}\right) \end{equation} The essential point to retain is that the profile of the radial outgoing pulse in the case $D>>l$ is a Gaussian whose dispersion is of order $D$. Thus, the conclusion is that, in both cases $l>D$ or $D>l$, the outgoing wavefunction always has the form of a packet with dispersion $\sigma_r$ of the same order as the {\it largest} of the two quantum coherence lengths, i.e. $\sigma_r\sim\max(l,D)$. Furthermore, we notice that in the case $D>>l$ the dispersion $\sigma_r$ depends also on $\theta$. A detailed investigation of the Bohmian trajectories in this case is, however, not possible from a numerical point of view, because the asymptotic formulae (\ref{iseff}) and (\ref{isefflim}) are not valid at distances $r<D$, where the scattering effects take place.
Thus in the sequel we limit ourselves to a detailed investigation of the Bohmian trajectories in the case $l>>D$, while a qualitative discussion of the case $D>>l$ will be made in section 4, referring to the issue of the particles' arrival time distribution. \section{Quantum trajectories} We now discuss the main features of the de Broglie - Bohm quantum trajectories focusing on the case $l>>D$. \subsection{Separator and quantum vortices} The form of the trajectories can be found by carefully examining the structure of the quantum currents $\mathbf{j}=(\hbar/2mi) (\psi^*\nabla\psi-\psi\nabla\psi^*)$. The main remark is that, due to Eqs.(\ref{psiin}) and (\ref{psioutl2}), the ingoing wavefunction term (which has a Gaussian form both in the $R$ and $z$ directions) has a falling exponential profile at large distances from the center of the Gaussian, while the outgoing wavefunction has a more complex form falling asymptotically as a power-law $1/r$ due to the factor $f$. Thus, there is an inner domain of the quantum flow where $\psi_{ingoing}$ prevails, and an outer domain where $\psi_{outgoing}$ prevails. We call {\it separator} the boundary delimiting the two domains. Formally, the separator is defined as the (time-evolving) geometric locus where \begin{equation}\label{separcon} |\psi_{ingoing}| =|\psi_{outgoing}| \end{equation} In the case where the outgoing wavefunction is given by Eq.(\ref{psioutl2}), the condition (\ref{separcon}) takes the form: \begin{equation}\label{septimel} \exp\left(-{R^2\over 2D^2} -{(z+l_0-v_0t)^2\over 2l^2}\right)= {|Z_1Z|q_e^2m\over 4\pi\epsilon_0\hbar^2} {D\over a}\sqrt{d\over a}f(r,\theta) \exp\left(-{(r+l_0-v_0t)^2\over 2l^2}\right) \end{equation} where we use the approximations $D^2+i \hbar t/m\simeq D^2$ and $l^2+i \hbar t/m\simeq l^2$. We note that these approximations hold within a range of parameter values relevant to concrete experimental setups. 
For example, assuming that incident particles have velocities of order $v_0\sim 10^8$m/s, the time required to travel a distance of order $10^{-2}$ -- $10^{-1}$m (which is the typical size of an experimental setup) is of the order of $t=10^{-10}$ -- $10^{-9}$s. On the other hand, the typical coherence lengths in experiments e.g. for electrons are of order 1$\mu$m or larger. Hence, $\hbar t/m$ is much smaller than $D^2$ or $l^2$. \begin{figure} \caption{(a) The form of the separator at four different time snapshots $t_1=0$, $t_2=3l_0/(5v_0)$, $t_3=6l_0/(5v_0)$ and $t_4=9l_0/(5v_0)$ in the model where $\psi_{ingoing}$ is given by Eq.(\ref{psiin}), $\psi_{outgoing}$ is given by Eq.(\ref{psioutl2}), and the parameters are $Z_1=-1$, $m=m_e$, $k_0=8.877\times 10^2$nm$^{-1}$ (corresponding to electrons with energy $E=30$keV, or wavelength $\lambda_0=7\times 10^{-3}$nm), $D=1000$nm, $l=10000$nm (corresponding to transverse and longitudinal quantum coherence lengths 1$\mu$m and 10$\mu$m respectively), $l_0=3l$, $Z=79$ (gold), $d=420$nm, $a=0.257$nm. (b) The form of the quantum current flow at the snapshot $t=t_2$. } \label{septor} \end{figure} \begin{figure} \caption{Local form of the quantum flow at the `nodal point - X-point complex' (quantum vortex) around the nodal point (N) with coordinates $R=1934.42$nm, $z=137.178$nm in the model with parameters as in Fig.\ref{septor} at the time $t=l_0/v_0$. The thick solid curves show the unstable (U,U') and stable (S,S') asymptotic manifolds of the X-point (X) formed under the instantaneous portrait of the quantum flow. } \label{vortex} \end{figure} The time evolution of the separator in the plane $(R,z)$ depends now on the time evolution of the relative amplitude of the ingoing compared to the outgoing wave at any point of the configuration space.
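As an illustration of the separator's definition, condition (\ref{septimel}) can be solved numerically along a fixed ray $\theta$ by bisection in $r$. The sketch below works in nm, reuses the fitted prefactor (\ref{fit}), and exploits the fact that for electrons the constant $mq_e^2/(4\pi\epsilon_0\hbar^2)$ equals the inverse Bohr radius $1/a_0$; the parameter values are those of Fig.\ref{septor}:

```python
import math

# parameter values of the figure (all lengths in nm; assumptions of this sketch)
k0, D, l = 887.7, 1000.0, 10000.0
l0, Z, a, d = 3.0e4, 79.0, 0.257, 420.0
a0 = 0.0529   # Bohr radius in nm: for electrons m q_e^2/(4 pi eps_0 hbar^2) = 1/a0

def f_fit(r, th, c3=0.3, c4=0.8):
    """Fitted prefactor of Eq. (fit)."""
    s = math.sin(th)
    return 1.0/(k0**2*(c3*D*s + math.sqrt((c3*D*s)**2 + r*r - 2.0*r*c4*D*s
                                          + (c4*D)**2) - r*math.cos(th)))

def log_ratio(r, th, vt):
    """log|psi_ingoing| - log|psi_outgoing| on the ray theta, Eq. (septimel); vt = v0*t."""
    R, z = r*math.sin(th), r*math.cos(th)
    lhs = -R*R/(2.0*D*D) - (z + l0 - vt)**2/(2.0*l*l)
    rhs = math.log((Z/a0)*(D/a)*math.sqrt(d/a)*f_fit(r, th)) \
        - (r + l0 - vt)**2/(2.0*l*l)
    return lhs - rhs

def separator_radius(th, vt, r_lo=10.0, r_hi=1.0e5):
    """Bisection for the root of Eq. (septimel): ingoing dominates below, outgoing above."""
    for _ in range(100):
        r_mid = 0.5*(r_lo + r_hi)
        if log_ratio(r_mid, th, vt) > 0.0:
            r_lo = r_mid
        else:
            r_hi = r_mid
    return 0.5*(r_lo + r_hi)

rs = separator_radius(math.radians(90.0), vt=l0)  # t = l0/v0: packet center at the target
print(f"r_s(theta=90 deg, t=l0/v0) ~ {rs:.0f} nm")
```

At this snapshot the separator crosses the transverse direction at a radius of the order of the transverse coherence length $D$, in agreement with the scale of the figure.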
We note first that according to Eq.(\ref{psioutl2}), the outgoing wave corresponds to a wavepacket with dispersion $l$ which emerges from the center in the time interval $t_0<t<t_0'$, with $t_0=(l_0-l)/v_0$, $t_0'=(l_0+l)/v_0$, which is the interval during which the support of the ingoing wavepacket (moving from left to right in Figs.\ref{setup} and \ref{septor}) essentially overlaps with the spatial domain occupied by the atoms in the target (see \cite{mes1961} for an introductory description of this phenomenon in a simple Rutherford scattering case). As indicated by Eq.(\ref{psioutl2}), after its emergence the outgoing wave moves in all radial directions maintaining essentially its Gaussian profile, while its overall amplitude drops like $r^{-1}$. As the outgoing wave moves outwards, it first encounters the ingoing wavepacket at times close to $t_0$. In Fig.\ref{septor}a, this encounter results in a gradual approach of the separator towards the z-axis (the indicated times are $t_1=0$, $t_2=3l_0/(5v_0)<t_0$, $t_0<t_3=6l_0/(5v_0)<t_0'$). As, however, the ingoing packet moves from left to right in Fig.\ref{setup}, its center crosses the target at the time $t=l_0/v_0$. Afterwards, the ingoing wave emerges from the right side of the target, and its support lies nearly completely in the semi-plane $z>0$. At a still longer time ($t_4=9l_0/(5v_0)$), the center of the outgoing wavepacket has traveled a distance $\approx 2.5l$ away, and there is no longer any overlap between the ingoing and outgoing wavepackets. As observed in Fig.\ref{septor}a, a transition takes place at some time between $t_3$ and $t_4$, such that, before the transition, the separator is formed by a pair of open curves on either side of the axis $R=0$, while after the transition there is only one closed curve intersecting the axis $R=0$ twice, once at $z>0$ and once at $z<0$.
Taking into account the cylindrical symmetry around the z-axis, the form of the separator in space before the transition is a cylindrical-like surface of rotation, while after the transition it becomes a prolate spheroidal-like surface. In fact, this time-changing surface marks a sharp limit between the domains of prevalence of the axial ingoing flow and the radial outgoing flow, as shown in Fig.\ref{septor}b for the time $t=t_2$. If Eq.(\ref{septimel}) is supplemented by an equation for the phases of the ingoing and outgoing waves, which, for $Z_1<0$ takes the form \begin{equation}\label{phaseop} k_0R\tan(\theta/2)-\pi=2\bar{q}\pi~~~~~~~~\bar{q}\in{\cal Z}~~, \end{equation} a simultaneous solution of Eqs.(\ref{septimel}) and (\ref{phaseop}) defines the set of all points of the configuration space where the total wavefunction (Eq.(\ref{psiinout})) becomes equal to zero. Such points are called `nodal points'. Around the nodal points, the quantum flow forms {\it quantum vortices} (Figure \ref{vortex}). The local form of the quantum currents in a vortex domain is very different from the general flow shown in Fig.\ref{septor}. If we `freeze' the time $t$, the instantaneous pattern formed by the vector field of quantum probability current $\mathbf{j}$ corresponds to a characteristic structure called {\it quantum vortex}, or {\it nodal point - X-point complex} \cite{eftetal2007}\cite{coneft2008}\cite{eftetal2009}. That is, close to a nodal point we find a second critical point of the flow, where one has $\mathbf{j}=0$. This is called an `X-point', since it can be shown that it is always simply unstable, i.e. there are two real eigenvalues of the matrix of the linearized flow around X, which are one positive and one negative. Accordingly, there are two opposite branches of unstable (U,U') and stable (S,S') manifolds emanating from X. On the other hand, the nodal point can be an attractor, center, or repellor. 
This determines the local form of the invariant manifolds U and S. It has been established theoretically \cite{eftetal2007} that, except for a set of very small measure, most quantum trajectories {\it avoid} the nodal point, being instead scattered along the asymptotic directions of the manifolds of the X-point, leading to large distances from the nodal point - X-point complex. Furthermore, while, in general, the motion of the nodal point - X-point complexes introduces chaos (\cite{fri1997}\cite{wispuj2005} \cite{eftetal2007}\cite{eftetal2009}), in the present problem this effect is negligible because i) the speed of vortices is extremely small (of order $\sim\hbar/(k_0mD^2)<<v_0$), and ii) the quantum trajectories exhibit only a small number of encounters with nodal point - X-point complexes, as will be shown with numerical examples below. In conclusion, the effect of the nodal point - X-point complexes on the trajectories can be described as a scattering process without recurrences. For the model parameters used in Fig.\ref{vortex}, the size of the quantum vortex, estimated by the distance $R_X$ from the nodal point to the X-point, is of the order of $10^{-18}$m. The size of vortices in the present model is in fact time dependent. However, in the time interval $t_2\leq t\leq t_3$ when there is essential overlapping of the ingoing and outgoing wavefunction terms, $R_X$ is approximately constant and it is given by the same estimate as in the second of the equations (25) of ref.\cite{deletal2011}, namely \begin{equation}\label{rxest} R_X=O\left({1\over Dk_0^2}\right)~~. \end{equation} The above estimate is obtained by expanding the wavefunction around a nodal point up to terms of second degree in $(R-R_0)$ and $(z-z_0)$, where $(R_0,z_0)$ are the coordinates of the nodal point, and by applying general formulae derived in \cite{eftetal2009} regarding the dependence of $R_X$ on the coefficients of this local expansion. 
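For the parameters used here, the estimate (\ref{rxest}) indeed gives a sub-femtometer vortex size (a one-line check, in SI units):

```python
# Order-of-magnitude size of the nodal point - X-point complex, Eq. (rxest)
k0 = 887.7e9     # m^-1  (= 887.7 nm^-1, electrons of the numerical example)
D = 1.0e-6       # m     (transverse coherence length of 1 micron)
R_X = 1.0/(D*k0**2)
print(f"R_X ~ {R_X:.1e} m")   # of the order of 1e-18 m, as quoted in the text
```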
\subsection{Trajectories} \begin{figure} \caption{(a) A swarm of Bohmian trajectories in the same wavefunction model as in Fig.\ref{septor}. (b) The resulting radial distribution $P_{radial}(r;\theta_i)$ for sixteen different angles $\theta_i=5^\circ+(i-1)10^\circ$, $i=1,2,\ldots,16$. The near coincidence of all distributions $P_{radial}(r;\theta_i)$ with the theoretical profile corresponding to the outgoing wavefunction model (\ref{psioutl2}) indicates the degree of preservation of the continuity equation by the numerical trajectories of (a).} \label{orbitsl} \end{figure} The deflection of an orbit happens at the crossing of the separator, and it is due to the orbit necessarily following the flow imposed by the asymptotic manifolds of the X-points that exist along the separator. Figure \ref{orbitsl}a shows a swarm of $625$ quantum trajectories in the model with the same parameters as in Fig.\ref{septor}. The initial conditions are taken on a regular $25\times 25$ grid with $-l_0-2l\leq z\leq -l_0+2l$ (where $l_0=3l$) and $D/100\leq R\leq 4D$. An important test of the correctness of the numerical calculations is to check whether the probabilities associated with the chosen initial conditions of the quantum trajectories respect the continuity equation. To this end, setting $\psi\simeq\psi_{ingoing}$ at $t=0$, a volume of initial conditions $\Delta V_0=2\pi R_0 \Delta R_0 \Delta z_0$ centered around the point $(z_0, R_0)$ has an associated probability $\Delta P= \mid \psi_{ingoing}(z_0,R_0,t=0)\mid^2 \Delta V_0$. Let $(z_0,R_0)\rightarrow (r,\theta)$ be the mapping from initial conditions to a trajectory's coordinates at $t=2l_0/v_0$. We want to estimate the mapping of the probabilities $\Delta P$ from the volume $V_0$ to the image of this volume under the mapping $(z_0,R_0)\rightarrow (r,\theta)$.
This is obtained numerically, by quadratically interpolating first the functions $r(z_0,R_0)$ and $\theta(z_0,R_0)$ from the data available by the integration of the orbits with initial conditions on the grid of points described in the previous paragraph. The quadratic interpolation allows us to obtain local approximations to the functions $r(z_0,R_0)$ and $\theta(z_0,R_0)$ by formulae of the form $r=A_0+A_1(z-z_0)+A_2(R-R_0)+A_3(z-z_0)(R-R_0)$, $\theta=B_0+B_1(z-z_0)+B_2(R-R_0)+B_3(z-z_0)(R-R_0)$, where the coefficients $A_i, B_i$ change values at every grid point. These formulae, in turn, allow us to compute numerically the Jacobian determinant $J(r,\theta;z_0,R_0)= \Delta r \Delta \theta /\Delta R_0\Delta z_0$. Finally, we compute the probability function $P_{radial}(r,\theta)= {\cal N}(\theta)|\psi_{ingoing}(z_0,R_0,t=0)|^2R_0J^{-1}(r,\theta;z_0,R_0)$, where $z_0,R_0$ are functions of $(r,\theta)$ and ${\cal N}$ is a normalization constant. Figure \ref{orbitsl}b shows $P_{radial}$ as a function of $r$ for 16 different values of $\theta$ in the interval $5^\circ\leq\theta\leq 165^\circ$. The fact that all curves nearly coincide implies that the numerically computed Bohmian trajectories respect the continuity equation of the quantum flow. \begin{figure} \caption{The nearly straight gray zones in the bottom left side correspond to the loci (1 or 2) of initial conditions for which the corresponding de Broglie - Bohm trajectories end in the angular sectors (1) $\theta=54^\circ\pm 5^\circ$ and (2) $\theta=134^\circ\pm 5^\circ$ at the end of the numerical integration.
The black lines on top of the gray zones (1) and (2) correspond to the fitting by Eq.(\ref{inil3}).} \label{zonelong} \end{figure} In the analysis of the arrival times, or times of flight, in section 4, use is made of the following information: we seek to determine, as a function of the scattering angle $\theta$, the locus of all initial conditions on the $(z_0,R_0)$ plane, whereby the trajectories are eventually scattered close to the angle $\theta$. The determination of these loci follows by approximating all the quantum trajectories as {\it piecewise straight lines}. Namely, from Fig.\ref{orbitsl}, it is evident that any trajectory can be considered as a nearly perfect horizontal line up to the point (in space and time) where the trajectory encounters the separator. In fact, as we can see in Fig.\ref{septor}, the general motion of the separator itself, as $t$ increases, is downwards. That is, if we consider a ray from the center outwards along any fixed value of the angle $\theta$, the separator intersects this ray at a continually decreasing value of $r$, denoted by $r_s(t;\theta)$. Since all trajectories are horizontal before the encounter, we have that $R(t)=R_{(0)}$ for the $R$-coordinate of a trajectory with initial conditions $(z_{(0)},R_{(0)})$. Then, the encounter takes place at the time $t=t_{coll}$ when $R(t_{coll})=R_{(0)}=r_s(t_{coll};\theta)\sin\theta$. The last condition determines the time $t_{coll}$, which is given by \begin{equation}\label{tcoll1} t_{coll}={-z_{(0)}+R_{(0)}\cot\theta\over v_0} \end{equation} Substituting Eq.(\ref{tcoll1}) in the separator equation (\ref{septimel}), with $R=R_{(0)}$ we find: \begin{equation}\label{inil1} -{l^2g(\theta)\over 4D^2}R_{(0)}+{1\over g(\theta)}R_{(0)} +z_{(0)}+l_0= {l^2g(\theta)\over 2R_{(0)}} \ln\left({|C S_{eff}| g(\theta)\over 2k_0^2R_{(0)}}\right) \end{equation} where $g(\theta)={2\sin \theta/{(1-\cos\theta)}}$ and $C=mZ_1 Zq_e^2/(4\pi\epsilon_0 \hbar^2)$.
Eq.(\ref{inil1}) allows us to determine $R_{(0)}$ as a function of $z_{(0)}$. In the limit $l^2g^2(\theta)/(4D^2)>>1$, we find an approximate formula by replacing the r.h.s. of Eq.(\ref{inil1}) by a constant average value, i.e. \begin{equation}\label{inil3} R_{(0)}=R_c+{4D^2(z_{(0)}+l_0)\over l^2g(\theta)} \end{equation} where we take $R_c$ equal to the root for $R_{(0)}$ of Eq.(\ref{inil1}) when $z_{(0)}=z_c=-l_0$. Eq.(\ref{inil3}) is an analytical expression which gives the locus of initial conditions of trajectories that are scattered close to the angle $\theta$. Figure \ref{zonelong} shows lines of this form for two angles $\theta_1=54^\circ$, $\theta_2=134^\circ$, along with the loci, at $t=2l_0/v_0$, formed by the final points of the trajectories scattered in a bin around these two angles. \begin{figure} \caption{(a) Local deformation of the separator at the time $t=l_0/v_0$ after the inclusion of the Bragg angles $\theta_q$ (Eq.\ref{bragg}) in the effective Fraunhofer function (we mark the first three angles as $A,B,C$). (b) The deflection of the Bohmian trajectories at the channels of radial flow formed around the Bragg angles A,B (corresponding to $\theta_1=0.23...$ and $\theta_2=0.33...$ respectively). (c) The concentration of the scattered trajectories close to the Bragg angles on a larger scale. (d) Angular distribution corresponding to the numerical trajectories of (c). The dashed lines denote the exact positions of the Bragg angles. } \label{diforb} \end{figure} \subsection{Emergence of the diffraction pattern} As mentioned in subsection 2.1, Eq.(\ref{psioutl2}) provides an approximation to the outgoing wavefunction for nearly all sets of values $(k_0,\theta,\phi)$ except very close to combinations resulting in the appearance of a diffraction pattern. For specific values of $k_0$ this pattern can be non-axisymmetric. Here, however, we examine for simplicity only the appearance of Bragg angles for which the resulting diffraction pattern is axisymmetric.
This implies considering the double sum over $n_x$, $n_y$ in Eq.(\ref{sefflong}) as a sum of random phasors, while allowing for a coherent addition of the phasors in the second sum of (\ref{sefflong}). Eq.(\ref{seff}) takes the form \begin{equation}\label{seffdif} S_{eff}(k_0,\mathbf{r},t)\simeq (D/a)e^{-{(r+l_0-v_0t)^2\over 2(l^2+{i\hbar t\over m})}} \sum_{n_z=-N_z/2}^{N_z/2} e^{ik_0[(1-\cos\theta)n_za+n_z^2a^2/(2r)]}~~. \end{equation} In order to estimate the sum in the r.h.s. of (\ref{seffdif}), we first note that coherent contributions come only from atoms whose z-position satisfies the condition $k_0z_j^2/(2r)<1$. The coherent terms appear at the Bragg angles \begin{equation}\label{bragg} \sin^2(\theta_q/2)={q\pi\over k_0a},~~~q=1,2,\ldots,q_{max}~~. \end{equation} Expanding the terms in the phase of $S_{eff}$ depending on $\theta$ around one Bragg angle we find \begin{equation}\label{seffbragg} S_{eff}(k_0,\mathbf{r},t)\sim(D/a)\sum_{n_z=-n_{z_0}}^{n_{z_0}} e^{ik_0[\sin\theta_q(\theta-\theta_q)n_za +{n_z^2a^2\over 2r}]} \end{equation} where $n_{z_0}\sim[(1/a){({r\over k_0})^{1/2}}]$. Exploiting the foil's symmetry in the $z$ direction we approximate the sum in (\ref{seffbragg}) as \begin{equation}\label{sall} S_{eff}(k_0,\mathbf{r},t)\sim 2(D/a)(1/a)\left[\int_{0}^{u_{max}}du\, e^{ik_0u^2\over{2r}}-{1\over 2}k_0^2\sin^2\theta_q(\theta-\theta_q)^2\int_{0}^{u_{max}}du\, e^{ik_0u^2\over{2r}}u^2\right] \end{equation} where $u_{max}\sim(r/k_0)^{1/2}$. An explicit formula for the above integrals can be given in terms of error functions. However, a qualitative understanding of their behavior is offered by the approximation $e^{ik_0u^2\over{2r}}\simeq 1+{ik_0u^2\over{2r}}$, whereby it follows that $ \int_{0}^{u_{max}}du\, e^{ik_0u^2\over{2r}}\simeq {e^{ik_0u_{max}^2\over{2r}}(u_{max}-{{1\over{3}}{ik_0\over r}u_{max}^3})}$ and $\int_{0}^{u_{max}}du\, e^{ik_0u^2\over{2r}}u^2\simeq {{1\over{3}}u_{max}^3}e^{ik_0u_{max}^2\over{2r}}$.
Substituting the above expressions in (\ref{sall}) we find \begin{equation}\label{seffbragg2} S_{eff}(k_0,\mathbf{r},t)\sim{ 2(D/a)(1/a) e^{ik_0u_{max}^2\over{2r}}\left[u_{max}-{1\over3} {{ik_0\over r}}u_{max}^3-{1\over 6}k_0^2\sin^2\theta_q(\theta-\theta_q)^2u_{max}^3\right]}~~. \end{equation} Taking into account also the diffuse term, the final form of the outgoing wavefunction is \begin{equation}\label{psioutfinal} \psi_{outgoing}\simeq 2{B(t)Z_1Zq_e^2m\over 4\pi\epsilon_0\hbar^2}(D/a) e^{-{(r+l_0-v_0 t)^2\over{2(l^2+{i\hbar t\over m})}}} f(r,\theta) e^{ik_0r} \left[\sqrt{d\over a}+\sum_q U_q(r,\theta)e^{i\Phi_q(r,\theta)}\right] \end{equation} where the sum is considered with respect to all Bragg angles, while, sufficiently far from the target, the following estimates hold for the functions $U_q$ and $\Phi_q$: \begin{equation}\label{uq} U_q\sim \frac{2\sin\left[k_0r\sin(\theta_q)(\theta-\theta_q)/2\right]} {k_0a\sin(\theta_q)(\theta-\theta_q)} \end{equation} and \begin{equation}\label{phiq} \Phi_q\sim \tan^{-1} \left({1\over{-3+{1\over 2}rk_0\sin^2\theta_q(\theta-\theta_q)^2}}\right)~~. \end{equation} The last equation implies that at angular distances $|\theta-\theta_q|\sim \pi/(rk_0)^{1/2}$, the particles' Bohmian trajectories acquire a transverse velocity $v_t=(1/r)\partial\Phi_q/\partial\theta$ pointing towards the direction of the straight line with inclination equal to $\tan\theta_q$, while $v_t=0$ exactly at $\theta=\theta_q$. Furthermore, the presence of the coherent terms in $S_{eff}$ causes a local deformation of the separator around the Bragg angles, as shown in Fig.\ref{diforb}a. We note that the separator comes locally closer to the center at the directions corresponding to the Bragg angles, since the magnitude of $\psi_{outgoing}$ is locally enhanced due to the local peaks of the functions $U_q$. The effect of this deformation on the Bohmian trajectories is analogous to the one described in \cite{deletal2011}.
Namely, this deformation results in the formation of local {\it channels of radial flow}, whereby the Bohmian trajectories are preferentially scattered around the Bragg angles. An example of this concentration is shown in Fig.\ref{diforb}b. Clearly, the inclusion of the coherent terms causes a variation of the angular distribution of the Bohmian trajectories, by creating local maxima of the density around the Bragg angles ($\theta_1=0.23...$, $\theta_2=0.33...$ in Fig.\ref{diforb}b). Figure \ref{diforb}c shows this concentration on a larger scale, while Fig.\ref{diforb}d shows the angular distribution corresponding to the trajectories of Fig.\ref{diforb}c. This distribution exhibits clear peaks at all the angles $\theta=\theta_q$ (the first local maximum around $\theta=0.1$ is not due to a concentration at a Bragg angle, but is caused only by the trajectories moving nearly horizontally, i.e. within the support of the ingoing wavepacket). We note that plots of the quantum trajectories in a different scattering problem (atom-surface scattering), appearing in \cite{sanzetal2004a}\cite{sanzetal2004b}, show a qualitative picture similar to that of Fig.\ref{diforb}b, a fact which was identified in that case too as a dynamical effect of the quantum vortices. We may thus conjecture that the quantum vortices play an important role in a wide context of different quantum-mechanical diffraction problems. Finally, it should be stressed that the modification of the outgoing wavefunction according to Eq.(\ref{psioutfinal}) only influences the Bohmian velocity field in the transverse direction, while the radial flow of all Bohmian trajectories (as in Fig.\ref{diforb}) takes place at a constant speed $\hbar k_0/m$. Thus, the emergence of a diffraction pattern does not influence estimates of the times of arrival or the times of flight of the particles to detectors placed at the same distance from the center, independently of the angle $\theta$. This subject is discussed in section 4.
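For reference, the Bragg angles of Eq.(\ref{bragg}) are straightforward to evaluate numerically. The following sketch (Python; the product $k_0a=240$ is an assumed illustrative value, chosen only because it yields first angles close to those marked in Fig.\ref{diforb}) lists all angles $\theta_q$ up to $q_{max}$:

```python
import math

def bragg_angles(k0a):
    """All theta_q with sin^2(theta_q/2) = q*pi/(k0*a), q = 1..q_max (Eq. bragg).

    q_max is the largest integer with q*pi/(k0*a) <= 1, so that the arcsine
    argument stays in its domain."""
    q_max = int(k0a / math.pi)
    return [2.0 * math.asin(math.sqrt(q * math.pi / k0a)) for q in range(1, q_max + 1)]

angles = bragg_angles(240.0)   # illustrative value of k0*a (dimensionless)
# the first two angles come out near 0.229 and 0.325 rad,
# close to theta_1 = 0.23... and theta_2 = 0.33... of Fig. diforb
```

The angles increase monotonically with $q$ and accumulate towards backscattering as $q\pi/(k_0a)\to 1$.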
\section{Arrival times and times of flight} An important practical utility of the quantum trajectory approach regards the possibility to unambiguously determine the probability distributions of the so-called {\it arrival times}, or of the {\it times of flight}, of the scattered particles. This question is of particular interest, because it is related to the well-known `problem of time' in quantum theory (see \cite{muglea2000,mugetal2002} for reviews). This problem stems from a theorem of Pauli \cite{pau1926}, according to which it is not possible to properly define a self-adjoint time operator consistent with all axioms of quantum mechanics. This implies that the usual (Copenhagen) formalism based on state vectors or density matrices is not applicable to a quantum-theoretical calculation of probabilities related to time observables. In fact, in both Schr\"{o}dinger's and Heisenberg's pictures, time is considered only as a parameter of the quantum equations of motion. Among the various proposals in the literature aiming to remedy this gap of standard quantum theory (see \cite{muglea2000}), the Bohmian formalism offers a straightforward solution. This is in principle subject to experimental testing, as will be proposed below. Furthermore, as was mentioned in the introduction, the use of a wavepacket approach allows for a comparison of the Bohmian approach with two other main approaches to the same subject, namely the `history approach' (based on Feynman paths) \cite{hartle1988}\cite{yamtak1993}, and the Kijowski approach, based on the so-called `Aharonov--Bohm' operators \cite{kij1974}. We should emphasize, however, that so far in the literature the latter approaches have been given a consistent formulation only in the case of asymptotically free wavepacket motion \cite{muglea2000}, while their implementation in the case of scattered wavepackets is an open issue. This will be discussed in a future work.
In the present paper, on the other hand, we focus on results regarding the time observables as defined in the Bohmian approach, and only provide a rough comparison with what should be expected from other theories of time observables. Any definition of a time observable in quantum mechanics (and also in classical physics) requires the occurrence of two events serving as the `start' and the `stop' events in the process of timing. In the definition of the {\it arrival times} of particles to detectors, we take as start event the preparation of the whole initial wavefunction at a certain moment $t=t_1$, which can be conveniently set equal to $t_1=0$. The stop event is the detection of the particle at a time $t=t_2>t_1$. The arrival time of the particle to the detector is $t_{arrival}=t_2-t_1$. The above definition of arrival times is independent of the adopted picture of quantum mechanics, since its only requirement is that the preparation of an initial quantum state be controllable in time (see \cite{bal2008}), i.e. that replicas of the same state can be prepared at any given time $t_1$ (a technique realizing this experimentally is proposed below). However, a consistent calculation of the arrival time probabilities in the various pictures is still an open theoretical issue (see \cite{muglea2000}). On the other hand, the definition of a {\it time of flight} for a particle depends on the adopted picture of quantum mechanics, since it requires the use of some notion of {\it spacetime paths} that the particles presumably follow within the picture's framework. In fact, the time of flight is defined as the time elapsing between the crossings by the particle of two surfaces $S_1$ and $S_2$ in configuration space. Thus, this time is different from the arrival time, and the difference depends on where exactly the particle lies within the support of the initial wavefunction.
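The distinction can be illustrated with a toy model of the piecewise-straight trajectories of section 3: particles start at $t_1=0$ from positions $z_0$ distributed within the initial packet, move at the constant speed $v_0$, and are detected at a fixed radius. The sketch below (Python; all parameter values are arbitrary illustrative choices) shows that the arrival times then inherit the spread $\sim l/v_0$ of the packet, while the time of flight between two fixed surfaces is the same for every trajectory:

```python
import random
import statistics

random.seed(1)
v0, l, l0, lD = 1.0, 0.5, 10.0, 30.0   # speed, longitudinal spread, offsets (illustrative)

# start positions z0 ~ Gaussian packet centered at z = -l0 with dispersion l
z0s = [random.gauss(-l0, l) for _ in range(20000)]

# arrival time at a detector at radius lD (the preparation of the state marks t1 = 0);
# for piecewise-straight motion the path length from z0 to the detector is lD - z0
t_arrival = [(lD - z) / v0 for z in z0s]

# time of flight between the fixed surfaces S1 (z = -l0) and S2 (r = l0):
# path length 2*l0 regardless of z0, hence identical for all particles
t_flight = [2.0 * l0 / v0 for _ in z0s]

mean_t = statistics.mean(t_arrival)
std_t = statistics.pstdev(t_arrival)
# mean_t ~ (lD + l0)/v0 and std_t ~ l/v0, while t_flight has zero spread
```

The zero spread of `t_flight` is of course an artifact of the constant-speed toy model; the actual Bohmian times of flight acquire the $\theta$-dependence computed in subsection 4.2.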
In the case of particle diffraction, it is convenient to choose the two surfaces as shown in Fig.\ref{setup}: $S_1$ is taken normal to the z-axis at a point $z=-l_0$, while $S_2$ is a spherical surface surrounding the target at the radial distance $r_2=l_0$. The time of flight $T(\theta)$ depends on the scattering angle $\theta$, and the quantity of interest is the difference $T(\theta_2)-T(\theta_1)$ for two angles $\theta_1$, $\theta_2$. This difference is independent of $l_0$, provided that $l_0$ is such that both $S_1$ and $S_2$ are sufficiently far from the target. We now discuss separately the arrival time and the time-of-flight probabilities in the setup of Fig.\ref{setup}. \subsection{Arrival times} The motion of all scattered particles (as in Fig.\ref{orbitsl}) in the domain beyond a sphere of radius $2l_0$ around the center can be considered as a radially outward motion with constant speed, since, by taking $\psi\simeq\psi_{outgoing}$ (where $\psi_{outgoing}$ is given by Eq.(\ref{psiout})), the Bohmian equations of motion in spherical coordinates read $dr/dt=v_0=\hbar k_0/m$, $d\theta/dt=0$. The last equation is modified when the diffraction terms in $S_{eff}$ are taken into account. This modification, however, does not influence the motions in the radial direction, which, as shown below, are the only ones affecting the distribution of arrival times at a detector placed at a distance $l_D$ and at any fixed angle $\theta$. By definition, the latter distribution is given by \begin{equation}\label{arrtimedis} P_{arrival}(t)={\Delta N_{\theta,l_D}(t)\over\Delta t} \end{equation} where $\Delta N_{\theta,l_D}(t)$ denotes the number of particles within the (assumed fixed) detector conic aperture $d\Omega_D=\sin\theta\Delta\theta_D\Delta\phi_D$ around the angle $\theta$ arriving at the detector between the times $t$ and $t+\Delta t$.
Since in the Bohmian approach all the particles move with constant speed, we have $$ {\Delta N_{\theta,l_D}(t)\over\Delta t} = {\Delta N_{\theta,l_D}(t)\over\Delta r} {\Delta r\over\Delta t}=l_D^2d\Omega_D\rho v_0= l_D^2d\Omega_D|\psi_{out}|^2{\hbar k_0\over m}~~. $$ In the case $l>>D>>a$, substituting Eq.(\ref{psioutl2}) and making the usual approximations $l^2>>\hbar t/m$, $D^2>>\hbar t/m$ results in \begin{equation}\label{arrtimedisl} P_{arrival}(t)\simeq P_0 e^{-(l_D+l_0-v_0t)^2\over l^2} \end{equation} where $P_0$ is a normalization constant. On the other hand, in the case $D>>l>>a$ (via Eq.(\ref{isefflim})) we find \begin{equation}\label{arrtimediss} P_{arrival}(t,\theta)= P_0' e^{-(l_D+l_0-v_0t)^2\over \sin^2\theta D^2}~~. \end{equation} The main result can be summarized as follows: in either case $l>>D$ or $D>>l$, the arrival time distribution is a localized distribution (Gaussian, around the mean time $(l_D+l_0)/v_0$), whose dispersion is always of the order of {\it $v_0^{-1}$ $\times$ the maximum of the transverse and longitudinal coherence lengths}. The latter property implies that the trajectory approach makes predictions regarding the arrival time distribution which depend on two main beam parameters, and which are thus testable in principle by concrete experimental setups. One possible proposal in this direction is the use of the so-called {\it laser-induced cold-field emission technique} (see \cite{baretal2007}). In this technique, a cold-field electron source (nanotip) is exposed to well separated in time, focused, weak laser pulses of time width $\sim 10$--$100$fs. The photo-emitted electrons are accelerated towards the anode, whereby their initial state can be effectively described by an ingoing wavefunction of the form (\ref{psiin}). The scattered particles pass through detectors placed at fixed angles $\theta$ as indicated in Fig.\ref{setup}.
The key point to notice is that time measurements in such a setup conform with the definition of the {\it arrival times}, since a detection of the triggering laser pulse can serve as a start event marking the initial time when the {\it whole state} $\psi_{in}$ was prepared, while a later detection of a scattered particle serves as the stop time $t_2$. We propose that by monitoring the electron beam one can achieve different values of $l$ and $D$, thus probing quantitatively the predictions for $P_{arrival}(t,\theta)$ as given by the quantum trajectory approach. \subsection{Times-of-flight} The total time of flight from a point on $S_1$ to a point on $S_2$ is a function of the initial conditions $(z_0,R_0)$. Using the information from the swarm of numerical Bohmian trajectories of Fig.\ref{orbitsl}, this function can also be quadratically interpolated from the numerical data on grid points. The mean time $T(\theta)$ for all initial conditions leading to the same $\theta$ is then found numerically. Since the choice of the surfaces $S_1$ and $S_2$ in Fig.\ref{setup} is arbitrary, the invariant quantity of interest is the difference $T(\theta)-T(\theta_0)$, where $\theta_0$ is a fixed reference angle. Figure \ref{timeavanglel} shows this difference for the numerical trajectories of Fig.\ref{orbitsl}a. To estimate $T(\theta)$ theoretically, we make use of Eq.(\ref{inil3}), yielding the locus ${\cal L}(\theta)$ of all initial conditions leading to a scattering close to the angle $\theta$. Using the separator equation (Eq.\ref{septimel}), we also find the point $(z_s,R_s)$ where the moving separator encounters an orbit moving horizontally from $(z_{{\cal L}(\theta)}(R),R)$ at $t=0$, with speed $v_0$. Thus, we set $R=R_s$ and $z_s=R_s/\tan(\theta)$. The time of flight of this trajectory from $S_1$ to $S_2$ is then $t(z_{{\cal L}(\theta)}(R),R)=(z_s+2l_0-l_1-R/\sin\theta)/v_0$.
The mean time of flight $T(\theta)$ can then be approximated by $T(\theta)\approx \int_{{\cal L}(\theta)} 2\pi R|\psi_{in}(z_{{\cal L}(\theta)}(R),R,t=0)|^2 t(z_{{\cal L}(\theta)}(R),R)dR$. We thus find: \begin{eqnarray}\label{dtth2} \Delta T=T(\theta_1)-T(\theta_2)&\approx &{DR_0\over v_0} \left[\tan(\theta_2/2)- \tan(\theta_1/2)\right]~~~ \end{eqnarray} where $R_0=[\sqrt{2\ln(C_{0})}+1/(1+\sqrt{2\ln(C_{0})})]$, with $C_{0}=8\pi\epsilon_0k_0^2\hbar^2 /(|Z_1Z|q_e^2m\rho^{1/2}d^{1/2})$. \begin{figure} \caption{The time difference $T(\theta)-T(150^\circ)$ (see text) for the Bohmian trajectories of Fig.\ref{orbitsl}. The smooth solid curve is the theoretical prediction of Eq.(\ref{dtth2}), while the dots represent numerical results.} \label{timeavanglel} \end{figure} Due to Eq.(\ref{dtth2}), the time difference $T(\theta_1)-T(\theta_2)$ has an $O(D\tan(\theta/2)/v_0)$ dependence on $\theta$. In fact, it is noticeable that Eq.(\ref{dtth2}), which gives the difference of the mean times of flight in the wavepacket approach, turns out to be identical to the estimate of \cite{deletal2011} (their Eq.(26)), referring to the plane wave approximation. This is expected, since the plane wave approximation can be considered as a limiting case of the wavepacket approach with $l>>D$, corresponding to the limit $l\rightarrow\infty$. One more interesting remark concerning the times of flight found by the Bohmian approach is that the difference $T(\theta_1)-T(\theta_2)$ predicted by Eq.(\ref{dtth2}) has a completely different behavior from the analogous quantities calculated in the framework of other theories of quantum time observables. While we defer a detailed treatment of this problem to a future work, here we give some rough estimates concerning the sum-over-histories and the Kijowski approaches referred to in the introduction.
We find \begin{eqnarray}\label{his} \Delta T= T(\theta_1)-T(\theta_2)\approx {-Z Z_1q_e^2\over 2\pi\epsilon_0m v_0^3} \ln\left(\sqrt{1+\cot^2(\theta_1/2)\over 1+\cot^2(\theta_2/2)}\right)\\ ~~~~~\mbox{(in the sum-over-histories formalism, semiclassical approximation)}\nonumber \end{eqnarray} and \begin{equation}\label{kij} \Delta T=T(\theta_1)-T(\theta_2)=0~~~~~\mbox{(in the Kijowski formalism)}~~. \end{equation} An outline of the derivation of these formulae is given in Appendix I. A comparison of all three approaches shows that: (i) the sum-over-histories approach (which, using a semiclassical approximation, yields essentially the same result as classical scattering theory) predicts a mean time difference depending on the particle velocity $v_0$ by the scaling law $\Delta T\sim [m_e/m][10^8\mbox{m sec}^{-1}/v_0]^3\cdot 10^{-19}$sec. (ii) The Kijowski formalism predicts no time difference. (iii) The Bohmian formalism predicts a time difference depending on $v_0$ as well as on the transverse quantum coherence length $D$, with the scaling $\Delta T\sim [D/1\mu\mbox{m}] [10^8\mbox{m sec}^{-1}/v_0]\cdot 10^{-13}$sec. As a final conclusion, we propose that experiments aiming to measure time observables in setups of particle diffraction may provide new insight into fundamental problems such as the role of time in quantum mechanics. In particular, the predictions of the de Broglie - Bohm theory are within the reach of present-day experimental techniques. \section{Semiclassical limit (Rutherford scattering)} So far we have considered charged particles with quantum coherence lengths much larger than the distance between nearest neighbors in the target. However, this study does not cover the so-called short-wavelength limit, as e.g. in the case of $\alpha$-particle or ion scattering. In this case, the quantum coherence length becomes comparable to or smaller than the distance between nearest neighbors in the target.
As a result, such particles `see' each of the atoms in the target as an individual scattering center, and they do not interact with the target lattice as a whole. Furthermore, both $D$ and $l$ become comparable to a classical `impact parameter' $b$ (of the order of a few fermi), which is relevant to the classical description of Rutherford scattering. The incorporation of $b$ in the wavefunction model can be done essentially as described in \cite{mes1961}: assuming a Gaussian form for the wavepacket (denoted by $\chi$ in that reference), and aligning the vector $\mathbf{b}$ along the x-axis of our coordinate system, we have: \begin{equation} \psi(\mathbf{r},t)={1\over (2\pi)^{3/2}} \int d^3\mathbf{k}~\tilde{c}(\mathbf{k})\left(e^{i\mathbf{k}\cdot\mathbf{r}}+f_k(\theta){e^{ikr}\over r}\right)e^{-i\hbar k^2t/2m} \end{equation} with \begin{equation}\label{psimom2} \tilde{c}(\mathbf{k})={1\over \pi^{1/2}\sigma_\perp} e^{(-{k_x^2+k_y^2\over 2\sigma_\perp^2})}{1\over \pi^{1/4}\sigma_\parallel^{1/2}} e^{{-(k_z-k_0)^2}\over{2\sigma_\parallel^2}}e^{-ik_xb} \end{equation} and \begin{equation}\label{fktheta} f_k(\theta)=-{Z_1Zq_e^2\over 4\pi\epsilon_0}{m\over \hbar^2}{1\over{2\sin^2({\theta\over 2})k^2}}~~. \end{equation} After the calculation of the above Gaussian integrals we are led to the following wavefunction model: \begin{equation} \psi(\mathbf{r},t)= \psi_{ingoing}(\mathbf{r},t)+\psi_{outgoing}(\mathbf{r},t) \end{equation} where \begin{equation}\label{psiingse} \psi_{ingoing}(\mathbf{r},t)= A\exp\left(-{(x-b)^2+y^2 +(z-v_0 t)^2\over 2(D^2+i\hbar t/m)} +i(k_0 z - \hbar k_0^2 t/2m)\right) \end{equation} \begin{eqnarray}\label{psioutse} \psi_{outgoing}(\mathbf{r},t)&=& -A\left({Z_1 Zq_e^2\over 4\pi\epsilon_0}\right) \left({m\over 2\hbar^2k_0^2\sin^2(\theta/2)r}\right)\\ &\times&\exp\left(-{(r-v_0 t)^2+b^2\over 2(D^2+i\hbar t/m)} +i(k_0 r - \hbar k_0^2 t/2m)\right)\nonumber~~.
\end{eqnarray} where \begin{equation} A={D\over{\pi^{1/2}}}\left({1\over{D^2+{{i\hbar t}\over{m}}}}\right){l^{1/2}\over{\pi^{1/4}}}\left({l\over{l^2+{{i\hbar t}\over{m}}}}\right)^{1/2} \end{equation} is a nearly constant quantity, not affecting the Bohmian trajectories, and $v_0=\hbar k_0/m$. The time $t=0$ in the above formulae is taken so that, in the absence of scattering, the center of the ingoing wavepacket crosses the plane $z=0$ at the moment $t=0$. Furthermore, we consider the Bohmian trajectories at positive or negative times satisfying $|t|<mD^2/\hbar$, i.e. smaller than the decoherence time of the packet. The outgoing term is modulated by the Gaussian factor $$ \exp\left(-{(r-v_0 t)^2+b^2\over 2(D^2+i\hbar t/m)}\right)~~. $$ This factor implies that a replica of the ingoing wavepacket propagates from the center outwards as a spherical wavefront of the outgoing wave, albeit with a phase difference $iv_0 t$ with respect to the ingoing wavepacket. This factor is the most important one for the analysis of the Bohmian trajectories, because it implies that the form of the latter depends crucially {\it on the choice of the value of the parameter $b$}, which actually changes the form of the wavefunction. \begin{figure} \caption{Three Bohmian trajectories guided by the wavefunction defined in Eqs.(\ref{psiingse}) and (\ref{psioutse}), with $D=l=10$fermi, $Z_1=2$, $m=7.1\times 10^3m_e$ (alpha particle), $Z=79$ (gold), $k_0=10^{14}\mbox{m}^{-1}$, and three different impact parameters $b_1=10$fm, $b_2=12$fm, $b_3=15$fm. All three trajectories are initially placed at the centers of their corresponding guiding wavepackets.} \label{ruth} \end{figure} A careful inspection of Eqs.(\ref{psiingse}) and (\ref{psioutse}) shows that the spherical wavefront emanating from $r=0$ encounters the ingoing wavepacket at a time $t_c$ which {\it decreases as $b$ decreases}.
As a result, the Bohmian trajectories, which are forced to follow the motion of the radial wavefront after the collision, are scattered to angles which are on the average larger for smaller $b$. Thus, the Bohmian trajectories recover on the average the behavior of the classical Rutherford trajectories. An example of this behavior is given in Fig.\ref{ruth}, showing three Bohmian trajectories corresponding to initial conditions taken at the centers of the wavepackets defined by Eq.(\ref{psiingse}) at the time $t=-D^2m/\hbar$, ensuring that the packet's spreading does not change appreciably in time up to the moment of collision with the outgoing wavefront. The three Bohmian trajectories {\it cross each other}, yielding a larger scattering angle for a smaller initial distance from the z-axis, i.e. they are close to the familiar classical picture. It should be noted, however, that this closeness holds only in an average sense, since the exact form of a Bohmian trajectory guided by a wavepacket depends on where exactly the initial condition of the trajectory lies with respect to the center of mass of the initial packet. In fact, for {\it one fixed value of $b$} one obtains a swarm of de Broglie - Bohm trajectories (with initial conditions around this value of $b$). These trajectories are scattered in various directions and they do not cross each other, while they define (in a statistical sense) a most probable scattering angle. This angle, however, increases as $b$ decreases. Hence, we conclude that when we consider the `semiclassical limit' of small wavelengths as well as small quantum coherence lengths, the Bohmian trajectories yield results which agree on the average with the classical theory of Rutherford scattering.
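For comparison, the classical Rutherford relation $\tan(\theta/2)=Z_1Zq_e^2/(4\pi\epsilon_0\,mv_0^2\,b)$ provides the benchmark against which the above average Bohmian behavior can be checked. The sketch below (Python) evaluates it for an $\alpha$-particle on gold, at an assumed illustrative kinetic energy of 5 MeV, for the impact parameters of Fig.\ref{ruth}:

```python
import math

e = 1.602176634e-19       # elementary charge (C)
eps0 = 8.8541878128e-12   # vacuum permittivity (F/m)

def rutherford_angle(b, Z1, Z, E_kin):
    """Classical scattering angle from tan(theta/2) = Z1*Z*e^2 / (4*pi*eps0 * m*v0^2 * b),
    using m*v0^2 = 2*E_kin (nonrelativistic)."""
    k = Z1 * Z * e**2 / (4.0 * math.pi * eps0)   # Coulomb constant Z1*Z*e^2/(4*pi*eps0)
    return 2.0 * math.atan(k / (2.0 * E_kin * b))

E = 5.0e6 * e                        # assumed kinetic energy: 5 MeV in joules
bs = [10e-15, 12e-15, 15e-15]        # impact parameters of Fig. ruth (m)
thetas = [rutherford_angle(b, Z1=2, Z=79, E_kin=E) for b in bs]
# the scattering angle decreases monotonically as b increases
```

A swarm of Bohmian trajectories at fixed $b$ defines only a most probable angle, so the comparison with these classical values is meaningful in the statistical sense described above.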
\section{Conclusions} We developed a wavefunction model providing a wavepacket approach to the phenomenon of charged particle diffraction from thin material targets, and we employed the method of the de Broglie - Bohm quantum trajectories in order to interpret the emergence of diffraction patterns, as well as to calculate arrival time probabilities for scattered particles detected at various scattering angles $\theta$. Our main conclusions are the following: 1) In both cases, when the longitudinal wavepacket coherence length $l$ is larger than the transverse wavepacket coherence length $D$ or vice versa, the outgoing wavepacket has the form of a pulse propagating outwards in all possible radial directions, with a dispersion $\sigma_r$ which is of the order of the maximum of $l$ and $D$. Furthermore, in the case $D>>l$ (applying e.g. to cold-field emitted electrons), $\sigma_r$ depends on $\theta$ as $\sigma_r\sim \sin\theta D$. 2) We study the structure of the quantum currents in the above model. We provide theoretical estimates regarding the form and time evolution of a locus called the {\it separator}, i.e. the border between the domains of prevalence of the ingoing and outgoing quantum flow. We show how the separator forms channels of radial flow close to every Bragg angle, leading to a concentration of the quantum trajectories along particular directions, which gives rise to a diffraction pattern. 3) The deflection of the quantum trajectories is due to their interaction with an array of {\it quantum vortices} formed around a large number of nodal points located on the separator. We show examples of the quantum flow structure forming a `nodal point - X-point complex' around any nodal point, and we calculate the form of the stable and unstable manifolds yielding the local directions of approach to or recession from an X-point.
In view of the similar role played by quantum vortices in different examples of diffraction problems \cite{sanzetal2004a} \cite{sanzetal2004b}, it can be anticipated that the mechanism of emergence of the diffraction pattern described in section 3 is quite general. 4) We compute arrival time probability distributions for both cases $l>>D$ and $D>>l$, using the de Broglie - Bohm trajectories of particles detected at a fixed distance and various scattering angles with respect to the target. In all cases, the dispersion of the arrival time distribution turns out to be $\sigma_t\sim v_0^{-1}\times\max(l,D)$, where $v_0$ is the mean particle velocity. We propose a realistic experimental setup aiming to test this prediction for electrons. We also calculate time-of-flight differences, where the time of flight is defined as the time interval separating the crossings by a de Broglie - Bohm particle of two fixed surfaces located in the directions of asymptotically free motion before and after the target. We discuss the ambiguity of the definition of the times of flight when using approaches other than the de Broglie - Bohm one, and provide a rough calculation in the framework of the sum-over-histories approach and the Kijowski approach. 5) We finally examine how the de Broglie - Bohm trajectories recover (in a statistical sense) the semiclassical limit of Rutherford scattering, by examining the form of the quantum trajectories when the packet's mean wavelength, as well as $l$ and $D$, become smaller than the inter-atomic distance in the target. In particular, we incorporate an impact parameter $b$ in the wavefunction model and demonstrate that the de Broglie - Bohm trajectories are scattered on the average at larger angles $\theta$ as $b$ decreases. \\ \\ \noindent {\bf Acknowledgments:} C. Delis was supported by the State Scholarship Foundation of Greece (IKY) and by the Hellenic Center of Metals Research. C.E.
has worked in the framework of the COST Action MP1006 - Fundamental Problems of Quantum Physics. He also acknowledges suggestions by H. Batelaan regarding the possibility of using the laser-induced field emission technique for arrival time measurements in the quantum regime. \appendix \section{Time-of-Flight differences outside the Bohmian formalism} We hereby outline the derivation of Eqs.(\ref{his}) and (\ref{kij}), estimating the time-of-flight differences $T(\theta_2)-T(\theta_1)$ by the sum-over-histories and the Kijowski approach, respectively. {\it i) Sum-over-histories approach.} Neglecting the question of consistency (see \cite{muglea2000}), a rough calculation of the difference $T(\theta_2)-T(\theta_1)$ in the sum-over-histories approach can be made in the framework of the method developed in \cite{yamtak1993}. Denoting by $\Omega$ a fixed space-time volume with time support within the interval from two fixed times $t_A$ to $t_B$, the probability of a particle crossing $\Omega$ is \begin{equation} P(\Omega)=\int d^3\mathbf{r}_B{\Bigg |}\int d^3\mathbf{r}_A \Phi(B;\Omega;A)\psi(\mathbf{r}_A,t_A){\Bigg |}^2 \end{equation} where $A\equiv(\mathbf{r}_A,t_A),B\equiv(\mathbf{r}_B,t_B)$, while $\Phi(B;\Omega;A)=\sum_{\gamma\in\{B\leftarrow\Omega\leftarrow A\}}e^{(i/\hbar)S(\gamma)}$ is the sum over all Feynman paths $\gamma$ with fixed ends $A$ and $B$ passing through $\Omega$. The quantity $P(t_A,t_B,\mathbf{r}_B;\Omega)=$ $\mid\int d\mathbf{r}_A \Phi(B;\Omega;A)\psi(\mathbf{r}_A,t_A)\mid^2d^3\mathbf{r}_B$ is identified as the probability that an electron, being anywhere in space at $t=t_A$, reaches a volume $\mathbf{r}_B+d^3\mathbf{r}_B$ at $t=t_B$ by passing first through $\Omega$. Let $S_1$ be a fixed surface normal to the beam's central axis (Fig.\ref{setup}) at $-l_0<z_{S_1}<0$.
We choose $\Omega$ by the conditions that $\mathbf{r}_\Omega$ belongs to an area element $\Delta S_1$ on $S_1$, around the point $R=0,z_\Omega=z_{S_1}$, while $t_0\leq t_\Omega\leq t_0+\Delta t$, where $t_0>t_A$. Now let $S_2$ be a spherical surface of radius $l_0$ around $O$, and $\mathbf{r}_B$ a point on $S_2$ in the plane of Fig.\ref{setup}. Setting $d^3{\mathbf r}_B= \Delta S_2v_0\Delta t$, where $\Delta S_2$ is an area element on $S_2$ around $\mathbf{r}_B$, the mean time of flight from $\Delta S_1$ to $\Delta S_2$ becomes a function of $\theta$ only, given by $T(\theta)=P_0\int (t_B-t_0) P(t_A=0,t_B,\mathbf{r}_B(\theta),\Omega) dt_0$ where $P_0=\left(\int P(t_A=0,t_B,\mathbf{r}_B(\theta),\Omega) dt_0\right)^{-1}$. To estimate $P(t_A,t_B,\mathbf{r}_B;\Omega)$ we extend results of Hartle \cite{hartle1988} and Yamada and Takagi \cite{yamtak1993} to our case. For $\Phi(B;\Omega;A)$ we adopt a 3D extension of a formula proposed in \cite{yamtak1993}: \begin{eqnarray}\label{phiyana} \Phi(B;\Omega;A) = \int d^3\mathbf{r'}\Bigg[ \int d^3\mathbf{r} \Phi(\mathbf{r}_B,t_B;\mathbf{r'},t_0+\Delta t)\nonumber\\ \Phi(\mathbf{r'},t_0+\Delta t;\Omega;\mathbf{r},t_0) \Phi(\mathbf{r},t_0;\mathbf{r}_A,t_A)\Bigg] \end{eqnarray} where $\Phi(\mathbf{r},t_0;\mathbf{r}_A,t_A)$ is approximated by a free Feynman propagator, $ \Phi(\mathbf{r}_B,t_B;\mathbf{r'},t_0+\Delta t) = \int d^3\mathbf{k} e^{-{i\hbar k^2(t_B-t_0+\Delta t)\over 2m}} \phi_\mathbf{k}(\mathbf{r'}) \phi_\mathbf{k}(\mathbf{r}_B) $ (with $\phi_\mathbf{k}$ as in Eq.(\ref{phiall})), while $\Phi(\mathbf{r'},t_0+\Delta t;\Omega;\mathbf{r},t_0)$ is approximated by a 3D analog of Hartle's approach \cite{hartle1988} \begin{eqnarray}\label{hartle} & &\Phi(\mathbf{r'},t_0+\Delta t;\Omega;\mathbf{r},t_0)\nonumber\\ &\approx&\int_{t_0}^{t_0+\Delta t} dt \Bigg[ \int_{\Delta S_1}d^2\mathbf{R}_{S_1} \left(m\over 2\pi i\hbar (t_0+\Delta t-t)\right)^{3/2}\nonumber\\ &\times&\left({|\mathbf{r}-\mathbf{r}_{S_1}|\over t-t_0}\right)
e^{{im\over\hbar}\left[{|\mathbf{r'}-\mathbf{r}_{S_1}|^2\over 2(t_0+\Delta t-t)} + {|\mathbf{r}_{S_1}-\mathbf{r}|^2\over 2(t-t_0)}\right]}\Bigg] \end{eqnarray} where $\mathbf{r}_{S_1}$ are points on $\Delta S_1$ and $\mathbf{R}_{S_1}=\mathbf{r}_{S_1}-z_{S_1}\mathbf{\hat{e}_z}$. An exact calculation of all integrals is intractable. Through a stationary phase approximation, however, one obtains, for fixed $t_A<0$ and $\psi(\mathbf{r},t_A)\simeq\psi_{in}$, that $P(t_A,t_B,\mathbf{r}_B;\Omega(t_0))$ is peaked essentially around a mean `classical' time of flight for a trajectory starting from the center of the wavepacket $|\psi_{in}|$, scattered by an atom at O, and arriving at a point on $\Delta S_2$. For two angles $\theta_1$, $\theta_2$, the mean time of flight difference can be estimated as \begin{eqnarray} T(\theta_1)-T(\theta_2)\approx {|Z Z_1| e^2\over 2\pi\epsilon_0m v_0^3} \ln\left(\sqrt{1+\cot^2(\theta_1/2)\over 1+\cot^2(\theta_2/2)}\right) \end{eqnarray} which is Eq.(\ref{his}). {\it ii) Kijowski approach}. In the Kijowski approach \cite{kij1974}, assuming one-dimensional wave-packet propagation along some direction $z$, we set \begin{equation}\label{kij1} \Pi(T,z)= \sum_{s=-1,1}\Bigg|\int_{0}^{s\infty}dk\left({\hbar k\over m}\right)^{1/2} \tilde{c}(k) e^{-i\hbar k^2 T/2m + ikz}\Bigg|^2 \end{equation} to be the probability that the particle arrives on a normal surface at a point $z$ between times $T$ and $T+dT$ ($\tilde{c}(k)$ is the wavefunction in momentum space). If $\tilde{c}(k)\propto e^{-(k-k_0)^2/2\sigma_\parallel^2 -ikz_0}$ is a narrow packet ($k_0>>\sigma_\parallel$), neglecting exponentially small negative component terms of $\tilde{c}(k)$, we find \begin{equation}\label{kij2} \Pi(T,z)={\hbar k_0\over m}\bigg[1+O\left(\sigma_\parallel/k_0\right)\bigg]|\psi(z,T)|^2 \end{equation} i.e. $\Pi(T,z)$ practically coincides with the flux function $J(z,T)=(\hbar k_0/m)|\psi(z,T)|^2$.
We apply Kijowski's formalism in the setup of Fig.\ref{setup} for the asymptotically free wavepacket motions at times long before or after $t=l_0/v_0$. The mean time-of-flight from $S_1$ to a point of fixed $\theta$ on $S_2$ can be written as $T(\theta)=<t_2>-<t_1>$, where $t_1,t_2$ are the arrival times at $S_1$ and $S_2$. Applying (\ref{kij2}) we find $<t_1>=l_1/v_0$, $<t_2>=2l_0/v_0$. Thus, $T(\theta)=(2l_0-l_1)/v_0$ independently of the scattering direction, i.e. \begin{equation} T(\theta_1)-T(\theta_2)=0 \end{equation} which is Eq.(\ref{kij}). \end{document}
# Mechanical Behavior of Materials: A Comprehensive Guide ## Foreword In the ever-evolving field of materials engineering, understanding the mechanical behavior of materials is of paramount importance. This book, "Mechanical Behavior of Materials: A Comprehensive Guide", is designed to provide a thorough understanding of the subject matter, catering to the needs of advanced undergraduate students, researchers, and professionals alike. The study of materials is not confined to a single discipline but is a multidisciplinary field that encompasses physics, chemistry, and engineering. The mechanical behavior of materials, in particular, is a critical aspect that determines their suitability for various applications. From the austenitic stainless steels used in construction and manufacturing to the carbon-based materials in electronics, the mechanical properties of these materials dictate their performance and longevity. This book aims to provide a comprehensive overview of the mechanical behavior of a wide range of materials. It delves into the experimental, computational, and design data of materials, offering a holistic view of the subject. The book also discusses the importance of materials databases (MDBs) in the efficient retrieval and conservation of materials data. MDBs have played a significant role in the advancement of materials engineering since their inception in the 1980s, and their importance cannot be overstated. In the age of the Internet, the capability of MDBs has increased manifold. Web-enabled MDBs have revolutionized the way we manage and conserve materials data, making it faster and easier to find and access the required data. This book will explore these advancements and their implications for the field of materials engineering. The journey through this book will be both enlightening and challenging, as we delve into the complexities of the mechanical behavior of materials.
It is our hope that this guide will serve as a valuable resource for those seeking to understand and apply the principles of materials engineering in their respective fields. As we embark on this journey, let us remember that the study of materials is not just about understanding their properties and behavior. It is about harnessing this knowledge to create better, more efficient, and sustainable materials for the future. ## Chapter 1: Introduction ### Introduction The study of materials and their mechanical behavior is a vast and complex field, encompassing a wide range of disciplines and applications. From the smallest microstructures to the largest man-made structures, the mechanical properties of materials play a crucial role in determining their functionality, durability, and overall performance. In this introductory chapter, we will lay the groundwork for our exploration of the mechanical behavior of materials. We will begin by defining what we mean by 'mechanical behavior' and 'materials', and why these concepts are so important in the fields of engineering and materials science. We will then provide an overview of the key concepts and principles that underpin this field of study, including stress and strain, elasticity and plasticity, and fracture mechanics. We will also discuss the different types of materials - metals, ceramics, polymers, and composites - and how their unique structures and properties influence their mechanical behavior. For example, metals, with their closely packed atomic structures and strong metallic bonds, typically exhibit high strength and ductility. On the other hand, ceramics, with their covalent or ionic bonding and crystalline structures, are generally brittle but have high hardness and resistance to heat and wear. Finally, we will touch upon the various methods and techniques used to study and analyze the mechanical behavior of materials, such as tensile testing, hardness testing, and fracture toughness testing. 
These tests provide valuable data that can be used to predict how a material will behave under different conditions and loads, and to design materials with specific properties for specific applications. Understanding the mechanical behavior of materials is not just about understanding the materials themselves, but also about understanding the world around us. From the buildings we live in, the cars we drive, the devices we use every day, to the natural world itself - all are made up of materials whose mechanical behavior determines their form, function, and lifespan. By studying the mechanical behavior of materials, we can learn to use, manipulate, and create materials in ways that meet our needs and aspirations, and that contribute to the advancement of technology and society. In the following chapters, we will delve deeper into these topics, exploring the theoretical foundations, practical applications, and cutting-edge research in the field of mechanical behavior of materials. Whether you are a student, a researcher, or a practicing engineer, we hope that this book will serve as a comprehensive guide to this fascinating and important field. ### Section: 1.1 Force distributions: #### 1.1a Introduction to forces and their distributions In the realm of materials science and engineering, understanding the distribution of forces is crucial. Forces, which we model as Euclidean vectors or members of $\mathbb{R}^2$, are the primary drivers of the mechanical behavior of materials. They cause deformation, induce stress, and can lead to failure under certain conditions. The distribution of forces refers to how these forces are spread out or concentrated in a material or structure. This distribution can greatly influence the mechanical behavior of the material. For instance, a uniformly distributed load may cause a different type of deformation compared to a concentrated load, even if the total force is the same in both cases. 
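Since forces are modeled as Euclidean vectors in $\mathbb{R}^2$, the resultant of two forces is simply their component-wise sum. The following sketch illustrates this; all force values are hypothetical, chosen only for the example:

```python
import math

def resultant(f1, f2):
    """Resultant of two planar forces given as (Fx, Fy) component pairs.

    Component-wise addition is equivalent to the classical parallelogram
    construction: the resultant is the diagonal of the parallelogram
    spanned by the two force vectors.
    """
    rx, ry = f1[0] + f2[0], f1[1] + f2[1]
    magnitude = math.hypot(rx, ry)               # |R| = sqrt(Rx^2 + Ry^2)
    direction = math.degrees(math.atan2(ry, rx)) # angle from the x axis
    return magnitude, direction

# Two 100 N forces, one along x and one along y:
mag, ang = resultant((100.0, 0.0), (0.0, 100.0))
print(mag, ang)   # ~141.42 N at 45.0 degrees
```

A concentrated load acts as one such vector at a single point, whereas a distributed load is the integral of many small force vectors over a region.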
One of the fundamental principles in understanding force distributions is the parallelogram of force. This principle states that the resultant of two forces acting at a point is the diagonal of the parallelogram whose sides represent the two forces in magnitude and direction. This principle is widely accepted as an empirical fact, although its mathematical proof has been a subject of controversy. The parallelogram of force is a cornerstone in the study of dynamics, a field that deals with the motion of bodies under the action of forces. In the context of materials science, dynamics helps us understand how materials respond to forces, particularly in terms of their deformation and failure mechanisms. In the following sections, we will delve deeper into the concept of force distributions, exploring how they influence the mechanical behavior of different types of materials - metals, ceramics, polymers, and composites. We will also discuss various methods and techniques used to study force distributions, such as analytical dynamics and experimental testing. As we navigate through this chapter, it is important to remember that understanding the mechanical behavior of materials is not just about understanding the materials themselves, but also about understanding the forces that act upon them and how these forces are distributed. This knowledge is crucial in designing materials and structures that can withstand the demands of their intended applications. #### 1.1b Internal force distribution in materials In the study of materials, it is not only the external forces that matter but also the internal forces. These forces are responsible for the internal stress distribution within a material, which can significantly influence its mechanical behavior. The internal force distribution can be understood by considering the material as a system of interconnected elements. 
Each element experiences forces from its neighboring elements, leading to a complex network of internal forces. This concept is fundamental in the finite element method, a numerical technique widely used in structural mechanics. The internal virtual work in the system can be represented as: $$ \mbox{System internal virtual work} = \sum_{e} \delta\ \mathbf{r}^T \left( \mathbf{k}^e \mathbf{r} + \mathbf{Q}^{oe} \right) = \delta\ \mathbf{r}^T \left( \sum_{e} \mathbf{k}^e \right)\mathbf{r} + \delta\ \mathbf{r}^T \sum_{e} \mathbf{Q}^{oe} $$ where $\delta\ \mathbf{r}^T$ is the transpose of the displacement vector, $\mathbf{k}^e$ is the stiffness matrix of the element, $\mathbf{r}$ is the displacement vector, and $\mathbf{Q}^{oe}$ is the vector of external forces on the element. The internal force distribution is also influenced by the external forces acting on the material. The work done by these forces can be represented as: $$ -\delta\ \mathbf{r}^T \sum_{e} \left(\mathbf{Q}^{te} + \mathbf{Q}^{fe}\right) $$ where $\mathbf{Q}^{te}$ and $\mathbf{Q}^{fe}$ are the vectors of external forces on the element due to traction and body forces, respectively. In the context of soil mechanics, the internal force distribution can be represented by the stress tensor, which is a matrix that describes the state of stress at a point in the material. For instance, in plane stress conditions, the stress tensor can be represented as: $$ \sigma=\left[\begin{matrix}\sigma_{xx}&0&\tau_{xz}\\0&0&0\\\tau_{zx}&0&\sigma_{zz}\\\end{matrix}\right] =\left[\begin{matrix}\sigma_{xx}&\tau_{xz}\\\tau_{zx}&\sigma_{zz}\\\end{matrix}\right] $$ where $\sigma_{xx}$, $\sigma_{zz}$ are the normal stresses and $\tau_{xz}$, $\tau_{zx}$ are the shear stresses. Understanding the internal force distribution is crucial in predicting the mechanical behavior of materials under different loading conditions. 
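The element-wise sum $\sum_{e} \mathbf{k}^e$ above is exactly what the direct stiffness method computes in practice. The sketch below is an illustration only, not taken from any particular code: a uniform 1D bar split into two 2-node elements, fixed at one end and loaded axially at the free end, with all numerical values hypothetical.

```python
def assemble_global_stiffness(n_nodes, elements):
    """Accumulate 2x2 element stiffness matrices k_e*[[1,-1],[-1,1]]
    into the global stiffness matrix K (stored as nested lists)."""
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j, k_e in elements:            # element joins nodes i and j
        K[i][i] += k_e; K[j][j] += k_e
        K[i][j] -= k_e; K[j][i] -= k_e
    return K

# Hypothetical bar: E*A = 1.0e6 N, two elements of length 0.5 m each,
# so each element stiffness is k_e = E*A/L_e = 2.0e6 N/m.
k_e = 2.0e6
K = assemble_global_stiffness(3, [(0, 1, k_e), (1, 2, k_e)])

# Fix node 0 (u0 = 0) and apply F = 1000 N at node 2. The reduced
# system [[2k,-k],[-k,k]] [u1,u2]^T = [0,F]^T is solved by Cramer's rule.
F = 1000.0
det = 2.0 * k_e * k_e - k_e * k_e
u1 = (k_e * F) / det
u2 = (2.0 * k_e * F) / det
print(u1, u2)   # u2 = F*L_total/(E*A) = 0.001 m, as elasticity predicts
```

Note how the interior node picks up contributions from both adjacent elements ($K_{11} = 2k_e$), the discrete counterpart of the internal forces exchanged between neighboring elements.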
In the following sections, we will explore how this concept is applied in the study of different types of materials. #### 1.1c External force distribution on materials The external forces acting on a material can significantly influence its mechanical behavior. These forces can be categorized into nodal forces, surface forces, and body forces. Nodal forces, denoted as $\mathbf{R}$, are the forces applied at specific points or nodes of the material. The work done by these forces can be represented as: $$ \delta\ \mathbf{r}^T \mathbf{R} $$ where $\delta\ \mathbf{r}^T$ is the transpose of the displacement vector and $\mathbf{R}$ is the vector of nodal forces. Surface forces, denoted as $\mathbf{T}^e$, are the forces applied on the surface of the material. The work done by these forces can be represented as: $$ \mathbf{Q}^{te} = -\int_{S^e} \mathbf{N}^T \mathbf{T}^e \, dS^e $$ where $\mathbf{N}^T$ is the transpose of the shape function vector, $\mathbf{T}^e$ is the vector of surface forces, and $S^e$ is the surface area of the element. Body forces, denoted as $\mathbf{f}^e$, are the forces that act throughout the volume of the material. The work done by these forces can be represented as: $$ \mathbf{Q}^{fe} = -\int_{V^e} \mathbf{N}^T \mathbf{f}^e \, dV^e $$ where $\mathbf{N}^T$ is the transpose of the shape function vector, $\mathbf{f}^e$ is the vector of body forces, and $V^e$ is the volume of the element. The total work done by the external forces can be represented as: $$ -\delta\ \mathbf{r}^T \sum_{e} \left(\mathbf{Q}^{te} + \mathbf{Q}^{fe}\right) $$ The distribution of these external forces can significantly influence the internal force distribution within the material, and hence, its mechanical behavior. For instance, a material subjected to a uniform body force will have a different stress distribution compared to a material subjected to concentrated nodal forces. 
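In computational practice, the integrals $\mathbf{Q}^{te}$ and $\mathbf{Q}^{fe}$ convert distributed loads into equivalent nodal forces. As a hedged illustration (a single 2-node bar element with linear shape functions $N_1 = 1 - x/L$ and $N_2 = x/L$, and a hypothetical uniform axial load $q$), each node should receive exactly $qL/2$:

```python
def equivalent_nodal_forces(q, L, n=1000):
    """Midpoint-rule evaluation of f_i = integral_0^L N_i(x) q dx
    for a 2-node element with linear shape functions."""
    dx = L / n
    f1 = f2 = 0.0
    for k in range(n):
        x = (k + 0.5) * dx             # midpoint of subinterval k
        f1 += (1.0 - x / L) * q * dx   # N1(x) = 1 - x/L
        f2 += (x / L) * q * dx         # N2(x) = x/L
    return f1, f2

f1, f2 = equivalent_nodal_forces(q=200.0, L=2.0)   # q in N/m, L in m
print(f1, f2)   # each ~ q*L/2 = 200.0 N; together they carry q*L = 400 N
```

The midpoint rule is exact here because the integrand is linear; for higher-order shape functions or non-uniform loads, Gaussian quadrature is the usual choice.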
Understanding the distribution of these external forces is crucial in predicting the mechanical behavior of materials under different loading conditions. ### Section: 1.2 Deformation under force: Materials undergo deformation when subjected to external forces. This deformation can be categorized into two types: elastic deformation and plastic deformation. In this section, we will focus on elastic deformation. #### 1.2a Elastic deformation Elastic deformation is a type of deformation in which the material returns to its original shape once the external force is removed. This behavior is governed by Hooke's law, which states that the strain in a material is directly proportional to the applied stress, up to the yield point. Mathematically, this can be represented as: $$ \sigma = E \epsilon $$ where $\sigma$ is the stress, $E$ is the modulus of elasticity (also known as Young's modulus), and $\epsilon$ is the strain. The modulus of elasticity is a measure of the material's stiffness, i.e., its resistance to elastic deformation. A high modulus of elasticity indicates a stiff material that does not deform easily under stress, while a low modulus indicates a flexible material that deforms easily. The strain, on the other hand, is a measure of the deformation of the material. It is defined as the change in length per unit length. For small deformations, the strain can be approximated as: $$ \epsilon = \frac{\delta l}{l_0} $$ where $\delta l$ is the change in length and $l_0$ is the original length. In the context of incremental deformations, the perturbed position $\bar{\bf x}$ is given by: $$ \bar{\bf x} = {\bf x}^0 + \delta{\bf x} $$ where ${\bf x}^0$ is the basic position vector and $\delta{\bf x}$ is the small displacement superposed on the finite deformation basic solution. 
The perturbed deformation gradient, which gives the rate of change of the deformation with respect to the current configuration, is given by: $$ \bar{\bf F} = {\bf F}^0 + \mathbf{\Gamma} $$ where ${\bf F}^0$ is the basic deformation gradient and $\mathbf{\Gamma} = {\rm grad} \,\chi^1({\bf x}^0)$ is the gradient of the mapping function $\chi^1({\bf x}^0)$. The perturbed Piola stress, which gives the force per unit area in the reference configuration, is given by: $$ \bar{\bf P} = {\bf P}^0 + \delta p \mathcal{A}^1 $$ where ${\bf P}^0$ is the basic Piola stress, $\delta p$ is the increment in $p$, and $\mathcal{A}^1$ is the elastic moduli associated to the pairs $({\bf S},{\bf F})$. The incremental governing equations and boundary conditions, which describe the behavior of the material under small deformations, can be derived from these quantities. Understanding these equations is crucial in predicting the mechanical behavior of materials under different loading conditions. #### 1.2b Plastic deformation Plastic deformation is the second type of deformation that materials can undergo when subjected to external forces. Unlike elastic deformation, plastic deformation is permanent and the material does not return to its original shape once the external force is removed. This behavior is typically observed when the applied stress exceeds the yield point of the material. The yield point is the stress at which a material begins to deform plastically. Beyond this point, the material will deform permanently, and the relationship between stress and strain is no longer linear as in the case of elastic deformation. The yield point is a critical property of materials and is used to determine the material's suitability for various applications. 
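The simplest computational caricature of this behavior is the elastic-perfectly-plastic idealization: stress follows Hooke's law up to the yield stress and is then capped. The sketch below is an illustration only, valid for monotonic loading (no hardening, no unloading history), and the material constants are hypothetical, roughly steel-like:

```python
def stress_response(strain, E, sigma_y):
    """Elastic-perfectly-plastic stress for a given strain
    (monotonic loading only)."""
    sigma_trial = E * strain          # Hooke's law trial stress
    if abs(sigma_trial) <= sigma_y:
        return sigma_trial            # elastic: deformation is recoverable
    return sigma_y if sigma_trial > 0.0 else -sigma_y   # yielded: capped

E = 200e9          # hypothetical Young's modulus, Pa
sigma_y = 250e6    # hypothetical yield stress, Pa
print(stress_response(0.001, E, sigma_y))   # ~2.0e8 Pa, still elastic
print(stress_response(0.005, E, sigma_y))   # 2.5e8 Pa, capped at yield
```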
In the context of incremental deformations, the perturbed Piola stress, which gives the force per unit area in the material, is given by: $$ \bar{\bf P} = {\bf P}^0 + \delta p \mathbf{F}^0 + \mathcal{A}^1 \mathbf{\Gamma} $$ where ${\bf P}^0$ is the basic Piola stress, $\delta p$ is the increment in $p$, $\mathbf{F}^0$ is the basic deformation gradient, and $\mathcal{A}^1$ is the elastic moduli associated to the pairs $({\bf S},{\bf F})$. The incremental governing equations, which describe the equilibrium of forces in the material, can be written as: $$ \delta {\bf S} = \mathcal{A}^1_0 \delta {\bf E} $$ where $\delta {\bf S}$ is the increment in the second Piola-Kirchhoff stress tensor, $\mathcal{A}^1_0$ is the tensor of instantaneous moduli, and $\delta {\bf E}$ is the increment in the Green-Lagrange strain tensor. The incremental incompressibility constraint, which ensures that the volume of the material remains constant during deformation, can be written as: $$ \delta p = - \mathcal{A}^1_0 : \delta {\bf E} $$ where ":" denotes the double dot product. In the next section, we will discuss the concept of strain hardening, which is a phenomenon observed in many materials undergoing plastic deformation. #### 1.2c Viscoelastic deformation Viscoelastic deformation is the third type of deformation that materials can undergo when subjected to external forces. This type of deformation exhibits characteristics of both elastic and plastic deformation, and is time-dependent. In viscoelastic materials, the strain is not instantaneous with applied stress, and some fraction of the deformation recovers upon unloading. The viscoelastic behavior of materials is often described using models that combine elastic springs and viscous dashpots. The simplest of these models is the Maxwell model, which consists of a spring and a dashpot in series. The Voigt-Kelvin model, which consists of a spring and a dashpot in parallel, is another commonly used model. 
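For the Maxwell model held at constant strain, the spring-dashpot series predicts exponential stress relaxation: $\dot\sigma = -(E/\eta)\,\sigma$, so $\sigma(t) = \sigma_0\, e^{-t/\tau}$ with relaxation time $\tau = \eta/E$. A sketch (with hypothetical parameter values) integrating this numerically and checking it against the closed form:

```python
import math

def maxwell_relaxation(sigma0, E, eta, t_end, n_steps=100_000):
    """Forward-Euler integration of d(sigma)/dt = -(E/eta) * sigma,
    i.e. the Maxwell model under a suddenly applied, then held, strain."""
    dt = t_end / n_steps
    sigma = sigma0
    for _ in range(n_steps):
        sigma -= (E / eta) * sigma * dt
    return sigma

E, eta = 1.0e9, 1.0e10        # hypothetical modulus (Pa) and viscosity (Pa*s)
tau = eta / E                 # relaxation time: 10 s
sigma_num = maxwell_relaxation(sigma0=1.0e6, E=E, eta=eta, t_end=tau)
sigma_exact = 1.0e6 * math.exp(-1.0)    # analytic value at t = tau
print(sigma_num, sigma_exact)           # both ~3.68e5 Pa
```

The Voigt-Kelvin model, with the same elements in parallel, instead exhibits delayed creep toward the elastic strain under constant stress rather than stress relaxation.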
In the context of incremental deformations, the perturbed Piola stress in viscoelastic materials can be given by: $$ \bar{\bf P} = {\bf P}^0 + \delta p \mathbf{F}^0 + \mathcal{A}^1 \mathbf{\Gamma} + \mathcal{V}^1 \delta \mathbf{E} $$ where ${\bf P}^0$ is the basic Piola stress, $\delta p$ is the increment in $p$, $\mathbf{F}^0$ is the basic deformation gradient, $\mathcal{A}^1$ is the elastic moduli associated to the pairs $({\bf S},{\bf F})$, and $\mathcal{V}^1$ is the viscoelastic moduli associated to the pairs $({\bf S},{\bf F})$. The incremental governing equations for viscoelastic materials can be written as: $$ \delta {\bf S} = \mathcal{A}^1_0 \delta {\bf E} + \mathcal{V}^1 \delta \dot{\bf E} $$ where $\delta {\bf S}$ is the increment in the second Piola-Kirchhoff stress tensor, $\mathcal{A}^1_0$ is the tensor of instantaneous moduli, $\mathcal{V}^1$ is the tensor of viscoelastic moduli, $\delta {\bf E}$ is the increment in the Green-Lagrange strain tensor, and $\delta \dot{\bf E}$ is the increment in the rate of the Green-Lagrange strain tensor. The incremental incompressibility constraint for viscoelastic materials can be written as: $$ \delta p = - \mathcal{A}^1_0 : \delta {\bf E} - \mathcal{V}^1 : \delta \dot{\bf E} $$ where ":" denotes the double dot product. In the next section, we will discuss the concept of creep and stress relaxation, which are phenomena observed in many materials undergoing viscoelastic deformation. ### Conclusion In this introductory chapter, we have laid the groundwork for understanding the mechanical behavior of materials. We have explored the basic concepts and terminologies that will be used throughout this book. While we have not delved into the specifics of different materials and their mechanical behaviors, we have set the stage for these discussions in the subsequent chapters. The mechanical behavior of materials is a vast and complex field, encompassing a wide range of materials and behaviors. 
It is a field that is constantly evolving, with new materials and technologies continually being developed. However, the fundamental principles remain the same, and it is these principles that we have introduced in this chapter. In the chapters to follow, we will build upon this foundation, exploring in greater detail the mechanical behaviors of different materials, the factors that influence these behaviors, and the ways in which these behaviors can be manipulated and controlled. We will also delve into the practical applications of these principles, demonstrating how an understanding of the mechanical behavior of materials can be used to design and engineer better, more efficient, and more sustainable materials and products. ### Exercises #### Exercise 1 Define the term "mechanical behavior of materials" and explain why it is important in the field of materials science and engineering. #### Exercise 2 List and briefly describe the three main types of mechanical behaviors that materials can exhibit. #### Exercise 3 Explain the concept of stress and strain in materials. Use the stress-strain curve to illustrate your explanation. #### Exercise 4 Discuss the factors that can influence the mechanical behavior of a material. Provide examples to support your discussion. #### Exercise 5 Describe a practical application of understanding the mechanical behavior of materials. Explain how this understanding can lead to the design and engineering of better materials and products. ## Chapter: Stress Distributions ### Introduction The study of materials is incomplete without understanding the concept of stress distributions. This chapter, "Stress Distributions", is dedicated to providing a comprehensive understanding of how stress is distributed within materials under various conditions. Stress, in the context of materials science, is a measure of the internal forces that particles in a material exert on each other. 
It is a fundamental concept that helps us understand how materials deform and fail under different loading conditions. The distribution of these stresses within a material can significantly influence its mechanical behavior, including its strength, ductility, and toughness. In this chapter, we will delve into the mathematical models and theories that describe stress distributions. We will explore the concept of stress concentration and how it can lead to material failure. We will also discuss the effects of different types of loading, such as tensile, compressive, and shear, on stress distributions. The chapter will also cover the influence of material properties and geometry on stress distributions. For instance, the stress distribution in a brittle material will be different from that in a ductile material. Similarly, the geometry of the material or the component, such as whether it is a thin plate, a thick cylinder, or a beam, can significantly affect the stress distribution. We will also discuss the role of stress distributions in the design of materials and components. Understanding stress distributions is crucial in predicting the performance and failure of materials and components under different loading conditions. This knowledge can guide the selection of materials for specific applications and the design of components to withstand specific loads. In summary, this chapter will provide a comprehensive understanding of stress distributions, their influences, and their implications for the mechanical behavior of materials. The knowledge gained from this chapter will be fundamental in understanding the behavior of materials under different loading conditions and in the design of materials and components. ### Section: 2.1 Stress distributions in materials: Stress distribution in materials is a critical aspect of understanding their mechanical behavior. 
It refers to the way internal forces, or stresses, are spread throughout a material when it is subjected to external forces. The distribution of these stresses can significantly influence the material's mechanical properties, such as its strength, ductility, and toughness. #### 2.1a Types of stress distributions There are several types of stress distributions that can occur in materials, depending on the nature of the applied forces and the material's properties. These include: 1. **Uniform stress distribution:** This occurs when the stress is evenly distributed throughout the material. This is often the case when the material is subjected to a uniform load, and the material's properties are homogeneous. 2. **Non-uniform stress distribution:** This occurs when the stress varies throughout the material. This can be due to non-uniform loading, inhomogeneous material properties, or complex geometries. 3. **Stress concentration:** This is a form of non-uniform stress distribution where the stress is significantly higher in certain areas, known as stress concentrations or stress risers. These are often caused by abrupt changes in geometry, such as holes, notches, or sharp corners. 4. **Hydrostatic stress distribution:** This type of stress distribution occurs when a material is subjected to pressure from all directions. The hydrostatic stress is given by the equation: $$ \sigma_{hydrostatic}=p_{mean}=\frac{\sigma_{xx}+\sigma_{zz}}{2} $$ where $\sigma_{xx}$ and $\sigma_{zz}$ are the normal stresses in the x and z directions, respectively. 5. **Plane stress distribution:** This type of stress distribution occurs in thin plates or sheets where the stress in one direction (usually the thickness direction) is negligible compared to the stresses in the other two directions. 
The plane stress state can be represented by the following stress matrix: $$ \sigma=\left[\begin{matrix}\sigma_{xx}&0&\tau_{xz}\\0&0&0\\\tau_{zx}&0&\sigma_{zz}\\\end{matrix}\right] =\left[\begin{matrix}\sigma_{xx}&\tau_{xz}\\\tau_{zx}&\sigma_{zz}\\\end{matrix}\right] $$ where $\sigma_{xx}$ and $\sigma_{zz}$ are the normal stresses in the x and z directions, respectively, and $\tau_{xz}$ and $\tau_{zx}$ are the shear stresses. Understanding these different types of stress distributions is crucial for predicting the mechanical behavior of materials under different loading conditions and for designing materials and components to withstand specific loads. #### 2.1b Factors influencing stress distributions Several factors can influence the distribution of stress in a material. Understanding these factors is crucial for predicting how a material will behave under different loading conditions. These factors include: 1. **Material Properties:** The inherent properties of a material, such as its elasticity, plasticity, and toughness, can significantly influence how stress is distributed. For instance, more elastic materials tend to distribute stress more evenly, while more brittle materials may exhibit stress concentrations. 2. **Geometry and Size:** The shape and size of the material can also affect stress distribution. For example, materials with sharp corners or notches can lead to stress concentrations. Similarly, the size of the material can influence the stress distribution, with larger materials often experiencing more non-uniform stress distributions. 3. **Loading Conditions:** The way a material is loaded can greatly affect the stress distribution. Uniform loading tends to lead to uniform stress distribution, while non-uniform or point loading can result in non-uniform stress distributions and potential stress concentrations. 4. **Boundary Conditions:** The constraints on a material, such as fixed or free boundaries, can also influence stress distribution. 
For instance, a material that is fixed at one end and free at the other will have a different stress distribution compared to a material that is fixed at both ends. 5. **Temperature:** Temperature can also affect stress distribution, particularly in materials that are sensitive to temperature changes. For example, thermal expansion or contraction can lead to non-uniform stress distributions. 6. **Rate of Loading:** The rate at which a load is applied can influence the stress distribution. Rapid loading can cause stress waves to propagate through the material, leading to non-uniform stress distributions. In the following sections, we will delve deeper into each of these factors, providing a comprehensive understanding of how they influence the mechanical behavior of materials. ### Section: 2.2 Strain and stress: #### 2.2a Relationship between strain and stress The relationship between strain and stress is fundamental to understanding the mechanical behavior of materials. This relationship is often described by the material's stress-strain curve, which is a graphical representation of the material's response to applied stress. Strain, denoted by $\epsilon$, is a measure of deformation representing the displacement between particles in the material body that is the result of stress. Stress, denoted by $\sigma$, is the force applied to a material divided by the area over which the force is distributed. The relationship between stress and strain can be expressed as: $$ \sigma = E \cdot \epsilon $$ where $E$ is the modulus of elasticity, also known as Young's modulus. This equation is known as Hooke's law and it states that the strain in a material is proportional to the applied stress within the elastic limit of that material. In the context of plane stress and plane strain states, the relationship between stress and strain becomes more complex due to the multidimensional nature of the stress and strain. 
For instance, in the plane stress state, the stress matrix is given by: $$ \sigma=\left[\begin{matrix}\sigma_{xx}&0&\tau_{xz}\\0&0&0\\\tau_{zx}&0&\sigma_{zz}\\\end{matrix}\right] =\left[\begin{matrix}\sigma_{xx}&\tau_{xz}\\\tau_{zx}&\sigma_{zz}\\\end{matrix}\right] $$ The corresponding strain matrix, assuming a linear elastic material behavior and using the generalized Hooke's law, can be expressed as: $$ \epsilon=\left[\begin{matrix}\epsilon_{xx}&\gamma_{xz}\\\gamma_{zx}&\epsilon_{zz}\\\end{matrix}\right] $$ where $\epsilon_{xx}$ and $\epsilon_{zz}$ are the normal strains in the x and z directions, respectively, and $\gamma_{xz}$ is the shear strain. The relationship between the stress and strain matrices is then given by the constitutive equations of the material. In the following sections, we will delve deeper into the relationship between stress and strain, exploring concepts such as elastic and plastic deformation, yield strength, and material failure. #### 2.2b Stress-strain curves Stress-strain curves are graphical representations of the relationship between stress, the force per unit area on a material, and strain, the deformation caused by the applied stress. These curves reveal many properties of materials, such as the elastic limit, yield strength, ultimate tensile strength, and fracture point. The stress-strain curve is typically divided into several regions: the elastic region, the plastic region, and the fracture point. In the **elastic region**, the material returns to its original shape after the stress is removed. This behavior is described by Hooke's law, which states that the stress is proportional to the strain: $$ \sigma = E \cdot \epsilon $$ where $\sigma$ is the stress, $E$ is the modulus of elasticity or Young's modulus, and $\epsilon$ is the strain. The **plastic region** begins at the yield point, where the material starts to deform permanently. 
Beyond this point, the material will not return to its original shape when the stress is removed. The **fracture point** is where the material breaks under the applied stress. In the context of plane stress and plane strain states, the stress-strain relationship becomes more complex due to the multidimensional nature of the stress and strain. For instance, the stress matrix in the plane stress state is given by: $$ \sigma=\left[\begin{matrix}\sigma_{xx}&0&\tau_{xz}\\0&0&0\\\tau_{zx}&0&\sigma_{zz}\\\end{matrix}\right] =\left[\begin{matrix}\sigma_{xx}&\tau_{xz}\\\tau_{zx}&\sigma_{zz}\\\end{matrix}\right] $$ The corresponding strain matrix, assuming a linear elastic material behavior and using the generalized Hooke's law, can be expressed as: $$ \epsilon=\left[\begin{matrix}\epsilon_{xx}&\gamma_{xz}\\\gamma_{zx}&\epsilon_{zz}\\\end{matrix}\right] $$ where $\epsilon_{xx}$ and $\epsilon_{zz}$ are the normal strains in the x and z directions, respectively, and $\gamma_{xz}$ is the shear strain. The relationship between the stress and strain matrices is then given by the constitutive equations of the material. In the following sections, we will delve deeper into the relationship between stress and strain in different states of stress, including uniaxial, biaxial, and triaxial states, and how these relationships are represented in stress-strain curves. ### Conclusion In this chapter, we have delved into the fundamental concept of stress distributions in materials. We have explored how stress, a measure of the internal forces within a material, is distributed across the material under different conditions. Understanding these distributions is crucial in predicting how a material will behave under various loads and in different environments. We have also discussed the mathematical models used to describe stress distributions, such as the stress tensor. 
This mathematical tool allows us to represent the complex, three-dimensional state of stress within a material in a concise and manageable form. The stress tensor not only provides a snapshot of the current state of stress within a material but also serves as a foundation for predicting future behavior under changing conditions. In addition, we have examined the factors that influence stress distributions, such as the material's properties, the type of loading, and the geometry of the material. By understanding these factors, we can better predict and control the mechanical behavior of materials in a wide range of applications. In conclusion, the study of stress distributions is a key aspect of understanding the mechanical behavior of materials. It provides the basis for predicting how materials will respond to loads and stresses, which is essential in the design and analysis of materials in engineering and other fields. ### Exercises #### Exercise 1 Given a stress tensor for a material, calculate the principal stresses and discuss their significance in understanding the material's behavior. #### Exercise 2 Describe how the properties of a material influence its stress distribution. Provide examples of materials with different properties and discuss how these properties affect the stress distributions. #### Exercise 3 Explain how the type of loading (e.g., tensile, compressive, shear) affects the stress distribution within a material. Use diagrams to illustrate your explanation. #### Exercise 4 Discuss the role of geometry in determining the stress distribution within a material. Provide examples of different geometries and explain how they influence the stress distributions. #### Exercise 5 Given a real-world scenario (e.g., a bridge under load, a pressurized vessel), describe how you would use the concepts of stress distribution to analyze the situation and predict the material's behavior. 
## Chapter: Pressure Vessels ### Introduction Pressure vessels are ubiquitous in modern industry, playing a crucial role in numerous sectors such as oil and gas, nuclear power, and chemical processing. Understanding the mechanical behavior of materials used in these vessels is of paramount importance to ensure their safe and efficient operation. This chapter, "Pressure Vessels", aims to provide a comprehensive overview of the mechanical behavior of materials under the unique conditions found within these critical components. The chapter will delve into the fundamental principles governing the behavior of materials under pressure, exploring the effects of various factors such as temperature, stress, and strain. We will discuss the different types of pressure vessels, their design considerations, and the materials typically used in their construction. The mathematical modeling of pressure vessels is a key aspect of their design and analysis. Therefore, we will introduce and explain important equations and concepts, such as the thin-walled pressure vessel theory and the Lame's equation. For instance, the hoop stress in a thin-walled pressure vessel can be calculated using the formula `$\sigma_{\theta} = \frac{Pr}{t}$`, where `P` is the internal pressure, `r` is the radius, and `t` is the wall thickness. Furthermore, we will discuss the failure modes of pressure vessels, including brittle and ductile failures, and how material properties influence these failure modes. The chapter will also cover the testing methods used to assess the mechanical properties of materials used in pressure vessels, such as tensile and impact tests. By the end of this chapter, readers should have a solid understanding of the mechanical behavior of materials in pressure vessels, the factors influencing this behavior, and the methods used to analyze and predict it. 
This knowledge is not only essential for engineers and material scientists but also for anyone involved in industries where pressure vessels are used. ### Section: 3.1 Pressure vessels: Pressure vessels are containers designed to hold gases or liquids at a pressure substantially different from the ambient pressure. They have a variety of applications in industry, including in settings such as private homes, mining operations, nuclear reactor vessels, submarine and spaceship habitats, oil refineries, and more. #### 3.1a Types of pressure vessels Pressure vessels can be classified into different types based on their design and construction. ##### 1. Vessel Thread Until 1990, high-pressure cylinders were produced with conical (tapered) threads, and two types of thread have dominated full-metal cylinders in industrial use. The taper thread (17E) is a 12% taper right-hand thread of standard Whitworth 55° form, with a pitch of 14 threads per inch (5.5 threads per cm) and a specified pitch diameter at the top thread of the cylinder. These connections are sealed using thread tape and torqued to specified values for steel and aluminium cylinders. To screw in the valve, a high torque is necessary, greater for the larger 25E taper thread than for the smaller 17E thread. Until around 1950, hemp was used as a sealant. Later, a thin sheet of lead pressed into a hat shape with a hole on top was used. Since 2005, PTFE tape has been used to avoid using lead. A tapered thread provides simple assembly, but requires high torque for connecting and leads to high radial forces in the vessel neck. All cylinders built for higher working pressures, all diving cylinders, and all composite cylinders use parallel threads. Parallel threads are made to several standards: the 3/4"NGS and 3/4"BSP threads are very similar, having the same pitch and nearly the same pitch diameter, but they are not compatible, as the thread forms are different.
All parallel thread valves are sealed using an elastomer O-ring at the top of the neck thread, which seals in a chamfer or step in the cylinder neck and against the flange of the valve. ##### 2. Composite Vessels To classify the different structural principles of cylinders, four types are defined. Type 2 and 3 cylinders have been in production since around 1995; Type 4 cylinders have been commercially available since at least 2016. ##### 3. Safety Features One of the key safety features of pressure vessels is the 'leak before burst' design. This design principle ensures that a crack in the vessel will leak before it bursts, providing an early warning of failure and preventing catastrophic rupture. In the following sections, we will delve deeper into the mechanical behavior of these different types of pressure vessels, their design considerations, and the materials used in their construction. We will also discuss the mathematical models used to predict their behavior under various conditions, and the testing methods used to assess their performance. #### 3.1b Design considerations for pressure vessels Designing pressure vessels is a complex task that requires a deep understanding of the mechanical behavior of materials. The design process involves several considerations, including the intended application, the operating conditions, and the safety standards that the vessel must meet. ##### Design Pressure and Temperature The design pressure and temperature are the maximum pressure and temperature that a pressure vessel is designed to withstand safely. These parameters are crucial in determining the material and thickness of the vessel. The design pressure is typically higher than the operating pressure to provide a safety margin. The design temperature, on the other hand, is the maximum temperature that the vessel is expected to experience during operation.
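As a rough numerical sketch of how the design pressure feeds into vessel sizing, the thin-walled hoop-stress formula from the chapter introduction, $\sigma_{\theta} = Pr/t$, can be rearranged for the minimum wall thickness. All numbers below (pressures, radius, allowable stress, and the 1.5x design margin) are illustrative assumptions, not values from any design code:

```python
# Illustrative sizing sketch (hypothetical values, not design guidance).
# Rearranging the thin-walled hoop-stress formula sigma = P*r/t gives the
# wall thickness needed to keep hoop stress at or below an allowable value.

def required_thickness(p_design, radius, sigma_allow):
    """Minimum wall thickness t = P*r/sigma_allow for a thin-walled cylinder."""
    return p_design * radius / sigma_allow

p_operating = 10e6            # Pa, assumed operating pressure
p_design = 1.5 * p_operating  # Pa, design pressure with an assumed 1.5x margin
radius = 0.5                  # m, assumed vessel radius
sigma_allow = 150e6           # Pa, assumed allowable stress for the material

t_min = required_thickness(p_design, radius, sigma_allow)
print(f"Minimum wall thickness: {t_min * 1000:.1f} mm")  # 50.0 mm
```

Note that at this thickness the radius-to-thickness ratio is exactly 10, which is generally regarded as the borderline of validity for the thin-wall approximation; a thicker-walled vessel would call for the Lame equations instead.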
##### Material Selection The material used for constructing a pressure vessel must be able to withstand the design pressure and temperature. It should also be resistant to the fluid contained in the vessel. Common materials used in pressure vessel construction include carbon steel, stainless steel, and high-strength alloys. The material's mechanical properties, such as its yield strength and Young's modulus, are important considerations in the design process. ##### Safety Standards and Codes Pressure vessels must comply with various safety standards and codes, which vary by region. These codes specify the design, fabrication, inspection, testing, and certification requirements for pressure vessels. For instance, in North America, the ASME Boiler and Pressure Vessel Code is widely used. In the European Union, the Pressure Equipment Directive (PED) is followed. Other international standards include the Japanese Industrial Standard (JIS), CSA B51 in Canada, Australian Standards in Australia, and others like Lloyd's, Germanischer Lloyd, Det Norske Veritas, Société Générale de Surveillance (SGS S.A.), Lloyd’s Register Energy Nederland, etc. ##### Stress Analysis Stress analysis is a critical part of pressure vessel design. The vessel must be designed to withstand various stresses, including hoop stress, axial stress, and breaking stress. These stresses are related to the loads of the design, such as the internal pressure and strain. In the case of fibre-reinforced plastic tanks and vessels, an anisotropic analysis may be required due to the filament winding process. ##### Testing Pressure vessels must undergo rigorous testing to ensure their safety and performance. This includes hydrostatic testing, where the vessel is filled with a liquid and pressurized to check for leaks and to verify that it can withstand the design pressure. Other tests may include ultrasonic testing to detect internal and surface defects, and radiographic testing to inspect welds. 
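The stress analysis described above can be sketched numerically for a thin-walled cylindrical vessel, where the hoop stress is $\sigma_{\theta} = Pr/t$ and the axial stress is $\sigma_{a} = Pr/(2t)$. The pressure, geometry, and yield strength below are made-up illustrative values:

```python
# A minimal stress check for a thin-walled cylindrical vessel
# (all numbers are illustrative assumptions).

P = 2e6                  # Pa, internal pressure
r = 0.4                  # m, inner radius
t = 0.012                # m, wall thickness
yield_strength = 250e6   # Pa, assumed material yield strength

sigma_hoop = P * r / t          # hoop stress, ~66.7 MPa
sigma_axial = P * r / (2 * t)   # axial stress, half the hoop stress
safety_factor = yield_strength / sigma_hoop

print(f"hoop = {sigma_hoop/1e6:.1f} MPa, axial = {sigma_axial/1e6:.1f} MPa")
print(f"safety factor against yield: {safety_factor:.2f}")
```

Because the hoop stress is always twice the axial stress in a thin-walled cylinder, checking the hoop stress against the allowable value governs the design.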
In conclusion, the design of pressure vessels is a complex process that requires a deep understanding of the mechanical behavior of materials, the operating conditions, and the safety standards that the vessel must meet. By carefully considering these factors, engineers can design pressure vessels that are safe, efficient, and reliable. ### Conclusion In this chapter, we have delved into the fascinating world of pressure vessels, exploring their mechanical behavior and the factors that influence their performance. We have learned that pressure vessels are crucial components in various industries, including the oil and gas, chemical, and power generation sectors, among others. Their design and operation are governed by a complex interplay of materials science, mechanical engineering, and thermodynamics. We have examined the fundamental principles that underpin the behavior of pressure vessels, including the concepts of stress, strain, and deformation. We have also discussed the importance of understanding the material properties of the vessel, such as its strength, ductility, and toughness, which can significantly impact its ability to withstand high pressures and temperatures. Moreover, we have explored the various types of pressure vessels, each with its unique design features and operational characteristics. We have also looked at the different failure modes of pressure vessels, including brittle fracture, fatigue, and creep, and the measures that can be taken to prevent these failures. In conclusion, the mechanical behavior of pressure vessels is a complex and multifaceted subject that requires a deep understanding of materials science and engineering principles. By mastering these concepts, engineers can design and operate pressure vessels that are safe, efficient, and reliable. 
### Exercises #### Exercise 1 Calculate the hoop stress and longitudinal stress in a thin-walled pressure vessel with an internal pressure of 10 MPa, a wall thickness of 10 mm, and a radius of 500 mm. #### Exercise 2 Describe the difference between brittle fracture and ductile fracture in pressure vessels. How do these failure modes relate to the material properties of the vessel? #### Exercise 3 Explain the concept of creep in pressure vessels. How does it occur, and what measures can be taken to prevent it? #### Exercise 4 A pressure vessel is made of a material with a yield strength of 250 MPa. If the vessel is subjected to an internal pressure of 15 MPa, will it undergo plastic deformation? Justify your answer. #### Exercise 5 Design a simple pressure vessel using the principles discussed in this chapter. Specify the material, dimensions, and operational parameters of the vessel, and explain your choices. ## Chapter: Stress Transformations ### Introduction In the realm of materials science and engineering, understanding the mechanical behavior of materials is of paramount importance. This chapter, "Stress Transformations", delves into one of the most crucial aspects of this field. Stress transformations are fundamental to the study of materials under different loading conditions. They provide a mathematical framework to analyze and predict how materials respond to various external forces. This chapter will introduce the concept of stress transformations, explaining why they are necessary and how they are used in the field of materials science. The concept of stress transformations is rooted in the principles of mechanics and mathematics. It involves the transformation of stress components from one coordinate system to another, often to simplify the analysis or to examine the material behavior under different orientations of the stress tensor. 
This is typically achieved through the use of transformation equations and matrices, which will be discussed in detail in this chapter. The chapter will also cover the principal stresses and the maximum shear stress, which are key concepts in stress transformations. These are the stresses that occur along certain orientations, and they provide valuable insights into the material's behavior under stress. Understanding these concepts is crucial for predicting failure modes, designing materials, and optimizing their performance. In addition, the chapter will explore the graphical representation of stress transformations, known as Mohr's Circle. This powerful tool provides a visual means to understand and perform stress transformations, and it is widely used in both academia and industry. In summary, this chapter aims to provide a comprehensive understanding of stress transformations, equipping readers with the knowledge and tools to analyze the mechanical behavior of materials under various stress conditions. Whether you are a student, a researcher, or a practicing engineer, this chapter will serve as a valuable resource in your study or work. Remember, the beauty of materials science lies not just in understanding the materials themselves, but also in understanding how they behave under different conditions. And stress transformations are a key piece of that puzzle. ### Section: 4.1 Stress transformations: #### 4.1a Introduction to stress transformations Stress transformations are a fundamental concept in the study of the mechanical behavior of materials. They provide a mathematical framework that allows us to analyze and predict how materials respond to various external forces. This section will introduce the concept of stress transformations, explaining why they are necessary and how they are used in the field of materials science. The concept of stress transformations is rooted in the principles of mechanics and mathematics. 
It involves the transformation of stress components from one coordinate system to another, often to simplify the analysis or to examine the material behavior under different orientations of the stress tensor. This is typically achieved through the use of transformation equations and matrices. #### 4.1b Transformation Equations and Matrices The transformation of stress components from one coordinate system to another is achieved through the use of transformation equations. These equations are derived from the equilibrium conditions and geometric compatibility conditions of the material. The transformation equations are typically represented in matrix form, which simplifies the calculations and allows for easy manipulation of the stress components. The transformation matrix, often denoted as $[T]$, is a square matrix that contains the direction cosines of the new coordinate system with respect to the old one. The transformed stress tensor, denoted as $[\sigma']$, can be obtained by pre-multiplying the original stress tensor, $[\sigma]$, with the transformation matrix and its transpose: $$ [\sigma'] = [T][\sigma][T]^T $$ #### 4.1c Principal Stresses and Maximum Shear Stress Principal stresses are the maximum and minimum normal stresses that occur at a point in a material. They are important because they provide valuable insights into the material's behavior under stress. The principal stresses are found by solving the characteristic equation of the stress tensor, which is a cubic equation in the normal stress. The roots of this equation give the principal stresses. The maximum shear stress is another key concept in stress transformations. It is the maximum value of shear stress that occurs at a point in a material. The maximum shear stress can be found from the principal stresses using the following equation: $$ \tau_{max} = \frac{\sigma_{max} - \sigma_{min}}{2} $$ where $\sigma_{max}$ and $\sigma_{min}$ are the maximum and minimum principal stresses, respectively. 
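In practice, the principal stresses can be computed as the eigenvalues of the symmetric stress matrix, and they are invariant under the transformation $[\sigma'] = [T][\sigma][T]^T$. The following sketch, using arbitrary illustrative stress values, checks both facts with NumPy:

```python
import numpy as np

# Illustrative 2D stress state (MPa); the numbers are arbitrary.
sigma = np.array([[80.0, 30.0],
                  [30.0, 20.0]])

# Principal stresses = eigenvalues of the symmetric stress matrix.
principal = np.linalg.eigvalsh(sigma)[::-1]    # sorted descending
tau_max = (principal[0] - principal[-1]) / 2   # maximum shear stress

# Rotating the coordinate system leaves the principal stresses unchanged.
theta = np.deg2rad(30.0)
c, s = np.cos(theta), np.sin(theta)
T = np.array([[c, s], [-s, c]])                # transformation (rotation) matrix
sigma_rot = T @ sigma @ T.T

print("principal stresses (MPa):", principal)
print("max shear stress (MPa):", tau_max)
print("invariant under rotation:",
      np.allclose(np.linalg.eigvalsh(sigma_rot), np.linalg.eigvalsh(sigma)))
```

For this state the principal stresses come out to about 92.4 MPa and 7.6 MPa, giving a maximum shear stress of about 42.4 MPa, in agreement with the closed-form expressions above.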
#### 4.1d Mohr's Circle Mohr's Circle is a graphical representation of stress transformations. It is a powerful tool that provides a visual means to understand and perform stress transformations. The circle is constructed using the normal and shear stresses on the axes, and the principal stresses and maximum shear stress can be found directly from the circle. In summary, this section aims to provide a comprehensive understanding of stress transformations, equipping readers with the knowledge and tools to analyze the mechanical behavior of materials under various stress conditions. Whether you are a student, a researcher, or a practicing engineer, understanding stress transformations is crucial for predicting failure modes, designing materials, and optimizing their performance. #### 4.1e Stress Transformations in 2D In two-dimensional stress transformations, we consider the stresses acting on a plane. The stress state is represented by a 2x2 stress matrix, which includes the normal stresses ($\sigma_{xx}$ and $\sigma_{zz}$) and the shear stress ($\tau_{xz}$). The 2D stress matrix is given by: $$ \sigma=\left[\begin{matrix}\sigma_{xx}&\tau_{xz}\\\tau_{zx}&\sigma_{zz}\\\end{matrix}\right] $$ The transformation of this stress matrix from one coordinate system to another is achieved through the use of transformation equations and matrices, similar to the process described in the previous sections. The transformed stress matrix, denoted as $[\sigma']$, can be obtained by pre-multiplying the original stress matrix, $[\sigma]$, with the transformation matrix and its transpose: $$ [\sigma'] = [T][\sigma][T]^T $$ In the context of plane stress, the hydrostatic stress, $\sigma_{hydrostatic}$, is defined as the average of the normal stresses: $$ \sigma_{hydrostatic}=p_{mean}=\frac{\sigma_{xx}+\sigma_{zz}}{2} $$ The stress matrix can be separated into distortional and volumetric parts.
The distortional part represents the shape-changing component of the stress, while the volumetric part represents the size-changing component. This separation is useful in the analysis of material behavior under different stress conditions. The total stress state can then be written as the sum of these two parts: $$ \left[\begin{matrix}\sigma_{xx}-\sigma_{hydrostatic}&\tau_{xz}\\\tau_{zx}&\sigma_{zz}-\sigma_{hydrostatic}\\\end{matrix}\right]+\left[\begin{matrix}\sigma_{hydrostatic}&0\\0&\sigma_{hydrostatic}\\\end{matrix}\right] $$ The same decomposition, together with the concepts of principal stresses and maximum shear stress, carries over to three dimensions, as discussed next. #### 4.1f Stress Transformations in 3D In three-dimensional stress transformations, we consider the stresses acting in a three-dimensional space. The stress state is represented by a 3x3 stress matrix, which includes the normal stresses ($\sigma_{xx}$, $\sigma_{yy}$, $\sigma_{zz}$) and the shear stresses ($\tau_{xy}$, $\tau_{xz}$, $\tau_{yz}$). The 3D stress matrix is given by: $$ \sigma=\left[\begin{matrix}\sigma_{xx}&\tau_{xy}&\tau_{xz}\\\tau_{yx}&\sigma_{yy}&\tau_{yz}\\\tau_{zx}&\tau_{zy}&\sigma_{zz}\\\end{matrix}\right] $$ Similar to the 2D case, the transformation of this stress matrix from one coordinate system to another is achieved through the use of transformation equations and matrices. The transformed stress matrix, denoted as $[\sigma']$, can be obtained by pre-multiplying the original stress matrix, $[\sigma]$, with the transformation matrix and its transpose: $$ [\sigma'] = [T][\sigma][T]^T $$ In the context of three-dimensional stress, the hydrostatic stress, $\sigma_{hydrostatic}$, is defined as the average of the normal stresses: $$ \sigma_{hydrostatic}=p_{mean}=\frac{\sigma_{xx}+\sigma_{yy}+\sigma_{zz}}{3} $$ The stress matrix can be separated into distortional and volumetric parts.
The distortional part represents the shape-changing component of the stress, while the volumetric part represents the size-changing component. This separation is useful in the analysis of material behavior under different stress conditions. The total stress state can then be written as the sum of these two parts: $$ \left[\begin{matrix}\sigma_{xx}-\sigma_{hydrostatic}&\tau_{xy}&\tau_{xz}\\\tau_{yx}&\sigma_{yy}-\sigma_{hydrostatic}&\tau_{yz}\\\tau_{zx}&\tau_{zy}&\sigma_{zz}-\sigma_{hydrostatic}\\\end{matrix}\right]+\left[\begin{matrix}\sigma_{hydrostatic}&0&0\\0&\sigma_{hydrostatic}&0\\0&0&\sigma_{hydrostatic}\\\end{matrix}\right] $$ The concepts of principal stresses and maximum shear stress extend naturally to this three-dimensional setting, and Mohr's circle can likewise be generalized to represent a general three-dimensional state of stress graphically. ### Conclusion In this chapter, we have delved into the concept of stress transformations, a fundamental aspect of understanding the mechanical behavior of materials. We have explored how stress, a measure of the internal forces within a material, can be transformed and represented in different coordinate systems. This is crucial in analyzing the response of materials under different loading conditions and orientations. We have also discussed the mathematical principles behind stress transformations, including the use of Mohr's Circle and transformation equations. These tools allow us to visualize and calculate the principal stresses and the maximum shear stress in a material, providing valuable insights into the material's behavior under stress. In essence, stress transformations provide a more comprehensive understanding of how materials behave under different types of stress.
This knowledge is invaluable in the fields of material science, mechanical engineering, and structural engineering, where the mechanical behavior of materials under stress is of utmost importance. ### Exercises #### Exercise 1 Given a state of stress with $\sigma_x = 10$ MPa, $\sigma_y = 20$ MPa, and $\tau_{xy} = 5$ MPa, calculate the principal stresses using the transformation equations. #### Exercise 2 Using Mohr's Circle, determine the maximum shear stress for the state of stress given in Exercise 1. #### Exercise 3 Explain the significance of principal stresses in the context of material failure. #### Exercise 4 Given a state of stress with $\sigma_x = 15$ MPa, $\sigma_y = 25$ MPa, and $\tau_{xy} = 10$ MPa, calculate the principal stresses and the maximum shear stress. Compare these values with those obtained in Exercise 1. #### Exercise 5 Discuss the role of stress transformations in the design of engineering structures. Provide examples where stress transformations would be crucial in determining the suitability of a material for a particular application. ## Chapter 5: Elasticity ### Introduction Elasticity is a fundamental concept in the study of materials and their mechanical behavior. It refers to the ability of a material to return to its original shape after being deformed by an external force. This chapter will delve into the principles of elasticity, providing a comprehensive understanding of this essential property. We will begin by exploring the basic concept of elasticity, its definition, and its importance in the field of materials science. We will then proceed to discuss the different types of elastic behavior exhibited by materials, such as linear and non-linear elasticity, and the factors that influence these behaviors. 
The chapter will also cover the mathematical models used to describe elastic behavior, including Hooke's Law, which states that the strain in a material is directly proportional to the applied stress, expressed mathematically as `$\sigma = E \epsilon$`, where `$\sigma$` is the stress, `$E$` is the modulus of elasticity, and `$\epsilon$` is the strain. We will also delve into the concept of elastic constants and their significance in predicting the behavior of materials under different loading conditions. These constants, including the Young's modulus, shear modulus, and bulk modulus, provide valuable insights into the material's resistance to deformation. Finally, we will discuss the practical applications of elasticity in various fields, such as civil engineering, mechanical engineering, and materials science. Understanding the elastic behavior of materials is crucial in these fields as it helps in the design and selection of materials for various applications. In summary, this chapter aims to provide a comprehensive understanding of the concept of elasticity, its mathematical representation, and its practical applications. By the end of this chapter, you should have a solid foundation in understanding the elastic behavior of materials and its significance in the field of materials science. ### Section: 5.1 Continuum linear elasticity: Continuum linear elasticity is a branch of elasticity that deals with the deformation of a continuous, or continuum, material body. This approach assumes that the material is homogeneous and isotropic, meaning that its properties are the same in all directions and at every point in the material. #### 5.1a Hooke's law Hooke's law is a fundamental principle in the field of continuum linear elasticity. Named after the 17th-century British physicist Robert Hooke, it states that the force (`F`) needed to extend or compress a spring by some distance (`x`) scales linearly with respect to that distance. 
Mathematically, this relationship is expressed as: $$ F = kx $$ where `k` is a constant factor characteristic of the spring (i.e., its stiffness), and `x` is small compared to the total possible deformation of the spring. Hooke's law is not only applicable to springs but also holds true in many other situations where an elastic body is deformed, such as wind blowing on a tall building, or a musician plucking a string of a guitar. An elastic body or material for which this equation can be assumed is said to be linear-elastic or Hookean. However, it's important to note that Hooke's law is only a first-order linear approximation to the real response of springs and other elastic bodies to applied forces. It must eventually fail once the forces exceed some limit, since no material can be compressed beyond a certain minimum size, or stretched beyond a maximum size, without some permanent deformation or change of state. Many materials will noticeably deviate from Hooke's law well before those elastic limits are reached. Despite its limitations, Hooke's law is an accurate approximation for most solid bodies, as long as the forces and deformations are small enough. For this reason, Hooke's law is extensively used in all branches of science and engineering, and is the foundation of many disciplines such as seismology, molecular mechanics, and acoustics. It is also the fundamental principle behind the spring scale, the manometer, the galvanometer, and the balance wheel of the mechanical clock. In the context of continuum linear elasticity, Hooke's law can be generalized to three dimensions. In its simplest, uniaxial form, the stress (`σ`) in a material is directly proportional to the applied strain (`ε`), expressed mathematically as: $$ \sigma = E \epsilon $$ where `E` is the modulus of elasticity, a measure of the stiffness of the material. The full generalization of this relationship is tensorial, relating the stress and strain tensors through a fourth-order stiffness tensor, $\sigma_{ij} = C_{ijkl}\epsilon_{kl}$, and it forms the basis of the theory of linear elasticity.
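Both the spring form $F = kx$ and the uniaxial form $\sigma = E\epsilon$ can be illustrated with a tiny calculation. The stiffness, strain, and modulus values below are assumed for the example, with $E$ chosen to be typical of steel:

```python
# Hooke's law in two guises (all numbers are illustrative assumptions):
# a linear spring F = k*x, and the uniaxial stress-strain form sigma = E*eps.

k = 500.0            # N/m, assumed spring stiffness
x = 0.02             # m, extension
F = k * x            # restoring-force magnitude, 10 N

E = 200e9            # Pa, assumed Young's modulus (typical of steel)
strain = 0.001       # dimensionless, within the elastic range
stress = E * strain  # 200 MPa, well below a typical steel yield stress

print(f"spring force: {F:.1f} N")
print(f"uniaxial stress: {stress/1e6:.0f} MPa")
```

The same proportionality constant idea appears in both cases; the modulus `E` plays the role of a stiffness per unit geometry, which is why it is a material property rather than a property of any particular specimen.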
In the following sections, we will delve deeper into the mathematical representation of Hooke's law and its implications in understanding the mechanical behavior of materials.

#### 5.1b Elastic modulus

The elastic modulus, also known as the modulus of elasticity, is a measure of a material's stiffness, or resistance to elastic deformation. If a material has a high elastic modulus, it is very stiff and resists deformation under an applied load. Conversely, a material with a low elastic modulus is more flexible and will deform more easily under the same load.

The elastic modulus is a fundamental property of materials and is used in a wide range of engineering applications. It is a key parameter in the equations of elasticity, which describe how a material deforms under stress. The elastic modulus is typically denoted by the symbol $E$ and is defined as the ratio of stress ($\sigma$) to strain ($\epsilon$) in the linear elastic region of a material's stress-strain curve:

$$
E = \frac{\sigma}{\epsilon}
$$

This equation is essentially a restatement of Hooke's law, which we discussed in the previous section. It tells us that the stress experienced by a material is directly proportional to the strain it undergoes, with the constant of proportionality being the elastic modulus.

The units of elastic modulus are pressure units, typically pascals (Pa) in the International System of Units (SI). However, it can also be expressed in other units of pressure, such as pounds per square inch (psi) or gigapascals (GPa).

It is important to note that the elastic modulus is a material property, meaning it is inherent to the material itself and does not depend on the size or shape of the object made from the material. However, the elastic modulus can be affected by factors such as temperature, pressure, and the presence of defects in the material.

Different materials have different elastic moduli.
For example, rubber has a low elastic modulus, meaning it is very flexible and can be easily stretched or compressed. On the other hand, steel has a high elastic modulus, meaning it is very stiff and resists deformation.

In the next section, we will discuss different types of elastic moduli, including Young's modulus, shear modulus, and bulk modulus, and how they are used to describe the elastic behavior of materials under different types of loading conditions.

#### 5.1c Poisson's ratio

Poisson's ratio, denoted by the Greek letter $\nu$ (nu), is another fundamental property of materials that describes the elastic behavior under deformation. Specifically, it measures the ratio of the lateral strain to the axial strain in a material subjected to axial stress. Mathematically, Poisson's ratio is defined as:

$$
\nu = -\frac{\epsilon_{\text{lateral}}}{\epsilon_{\text{axial}}}
$$

where $\epsilon_{\text{lateral}}$ is the lateral or transverse strain (the change in width or breadth per unit original width or breadth) and $\epsilon_{\text{axial}}$ is the axial or longitudinal strain (the change in length per unit original length).

The negative sign in the definition indicates that the lateral strain and the axial strain are typically of opposite sign. That is, when a material is stretched along one axis (causing axial strain), it tends to contract along the perpendicular axes (causing lateral strain), and vice versa for compression.

Poisson's ratio is a dimensionless quantity, meaning it has no units. For stable isotropic materials its value is bounded between $-1$ and $0.5$; for most common materials it lies between $0$ and $0.5$. A positive Poisson's ratio indicates that a material tends to become thinner in the lateral direction when stretched in the axial direction (and vice versa), which is the behavior exhibited by most common materials. A Poisson's ratio of $0.5$ represents an incompressible material, which maintains a constant volume under deformation.
A negative Poisson's ratio, although rare, is possible and indicates that a material expands laterally when stretched axially, a behavior known as auxetic.

Like the elastic modulus, Poisson's ratio is a material property and does not depend on the size or shape of the object made from the material. However, it can be affected by factors such as temperature, pressure, and the presence of defects in the material.

In the next section, we will discuss the concept of shear modulus and its relationship with the elastic modulus and Poisson's ratio.

### Section: 5.2 Linear elasticity:

#### 5.2a Linear elastic behavior in solids

Linear elasticity is a mathematical model that describes how solid objects deform and become internally stressed due to prescribed loading conditions. It is a simplification of the more general nonlinear theory of elasticity and a branch of continuum mechanics.

The fundamental "linearizing" assumptions of linear elasticity are: infinitesimal strains, or "small" deformations, and linear relationships between the components of stress and strain. In addition, linear elasticity is valid only for stress states that do not produce yielding. These assumptions are reasonable for many engineering materials and engineering design scenarios. Linear elasticity is therefore used extensively in structural analysis and engineering design, often with the aid of finite element analysis.

#### Mathematical formulation

The equations governing a linear elastic boundary value problem are based on three tensor partial differential equations for the balance of linear momentum and six infinitesimal strain-displacement relations. The system of differential equations is completed by a set of linear algebraic constitutive relations.
In direct tensor form, independent of the choice of coordinate system, these governing equations are:

$$
\boldsymbol{\sigma} = \mathsf{C}:\boldsymbol{\varepsilon}, \qquad \boldsymbol{\varepsilon} = \frac{1}{2}\left(\nabla \mathbf{u} + (\nabla \mathbf{u})^\mathrm{T}\right), \qquad \rho \ddot{\mathbf{u}} = \nabla \cdot \boldsymbol{\sigma} + \mathbf{F}
$$

where $\boldsymbol{\sigma}$ is the Cauchy stress tensor, $\boldsymbol{\varepsilon}$ is the infinitesimal strain tensor, $\mathbf{u}$ is the displacement vector, $\mathsf{C}$ is the fourth-order stiffness tensor, $\mathbf{F}$ is the body force per unit volume, $\rho$ is the mass density, $\nabla$ represents the nabla operator, $(\bullet)^\mathrm{T}$ represents a transpose, $\ddot{(\bullet)}$ represents the second derivative with respect to time, and $\mathsf{A}:\mathsf{B} = A_{ij}B_{ij}$ is the inner product of two second-order tensors (summation over repeated indices).

In the next section, we will delve deeper into the mathematical formulation of linear elasticity, exploring the implications of these equations and how they can be applied to solve practical problems in materials science and engineering.

#### 5.2b Linear elastic behavior in fluids

Linear elasticity in fluids, also known as fluid elasticity, describes the behavior of fluids under stress. This is particularly relevant in the study of viscoelastic fluids, which exhibit both viscous and elastic characteristics. Unlike solids, fluids do not retain a fixed shape and can flow under applied stress. However, viscoelastic fluids can resist deformation and exhibit elastic recovery, similar to solids.

The viscous contribution to this behavior is governed by the Navier-Stokes equations, a set of nonlinear partial differential equations that describe the motion of viscous fluids; the elastic contribution requires an additional constitutive model, introduced below. The Navier-Stokes equations are based on the principles of conservation of mass, conservation of linear momentum, and conservation of energy.
The Navier-Stokes equations for an incompressible, isotropic, and homogeneous fluid are given by:

$$
\rho \left( \frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v} \right) = -\nabla p + \mu \nabla^2 \mathbf{v} + \mathbf{F}
$$

$$
\nabla \cdot \mathbf{v} = 0
$$

where $\rho$ is the fluid density, $\mathbf{v}$ is the fluid velocity vector, $p$ is the pressure, $\mu$ is the dynamic viscosity, and $\mathbf{F}$ is the body force per unit volume.

In the context of viscoelastic fluids, the stress tensor $\boldsymbol{\sigma}$ depends not only on the current rate of deformation but also on the deformation history. This behavior can be described by the Oldroyd-B model, a constitutive equation for viscoelastic fluids, which in linearized form reads:

$$
\boldsymbol{\sigma} + \lambda_1 \frac{\partial \boldsymbol{\sigma}}{\partial t} = 2\eta_0 \left( \boldsymbol{\varepsilon} + \lambda_2 \frac{\partial \boldsymbol{\varepsilon}}{\partial t} \right)
$$

where $\boldsymbol{\sigma}$ is the stress tensor, $\boldsymbol{\varepsilon}$ is the rate-of-strain tensor, $\eta_0$ is the zero-shear viscosity, $\lambda_1$ is the relaxation time, and $\lambda_2$ is the retardation time. (In the full Oldroyd-B model, the partial time derivatives are replaced by frame-invariant upper-convected derivatives; the linearized form shown here is also known as the Jeffreys model.)

The linear elastic behavior of fluids is a complex topic due to the time-dependent nature of fluid deformation and the interplay between viscous and elastic effects. Understanding this behavior is crucial in many engineering applications, including the design and analysis of systems involving viscoelastic fluids.

### Conclusion

In this chapter, we have explored the concept of elasticity, a fundamental property of materials that describes their ability to return to their original shape after being deformed. We have delved into the mathematical models that describe this behavior, including Hooke's Law, which states that the strain in a material is proportional to the applied stress within the elastic limit. We have also discussed the elastic modulus, a measure of a material's resistance to elastic deformation under load.
Different materials have different elastic moduli, which can be determined experimentally. The elastic modulus is a critical parameter in the design and analysis of mechanical systems, as it helps engineers predict how materials will behave under different loading conditions.

In addition, we have examined the concepts of shear modulus and bulk modulus, which describe how a material deforms under shear stress and volumetric stress, respectively. These properties are also crucial in understanding and predicting the mechanical behavior of materials.

In conclusion, the study of elasticity provides a foundation for understanding the mechanical behavior of materials. It allows us to predict how materials will respond to different types of stress, which is essential in the design and analysis of mechanical systems.

### Exercises

#### Exercise 1
Calculate the elastic modulus of a material if a stress of 200 MPa causes a strain of 0.002. Use the formula $E = \frac{\sigma}{\epsilon}$, where $E$ is the elastic modulus, $\sigma$ is the stress, and $\epsilon$ is the strain.

#### Exercise 2
A material has a shear modulus of 80 GPa. If a shear stress of 40 MPa is applied, what is the shear strain? Use the formula $\gamma = \frac{\tau}{G}$, where $\gamma$ is the shear strain, $\tau$ is the shear stress, and $G$ is the shear modulus.

#### Exercise 3
If a material has a bulk modulus of 140 GPa and is subjected to a volumetric stress of 70 MPa, what is the volumetric strain? Use the formula $\epsilon_v = \frac{\sigma_v}{K}$, where $\epsilon_v$ is the volumetric strain, $\sigma_v$ is the volumetric stress, and $K$ is the bulk modulus.

#### Exercise 4
Discuss the difference between the elastic modulus, shear modulus, and bulk modulus. How do these properties influence the mechanical behavior of materials?

#### Exercise 5
Explain why the study of elasticity is important in the design and analysis of mechanical systems. Provide examples to support your explanation.
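For Exercises 1-3 above, the arithmetic can be sanity-checked with a few lines of Python (a sketch of the unit handling only; working the problems by hand is still the point):

```python
# Exercise 1: elastic modulus from stress and strain (values in Pa)
E = 200e6 / 0.002        # 1.0e11 Pa = 100 GPa

# Exercise 2: shear strain from shear stress and shear modulus
gamma = 40e6 / 80e9      # 5.0e-4 (dimensionless)

# Exercise 3: volumetric strain from volumetric stress and bulk modulus
eps_v = 70e6 / 140e9     # 5.0e-4 (dimensionless)
```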
## Chapter: Superelasticity

### Introduction

Superelasticity, also known as pseudoelasticity, is a unique mechanical behavior exhibited by certain materials, particularly some alloys and ceramics. This chapter will delve into the fascinating world of superelastic materials, exploring their unique properties, the underlying mechanisms that give rise to superelasticity, and their wide-ranging applications in various fields.

Superelastic materials are characterized by their ability to undergo large deformations and then return to their original shape once the stress is removed. This behavior is not due to conventional elastic deformation but is a result of a reversible phase transformation that occurs within the material. The chapter will elucidate this phase transformation, often referred to as a martensitic transformation, in detail.

The chapter will also discuss the factors that influence superelastic behavior, such as temperature, strain rate, and the specific composition of the alloy or ceramic. Understanding these factors is crucial for tailoring superelastic materials to specific applications.

In addition, the chapter will explore the various applications of superelastic materials. Due to their unique properties, these materials have found extensive use in fields as diverse as aerospace, biomedical engineering, and consumer electronics. The chapter will provide an overview of these applications, highlighting the role of superelastic materials in shaping our world.

Finally, the chapter will touch upon the latest research and advancements in the field of superelastic materials. This rapidly evolving field continues to push the boundaries of materials science, and keeping abreast of the latest developments is essential for anyone interested in the mechanical behavior of materials.

In summary, this chapter aims to provide a comprehensive understanding of superelasticity, from the fundamental mechanisms to the practical applications.
Whether you are a student, a researcher, or a professional in the field of materials science, this chapter will serve as a valuable resource in your exploration of superelastic materials.

### Section: 6.1 Superelasticity:

Superelasticity, also known as pseudoelasticity, is a unique mechanical behavior exhibited by certain materials, particularly some alloys and ceramics. This behavior is characterized by the ability of these materials to undergo large deformations and then return to their original shape once the stress is removed. This is not due to conventional elastic deformation but is a result of a reversible phase transformation that occurs within the material.

#### 6.1a Mechanisms of superelastic behavior

The superelastic behavior of materials is primarily due to a reversible phase transformation, often referred to as a martensitic transformation. This transformation is a diffusionless, shear-based process that involves a change in the crystal structure of the material from a high-temperature phase (austenite) to a low-temperature phase (martensite) under stress.

The transformation starts when the material is subjected to stress, causing the austenite phase to transform into the martensite phase. This transformation is accompanied by a change in the shape of the material, leading to deformation. However, unlike conventional plastic deformation, this deformation is fully reversible: when the stress is removed, the martensite phase transforms back into the austenite phase, and the material returns to its original shape.

The superelastic behavior can be represented mathematically using finite strain theory.
If we consider the principal stretches $\lambda_i$, the spectral decompositions of $\mathbf{C}$ and $\mathbf{B}$ can be given by

$$
\mathbf{C} = \sum_{i=1}^3 \lambda_i^2 \mathbf{N}_i \otimes \mathbf{N}_i \qquad \text{and} \qquad \mathbf{B} = \sum_{i=1}^3 \lambda_i^2 \mathbf{n}_i \otimes \mathbf{n}_i
$$

Here, $\mathbf{C}$ and $\mathbf{B}$ are the right and left Cauchy-Green deformation tensors, respectively, and $\mathbf{N}_i$ and $\mathbf{n}_i$ are the principal directions in the reference and current configurations, respectively.

The effect of the deformation gradient $\mathbf{F}$ acting on $\mathbf{N}_i$ is to stretch the vector by $\lambda_i$ and to rotate it to the new orientation $\mathbf{n}_i$, i.e.,

$$
\mathbf{F}~\mathbf{N}_i = \lambda_i~(\mathbf{R}~\mathbf{N}_i) = \lambda_i~\mathbf{n}_i
$$

where $\mathbf{R}$ is the rotation tensor from the polar decomposition $\mathbf{F} = \mathbf{R}\,\mathbf{U}$. This equation represents the deformation of the material during the phase transformation. The superelastic behavior is characterized by the reversibility of this deformation, which corresponds to the reversibility of the phase transformation.

In the next sections, we will delve deeper into the factors that influence superelastic behavior and the applications of superelastic materials.

#### 6.1b Applications of superelastic materials

Superelastic materials, due to their unique mechanical behavior, have found a wide range of applications in various fields. The ability of these materials to undergo large deformations and then return to their original shape once the stress is removed makes them ideal for use in applications where resilience and durability are required.

One of the most common applications of superelastic materials is in the field of biomedical engineering. For instance, Nitinol, a nickel-titanium alloy known for its superelastic properties, is widely used in the manufacturing of medical devices such as stents and orthodontic wires.
The superelastic behavior of Nitinol allows these devices to be deformed during insertion and then return to their pre-deformed shape once in place, providing a close, conforming fit.

In the aerospace industry, superelastic materials are used in the design of morphing structures, which can change their shape in response to external stimuli. The superelastic behavior of these materials allows for the creation of structures that can withstand large deformations without permanent damage, making them ideal for applications where resilience to extreme conditions is required.

In civil engineering, superelastic materials are used in the design of seismic-resistant structures. The ability of these materials to absorb and dissipate energy through their phase transformation makes them well suited to applications where resistance to seismic activity is required.

The mathematical representation of superelastic behavior can be used to predict the performance of these materials in various applications. For instance, finite strain theory can be used to model the deformation behavior of superelastic materials under different loading conditions. This can help in the design of devices and structures that take full advantage of the unique properties of these materials.

In conclusion, the unique mechanical behavior of superelastic materials, characterized by their ability to undergo large deformations and then return to their original shape, has made them invaluable in a wide range of applications. As our understanding of these materials and their behavior continues to grow, so too will their potential applications.

### Conclusion

In this chapter, we have delved into the fascinating world of superelasticity, a unique mechanical behavior exhibited by certain materials. We have explored the fundamental principles that govern this behavior, the specific materials that exhibit superelasticity, and the various applications of superelastic materials in different fields.
We have learned that superelasticity is a reversible, temperature-dependent phenomenon that occurs due to a phase transformation in the material. This transformation allows the material to withstand large strains and return to its original shape once the stress is removed, a property that is highly desirable in many applications.

We have also discussed the different types of materials that exhibit superelasticity, including certain alloys and ceramics. These materials are used in a wide range of applications, from medical devices to aerospace components, due to their unique mechanical properties.

In conclusion, understanding the mechanical behavior of materials, particularly superelasticity, is crucial in the design and development of new materials and technologies. As we continue to push the boundaries of materials science, the study of superelasticity will undoubtedly play a key role in shaping the future of various industries.

### Exercises

#### Exercise 1
Explain the phase transformation process that occurs in a superelastic material when it is subjected to stress.

#### Exercise 2
List and describe three applications of superelastic materials in the medical field.

#### Exercise 3
Compare and contrast the mechanical behavior of a superelastic material with that of a traditional elastic material.

#### Exercise 4
Identify a material that exhibits superelasticity and discuss its composition and the factors that contribute to its superelastic behavior.

#### Exercise 5
Discuss the potential impact of superelastic materials on the future of materials science and technology.

## Chapter: Nonlinear Elasticity

### Introduction

In the realm of materials science, understanding the mechanical behavior of materials is crucial. This chapter, "Nonlinear Elasticity," delves into one of the more complex aspects of this field.
Nonlinear elasticity refers to the behavior of materials when the relationship between stress and strain is not linear, i.e., the deformation of the material is not directly proportional to the applied stress.

In the linear elasticity regime, Hooke's law, which states that the force exerted by a spring is directly proportional to the displacement, is the governing principle. However, when materials are subjected to high levels of stress, they often exhibit nonlinear elastic behavior. This means that the stress-strain relationship deviates from Hooke's law, and the material's response becomes more complex.

Nonlinear elasticity is a critical concept in materials science and engineering because it helps us understand and predict how materials will behave under extreme conditions. This knowledge is particularly important in the design and analysis of structures and materials that are subjected to high levels of stress, such as in aerospace, civil engineering, and materials design applications.

In this chapter, we will explore the mathematical models that describe nonlinear elasticity, including the concepts of strain hardening and softening, and the role of material anisotropy in nonlinear elastic behavior. We will also discuss the experimental methods used to study nonlinear elasticity and the challenges associated with these methods.

Understanding nonlinear elasticity is not a straightforward task, as it involves complex mathematical equations and concepts. However, with the help of this chapter, readers will gain a comprehensive understanding of this important aspect of the mechanical behavior of materials.

### Section: 7.1 Nonlinear elasticity:

Nonlinear elasticity is a branch of elasticity that deals with the behavior of materials under stress conditions where the stress-strain relationship is not linear.
This nonlinearity can be due to several factors, including the inherent properties of the material, the magnitude of the applied stress, and the environmental conditions.

#### Subsection 7.1a Nonlinear stress-strain behavior

The stress-strain relationship in materials is a fundamental concept in materials science. In the linear regime, this relationship is governed by Hooke's law, which states that the stress is directly proportional to the strain. However, under certain conditions, this relationship becomes nonlinear, leading to complex material behavior.

The nonlinear stress-strain behavior can be observed in many materials, especially those subjected to high levels of stress. This behavior is characterized by a deviation from Hooke's law, where the stress is no longer directly proportional to the strain. Instead, the stress-strain curve becomes nonlinear, often exhibiting a sigmoidal shape.

This nonlinear behavior can be attributed to several factors. For instance, the inherent properties of the material, such as its microstructure and crystallography, can influence the stress-strain relationship. Additionally, the magnitude of the applied stress can lead to nonlinearity: when the stress exceeds a certain threshold, the material may undergo plastic deformation, leading to a permanent change in shape.

The environmental conditions, such as temperature and humidity, can also affect the stress-strain relationship. For instance, at high temperatures, materials may exhibit thermal expansion, leading to an increase in strain for a given stress. Similarly, in humid conditions, materials may absorb moisture, leading to changes in their mechanical properties.

Understanding the nonlinear stress-strain behavior is crucial for predicting the mechanical behavior of materials under extreme conditions.
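One common empirical description of a smooth, nonlinear stress-strain curve is a Ramberg-Osgood-type relation. As a minimal sketch (this particular model and the parameter values below are assumptions for illustration, not results from the text):

```python
def ramberg_osgood_strain(sigma, E, sigma_y, n, offset=0.002):
    """Total strain as an elastic part plus a power-law plastic part:
    eps = sigma/E + offset * (sigma/sigma_y)**n."""
    return sigma / E + offset * (sigma / sigma_y) ** n

# Assumed illustrative parameters: E = 200 GPa, sigma_y = 250 MPa, n = 10.
E, sigma_y, n = 200e9, 250e6, 10

eps_low = ramberg_osgood_strain(125e6, E, sigma_y, n)    # nearly linear
eps_yield = ramberg_osgood_strain(250e6, E, sigma_y, n)  # 0.00125 + 0.002
```

At half the yield stress the power-law term is negligible and the response is essentially Hookean; near $\sigma_y$ the curve bends over, illustrating the deviation from linearity discussed above.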
This knowledge can be used to design and analyze structures and materials that are subjected to high levels of stress, such as in aerospace, civil engineering, and materials design applications.

In the following sections, we will delve deeper into the mathematical models that describe nonlinear elasticity, including the concepts of strain hardening and softening, and the role of material anisotropy in nonlinear elastic behavior. We will also discuss the experimental methods used to study nonlinear elasticity and the challenges associated with these methods.

#### Subsection 7.1b Elastic limit and yield point

The elastic limit and yield point are two critical parameters in the study of the mechanical behavior of materials, particularly in the context of nonlinear elasticity.

The elastic limit, also known as the proportionality limit, is the maximum stress that a material can withstand without any permanent deformation. In other words, it is the highest point on the stress-strain curve where the material still obeys Hooke's law. Beyond this point, the material will undergo plastic deformation and will not return to its original shape even after the removal of the applied stress. Mathematically, the elastic limit ($\sigma_{el}$) can be represented as:

$$
\sigma_{el} = E \cdot \epsilon_{el}
$$

where $E$ is the modulus of elasticity and $\epsilon_{el}$ is the strain at the elastic limit.

The yield point, on the other hand, is the point at which a material begins to deform plastically. Beyond this point, the material will continue to deform even with no increase in load. The yield point is typically characterized by a sudden drop in the stress-strain curve, followed by a region of nearly constant stress known as the yield plateau. Assuming the response remains essentially linear up to the yield point, the yield stress ($\sigma_y$) can be written as:

$$
\sigma_y = E \cdot \epsilon_y
$$

where $\epsilon_y$ is the strain at the yield point.

It is important to note that not all materials have a well-defined yield point.
For such materials, the yield stress is typically determined using an offset method, in which a line is drawn parallel to the initial linear portion of the stress-strain curve but offset by a specified amount of strain (usually 0.2%); the intersection of this line with the curve defines the offset yield stress.

Understanding the elastic limit and yield point is crucial for designing materials and structures that can withstand specific loads without undergoing permanent deformation. These parameters also play a key role in determining the safety factor of a design, which is the ratio of the ultimate stress (the maximum stress a material can withstand before failure) to the working stress (the actual stress under normal operating conditions).

### Conclusion

In this chapter, we have delved into the complex and fascinating world of nonlinear elasticity. We have explored the fundamental principles that govern the mechanical behavior of materials under nonlinear elastic deformation. We have also examined the mathematical models that describe this behavior, and the physical phenomena that these models represent.

We have seen that nonlinear elasticity is a crucial concept in understanding the behavior of many materials, especially those that undergo large deformations. It is a field with wide-ranging applications, from the design of engineering structures to the study of biological tissues.

The mathematical models of nonlinear elasticity, while complex, provide a powerful tool for predicting and understanding the behavior of materials. These models, based on the principles of continuum mechanics, allow us to describe the stress-strain relationship in materials under large deformations and to predict their response to applied forces.

In conclusion, the study of nonlinear elasticity is not just an academic exercise. It is a vital tool in the toolbox of any engineer or scientist who works with materials. By understanding the principles of nonlinear elasticity, we can design better, safer, and more efficient structures and materials.
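To make the 0.2% offset construction from Section 7.1b concrete, here is a minimal sketch; the stress-strain data are synthetic (an assumed elastic-perfectly-plastic material), not measurements:

```python
def offset_yield(strains, stresses, E, offset=0.002):
    """Return the first stress at which the sampled curve falls on or
    below the offset line sigma = E * (eps - offset), i.e. the 0.2%
    offset yield stress on a discretely sampled stress-strain curve."""
    for eps, sig in zip(strains, stresses):
        if sig <= E * (eps - offset):
            return sig
    return None  # curve never crosses the offset line in the sampled range

# Synthetic elastic-perfectly-plastic data (assumed for illustration):
E = 200e9                                   # 200 GPa modulus
flow_stress = 250e6                         # 250 MPa plateau
strains = [i * 1e-5 for i in range(500)]    # 0 to 0.5% strain
stresses = [min(E * e, flow_stress) for e in strains]

sigma_y = offset_yield(strains, stresses, E)  # ~250 MPa here
```

For a perfectly plastic plateau, the offset construction simply recovers the plateau stress; for a strain-hardening material, it would instead return a point on the rising portion of the curve.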
### Exercises

#### Exercise 1
Derive the stress-strain relationship for a hyperelastic material under large deformations. Assume the material is isotropic and incompressible.

#### Exercise 2
Consider a rubber band being stretched. Describe the mechanical behavior of the rubber band in terms of nonlinear elasticity. What are the key factors that influence this behavior?

#### Exercise 3
Explain the concept of the strain energy density function in the context of nonlinear elasticity. How is it used in the mathematical modeling of material behavior?

#### Exercise 4
Consider a material that exhibits nonlinear elastic behavior. How would you experimentally determine the parameters of the mathematical model that describes its behavior?

#### Exercise 5
Discuss the applications of nonlinear elasticity in the design of engineering structures. How does an understanding of nonlinear elasticity contribute to the design of safer and more efficient structures?

## Chapter 8: Viscoelasticity

### Introduction

In the fascinating world of materials science, understanding the mechanical behavior of materials is crucial. This chapter delves into the concept of viscoelasticity, a unique property that combines the characteristics of both viscous and elastic materials.

Viscoelasticity is a complex phenomenon that describes the time-dependent deformation of materials under stress. Unlike purely elastic materials, which immediately return to their original shape after the removal of stress, or purely viscous materials, which flow under stress, viscoelastic materials exhibit a blend of these behaviors. They deform slowly when subjected to a constant force and gradually return to their original shape once the force is removed.

The study of viscoelasticity is essential in various fields, including polymer science, biomechanics, and geophysics, to name a few.
It helps us understand and predict how materials will behave under different conditions, thereby guiding design and manufacturing processes in industry.

In this chapter, we will explore the fundamental principles of viscoelasticity, starting with the basic definitions and models. We will then delve into the mathematical descriptions of viscoelastic behavior, using the tools of calculus and differential equations; expressions such as $\sigma(t)$ for stress and $\epsilon(t)$ for strain will be used throughout this chapter. We will also discuss the factors that influence viscoelastic behavior, such as temperature and loading rate.

By the end of this chapter, you should have a comprehensive understanding of viscoelasticity and its implications in the realm of materials science. Remember, the study of viscoelasticity is not just about understanding the behavior of certain materials; it is about harnessing this knowledge to innovate, create, and improve the materials that shape our world. Let's embark on this journey of discovery together.

### Section: 8.1 Viscoelasticity:

#### 8.1a Viscoelastic behavior in polymers

Polymers are a class of materials that exhibit viscoelastic behavior. This behavior is a result of their molecular structure, which consists of long, flexible chains of atoms. These chains can slide past each other under stress, leading to viscous flow, but they can also stretch and recover, leading to elastic behavior.

The viscoelastic behavior of polymers is often described using models that incorporate both viscous and elastic elements. These models can be represented mathematically, allowing us to predict the behavior of a polymer under different conditions. One of the most common models used to describe the viscoelastic behavior of polymers is the Maxwell model.
This model represents a viscoelastic material as a spring (representing the elastic behavior) and a dashpot (representing the viscous behavior) connected in series. Because the two elements are in series, they carry the same stress and their strains add, which leads to the governing equation for the Maxwell model: $$ \frac{d\epsilon(t)}{dt} = \frac{1}{E} \frac{d\sigma(t)}{dt} + \frac{\sigma(t)}{\eta} $$ where $\sigma(t)$ is the stress, $\epsilon(t)$ is the strain, $E$ is the elastic modulus, $\eta$ is the viscosity, and $t$ is time. Another common model is the Kelvin-Voigt model, which represents a viscoelastic material as a spring and a dashpot connected in parallel. Because the elements are in parallel, they share the same strain and their stresses add, giving the equation for the Kelvin-Voigt model: $$ \sigma(t) = E \epsilon(t) + \eta \frac{d\epsilon(t)}{dt} $$ These models are useful for predicting the behavior of polymers under different conditions. However, they are simplifications and do not capture all the complexities of real-world materials. For example, they do not account for the non-Newtonian behavior of many polymers, which can exhibit properties such as shear thinning or thickening. In the following sections, we will delve deeper into the viscoelastic behavior of polymers, exploring how factors such as temperature and loading rate can influence this behavior. We will also discuss how this knowledge can be applied in the design and manufacturing of polymer-based products. #### 8.1b Time-dependent deformation in viscoelastic materials Viscoelastic materials exhibit time-dependent deformation when subjected to a constant load. This behavior, known as creep, is characterized by an initial rapid deformation, or instantaneous strain, followed by a slower rate of deformation that eventually reaches a steady state.
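To connect the Maxwell model above to observable behavior, here is a minimal numerical sketch (plain Python; the modulus, viscosity, and strain values are illustrative assumptions, not data for a real polymer) of a stress-relaxation test. With the strain held constant, the series spring–dashpot relation reduces to $d\sigma/dt = -(E/\eta)\,\sigma$, so the stress decays exponentially with relaxation time $\tau = \eta/E$:

```python
import math

# Illustrative parameters (assumptions, not a specific material)
E = 1.0e9      # elastic modulus of the spring, Pa
eta = 1.0e10   # viscosity of the dashpot, Pa*s
tau = eta / E  # relaxation time tau = eta / E, s

eps0 = 0.01          # constant applied strain
sigma0 = E * eps0    # instantaneous stress at t = 0

def maxwell_stress(t):
    """Stress in a Maxwell element held at constant strain eps0.

    At fixed strain the series relation gives d(sigma)/dt = -(E/eta)*sigma,
    i.e. exponential relaxation toward zero stress.
    """
    return sigma0 * math.exp(-t / tau)

for t in (0.0, tau, 5 * tau):
    print(f"t = {t:6.1f} s  sigma = {maxwell_stress(t):.3e} Pa")
```

At $t = \tau$ the stress has fallen to $1/e \approx 37\%$ of its initial value — the characteristic signature of a Maxwell material in a relaxation test.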
The creep behavior of viscoelastic materials can be described mathematically using the following equation: $$ \epsilon(t) = \epsilon_0 + \epsilon_\infty \left(1 - e^{-t/\tau}\right) $$ where $\epsilon(t)$ is the strain at time $t$, $\epsilon_0$ is the instantaneous strain, $\epsilon_\infty$ is the additional creep strain approached at long times (so the total steady-state strain is $\epsilon_0 + \epsilon_\infty$), and $\tau$ is the time constant that characterizes the rate of creep. The time-dependent deformation of viscoelastic materials is also influenced by the temperature. As the temperature increases, the rate of creep increases. This is due to the increased mobility of the polymer chains at higher temperatures, which allows them to slide past each other more easily. The time-dependent deformation of viscoelastic materials can also be influenced by the loading rate. A higher loading rate can lead to a higher rate of deformation, as the material does not have enough time to relax and recover its original shape. Understanding the time-dependent deformation of viscoelastic materials is crucial for predicting their behavior under different conditions. This knowledge can be applied in the design and optimization of materials for various applications, such as automotive components, medical devices, and consumer products. In the next section, we will discuss the effects of temperature and loading rate on the viscoelastic behavior of materials in more detail. We will also explore how these factors can be manipulated to optimize the performance of viscoelastic materials in different applications. ### Conclusion In this chapter, we have delved into the fascinating world of viscoelasticity, a property that characterizes materials which exhibit both viscous and elastic characteristics when undergoing deformation. We have explored the fundamental concepts, mathematical models, and practical applications of viscoelasticity. We have learned that viscoelastic materials, unlike purely elastic or purely viscous materials, exhibit time-dependent strain.
This means that the deformation of the material depends not only on the applied stress but also on the rate at which it is applied. We have also discussed the three main models of viscoelasticity: the Maxwell model, the Kelvin-Voigt model, and the Standard Linear Solid Model. Each of these models provides a different perspective on the behavior of viscoelastic materials, and each has its own strengths and limitations. In the realm of practical applications, we have seen how understanding viscoelasticity can help us predict and control the behavior of a wide range of materials, from polymers and biological tissues to metals and ceramics. By understanding the viscoelastic properties of these materials, we can design better products, improve manufacturing processes, and even develop new materials with tailored properties. In conclusion, viscoelasticity is a complex and fascinating field that bridges the gap between solid mechanics and fluid dynamics. It provides a powerful tool for understanding and predicting the behavior of a wide range of materials, and it has important applications in many areas of engineering and science. ### Exercises #### Exercise 1 Explain the difference between elastic, viscous, and viscoelastic materials. Provide examples of each. #### Exercise 2 Describe the Maxwell model of viscoelasticity. What are its strengths and limitations? #### Exercise 3 Describe the Kelvin-Voigt model of viscoelasticity. What are its strengths and limitations? #### Exercise 4 Describe the Standard Linear Solid Model of viscoelasticity. What are its strengths and limitations? #### Exercise 5 Choose a material that exhibits viscoelastic behavior. Describe its properties and discuss how understanding its viscoelastic behavior can help us in its practical applications. ## Chapter: Rubber Elasticity ### Introduction Rubber elasticity, a fascinating aspect of materials science, is the focus of this ninth chapter. 
This chapter will delve into the unique mechanical behavior of rubber and other elastomeric materials, which exhibit significant elastic deformation under stress and can recover their original shape upon the removal of the applied force. Rubber elasticity is a phenomenon that is not only of academic interest but also of immense practical importance. It is the fundamental principle behind the operation of a wide range of products and applications, from automobile tires and conveyor belts to medical devices and protective equipment. Understanding the mechanical behavior of rubber is therefore crucial for engineers and scientists involved in the design and manufacture of these products. In this chapter, we will explore the molecular basis of rubber elasticity, discussing how the random and entropic nature of polymer chains in the unstressed state contributes to the elastic response of rubber. We will also examine the effect of temperature and other environmental factors on rubber elasticity, providing a comprehensive overview of the factors that influence the mechanical behavior of rubber. The mathematical models that describe rubber elasticity will also be a key focus of this chapter. We will introduce and explain the key equations and theories, such as the Gaussian chain model and the neo-Hookean model, that have been developed to predict the elastic behavior of rubber. These models, which are often expressed in terms of strain energy functions, will be presented using the TeX and LaTeX style syntax, for example, `$\sigma = \frac{\partial W}{\partial \lambda}$`, where `$\sigma$` is the stress, `$W$` is the strain energy function, and `$\lambda$` is the stretch ratio. By the end of this chapter, readers should have a solid understanding of the principles and theories of rubber elasticity, and be equipped with the knowledge to apply these concepts in their own work. 
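As a concrete preview of how a strain energy function yields stress, the sketch below applies the relation $\sigma = \frac{\partial W}{\partial \lambda}$ to the neo-Hookean model mentioned above. For an incompressible material in uniaxial extension, the standard neo-Hookean form is $W = \frac{G}{2}\left(\lambda^2 + \frac{2}{\lambda} - 3\right)$, giving $\sigma = G\left(\lambda - \lambda^{-2}\right)$; the shear modulus value used here is an illustrative assumption:

```python
# Neo-Hookean sketch; G is an illustrative shear modulus, not measured data
G = 0.5e6  # shear modulus, Pa

def strain_energy(lam):
    """W(lambda) = (G/2)(lambda^2 + 2/lambda - 3) for an incompressible
    uniaxial stretch (principal stretches lam, 1/sqrt(lam), 1/sqrt(lam))."""
    return 0.5 * G * (lam**2 + 2.0 / lam - 3.0)

def nominal_stress(lam):
    """sigma = dW/dlambda = G * (lambda - lambda**-2)."""
    return G * (lam - lam**-2)

for lam in (1.0, 1.5, 2.0, 3.0):
    print(f"stretch = {lam:3.1f}  stress = {nominal_stress(lam)/1e6:6.3f} MPa")
```

Note that the stress vanishes at $\lambda = 1$ (the undeformed state) and stiffens with increasing stretch, the qualitative behavior expected of rubber at moderate extensions.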
Whether you are a student, a researcher, or a professional in the field of materials science, this chapter on rubber elasticity promises to be an enlightening and informative read. ### Section: 9.1 Rubber elasticity: #### Subsection: 9.1a Molecular origins of rubber elasticity The molecular origins of rubber elasticity can be traced back to the unique structure and behavior of the polymer chains that make up rubber. These chains are composed of repeating units of isoprene, a hydrocarbon molecule with five carbon atoms and eight hydrogen atoms. The chains are linked together to form a three-dimensional network, which gives rubber its characteristic elasticity. The Molecular Kink Paradigm provides a useful framework for understanding the behavior of these chains under strain. According to this paradigm, the chains are constrained within a 'tube' by the surrounding chains. When a strain is applied, elastic forces are produced in a chain and propagated along the chain contour within this tube. The chains explore their possible conformations mainly by rotating about the carbon-carbon single bonds. Sections of chain containing between two and three isoprene units have sufficient flexibility that they may be considered statistically de-correlated from one another. These non-straight regions, referred to as 'kinks', are a manifestation of the random-walk nature of the chain. Over time scales of seconds to minutes, only these relatively short sections of the chain, i.e., kinks, have sufficient volume to move freely amongst their possible rotational conformations. The thermal interactions tend to keep the kinks in a state of constant flux, as they make transitions between all of their possible rotational conformations. Because the kinks are in thermal equilibrium, the probability that a kink resides in any rotational conformation is given by a Boltzmann distribution. 
We can associate an entropy with its end-to-end distance, which can be expressed as: $$ S = k \ln \Omega $$ where `$S$` is the entropy, `$k$` is the Boltzmann constant, and `$\Omega$` is the number of microstates. This entropy is a measure of the randomness or disorder of the kinks. When a rubber material is stretched, the kinks are straightened, and the entropy decreases. This decrease in entropy is what gives rise to the elastic force that resists the deformation and causes the material to return to its original shape when the strain is removed. In the next section, we will delve deeper into the mathematical models that describe this entropic elasticity, starting with the Gaussian chain model. #### Subsection: 9.1b Physical properties of elastomers Elastomers, such as Polydimethylsiloxane (PDMS), exhibit unique mechanical properties that are largely attributed to their viscoelastic nature. This viscoelasticity is a form of nonlinear elasticity that is common amongst noncrystalline polymers, including rubber. At long flow times or high temperatures, PDMS behaves like a viscous liquid, similar to honey. Conversely, at short flow times or low temperatures, it behaves like an elastic solid, akin to rubber. This behavior is due to the long-chain structure of the polymer, which allows it to adapt to different conditions. The stress-strain curve for PDMS does not coincide during loading and unloading. Instead, the amount of stress varies based on the degree of strain, with increasing strain resulting in greater stiffness. When the load is removed, the strain is slowly recovered rather than instantaneously. This time-dependent elastic deformation is a result of the long-chains of the polymer. However, this behavior is only observed when cross-linking is present. Without cross-linking, the polymer PDMS cannot revert back to its original state even when the load is removed, resulting in a permanent deformation. 
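Returning to the entropy relation $S = k \ln \Omega$ introduced at the start of this section, a toy calculation makes the entropic argument concrete. For an idealized one-dimensional random-walk chain of $N$ links, the number of conformations with a given end-to-end displacement is a binomial coefficient, and the entropy falls as the chain is extended. This is only a sketch — real chains are three-dimensional — but the trend is the point:

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def entropy_1d_chain(N, x):
    """Entropy S = kB * ln(Omega) of an N-step 1D random-walk chain whose
    end-to-end displacement is x steps (N + x must be even).

    Omega = C(N, (N + x)/2): the number of ways to choose forward steps.
    """
    n_forward = (N + x) // 2
    omega = math.comb(N, n_forward)
    return kB * math.log(omega)

N = 100
for x in (0, 20, 60):
    print(f"x = {x:3d}  S = {entropy_1d_chain(N, x):.3e} J/K")
```

Stretching the chain (larger $x$) reduces $\Omega$ and hence $S$; the free-energy cost of that entropy loss is the origin of the entropic retractive force described above.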
This is rarely seen in PDMS, as it is almost always cured with a cross-linking agent. The mechanical properties of PDMS allow it to conform to a wide variety of surfaces. If some PDMS is left on a surface overnight (long flow time), it will flow to cover the surface and mold to any surface imperfections. However, if the same PDMS is poured into a spherical mold and allowed to cure (short flow time), it will bounce like a rubber ball. These properties are influenced by a variety of factors, making PDMS relatively easy to tune. This makes PDMS a good substrate that can be easily integrated into a variety of microfluidic and microelectromechanical systems. The mechanical properties can be determined before PDMS is cured, allowing for a wide range of opportunities to achieve a desirable elastomer. In the next section, we will delve deeper into the role of cross-linking in the mechanical behavior of elastomers and how it contributes to their unique properties. ### Conclusion In this chapter, we have delved into the fascinating world of rubber elasticity, a unique mechanical behavior exhibited by elastomeric materials. We have explored the fundamental principles that govern the elastic behavior of rubber and similar materials, and how these principles can be applied in practical scenarios. We have learned that rubber elasticity is primarily due to the entropic effect, which is a consequence of the random-walk configuration of polymer chains. The elasticity of rubber is not due to the stretching of bonds, as is the case with most materials, but rather due to the change in entropy of the polymer chains as they are stretched and then allowed to return to their original, random configuration. We have also discussed the models that describe rubber elasticity, such as the affine and phantom network models. These models provide a mathematical framework for understanding and predicting the behavior of rubber under various conditions. 
In conclusion, understanding rubber elasticity is crucial in many fields, including materials science, mechanical engineering, and product design. It allows us to predict and control the behavior of elastomeric materials, leading to the development of better, more efficient products and systems. ### Exercises #### Exercise 1 Explain the concept of entropy and how it relates to the elasticity of rubber. #### Exercise 2 Compare and contrast the affine and phantom network models. What are the key differences and similarities between these two models? #### Exercise 3 Describe a practical application of rubber elasticity. How does understanding rubber elasticity contribute to the success of this application? #### Exercise 4 Using the affine model, calculate the elastic modulus of a rubber sample given the following parameters: number of chains per unit volume $n$, Boltzmann's constant $k$, and absolute temperature $T$. The formula for the elastic modulus $E$ is given by: $$ E = nkT $$ #### Exercise 5 Discuss the limitations of the models we have studied in this chapter. How might these limitations affect the accuracy of predictions made using these models? ## Chapter: Continuum Plasticity ### Introduction In the realm of materials science, understanding the mechanical behavior of materials is crucial. This chapter, Chapter 10, delves into the concept of Continuum Plasticity, a fundamental aspect of the mechanical behavior of materials. Continuum Plasticity is a branch of continuum mechanics that deals with the permanent deformation of materials when subjected to external forces. It is a critical concept in materials science, as it helps us understand how materials respond to stress and strain, and how they deform and fail under certain conditions. In this chapter, we will explore the mathematical models that describe the plastic behavior of materials. We will delve into the principles of plasticity theory, which is based on the concept of a yield surface.
This surface represents the limit beyond which a material begins to deform plastically, and it is described mathematically by the yield function $f(\sigma)$, where $\sigma$ is the stress tensor. We will also discuss the concept of plastic strain, which is the permanent deformation that remains after the load is removed. This is represented mathematically by the plastic strain tensor $\varepsilon^p$, and its evolution is governed by the flow rule, which is often expressed in terms of the plastic potential $g(\sigma)$. Finally, we will explore the concept of hardening, which describes how a material's yield surface evolves with plastic deformation. This is often modeled using the hardening rule, which describes how the yield stress or the size of the yield surface changes with plastic strain. By the end of this chapter, you should have a comprehensive understanding of the principles of continuum plasticity and how they are used to model the mechanical behavior of materials. This knowledge will be invaluable in your further studies and applications in materials science and engineering. ### Section: 10.1 Continuum plasticity: #### 10.1a Plastic deformation mechanisms Plastic deformation is a permanent change in shape that occurs when a material is subjected to external forces. It is a critical concept in materials science, as it helps us understand how materials respond to stress and strain, and how they deform and fail under certain conditions. There are several mechanisms by which plastic deformation can occur, including dislocation motion, grain boundary sliding, and diffusion. In this section, we will explore these mechanisms in detail. ##### Dislocation Motion Dislocation motion is the primary mechanism of plastic deformation in crystalline materials. A dislocation is a line defect in the crystal structure, and its motion results in the permanent deformation of the material. 
The presence of a high hydrostatic pressure, in combination with large shear strains, is essential for producing high densities of crystal lattice defects, particularly dislocations, which can result in a significant refining of the grains. The rate at which dislocations are generated and annihilated is a key factor in determining the grain size of the material. F.A. Mohamad has proposed a model for the minimum grain size achievable using mechanical milling, based on this concept. While the model was developed specifically for mechanical milling, it has also been successfully applied to other severe plastic deformation (SPD) processes. ##### Grain Boundary Sliding Grain boundary sliding is another mechanism of plastic deformation. It occurs when the grains in a material slide past each other along their boundaries. This mechanism is particularly important in materials with small grain sizes, where the grain boundaries make up a significant fraction of the total volume of the material. The mechanism by which the subgrains rotate is less understood. Wu et al. describe a process in which dislocation motion becomes restricted due to the small subgrain size and grain rotation becomes more energetically favorable. Mishra et al. propose a slightly different explanation, in which the rotation is aided by diffusion along the grain boundaries, which is much faster than through the bulk. ##### Diffusion Diffusion is the process by which atoms move from areas of high concentration to areas of low concentration. In the context of plastic deformation, diffusion can occur along the grain boundaries, aiding in the rotation of the grains. This process is much faster than diffusion through the bulk of the material, making it a significant factor in plastic deformation. In the next section, we will delve into the mathematical models that describe these plastic deformation mechanisms. 
These models will provide a deeper understanding of the principles of continuum plasticity and how they are used to model the mechanical behavior of materials. #### 10.1b Plastic flow and strain hardening Plastic flow is the irreversible deformation of a material under stress. It is a critical aspect of the mechanical behavior of materials, particularly in the context of plasticity. Plastic flow can be visualized as the movement of dislocations through the crystal lattice of a material. This movement is facilitated by the application of an external stress, which causes the dislocations to move and the material to deform. Strain hardening, also known as work hardening, is a phenomenon that occurs when a material becomes stronger and harder as it is plastically deformed. This is due to the increase in dislocation density that occurs during plastic deformation. As the dislocation density increases, the movement of dislocations becomes more difficult, leading to an increase in the material's strength and hardness. ##### Plastic Flow The plastic flow of a material can be described mathematically using the flow rule, which relates the rate of plastic strain to the applied stress. The flow rule can be expressed as: $$ \dot{\epsilon}_{p} = \dot{\lambda} \frac{\partial f}{\partial \sigma} $$ where $\dot{\epsilon}_{p}$ is the rate of plastic strain, $\dot{\lambda}$ is the plastic multiplier, $f$ is the yield function, and $\sigma$ is the stress tensor. The plastic multiplier $\dot{\lambda}$ is a scalar quantity that determines the rate of plastic deformation. It is typically determined using a hardening law, which describes how the yield stress of a material changes with plastic deformation. ##### Strain Hardening Strain hardening can be described using the hardening law, which relates the yield stress of a material to its plastic strain. 
The hardening law can be expressed as: $$ \sigma_{y} = \sigma_{0} + K \epsilon_{p}^{n} $$ where $\sigma_{y}$ is the yield stress, $\sigma_{0}$ is the initial yield stress, $K$ is the hardening constant, $\epsilon_{p}$ is the plastic strain, and $n$ is the hardening exponent. The hardening constant $K$ and the hardening exponent $n$ are material properties that can be determined experimentally. The hardening exponent $n$ typically ranges from 0 to 1, with a value of 0 indicating no strain hardening and a value of 1 indicating perfect strain hardening. In the next section, we will explore the concept of plasticity in more detail, focusing on the mathematical models used to describe the plastic behavior of materials. ### Conclusion In this chapter, we have delved into the complex world of continuum plasticity, exploring the mechanical behavior of materials under various conditions. We have examined the fundamental principles that govern the plastic deformation of materials, and how these principles can be applied to predict and understand the behavior of materials under stress. We have learned that the plastic behavior of materials is not a simple linear response, but rather a complex interplay of various factors such as the material's microstructure, the applied stress, and the temperature. We have also seen how the concept of yield stress plays a crucial role in determining the onset of plastic deformation. Moreover, we have discussed the mathematical models that describe the plastic behavior of materials, and how these models can be used to predict the material's response to different loading conditions. These models, while complex, provide a powerful tool for engineers and scientists to design and analyze materials for various applications. In conclusion, the study of continuum plasticity provides a comprehensive understanding of the mechanical behavior of materials. 
It allows us to predict and control the material's response to external forces, thereby enabling us to design materials with desired properties and performance. ### Exercises #### Exercise 1 Derive the mathematical expression for the yield stress of a material, given its microstructure and the applied stress. #### Exercise 2 Explain the role of temperature in the plastic deformation of materials. How does an increase in temperature affect the yield stress and the plastic behavior of a material? #### Exercise 3 Discuss the limitations of the linear elastic model in describing the plastic behavior of materials. How does the concept of continuum plasticity address these limitations? #### Exercise 4 Given a material's stress-strain curve, determine the onset of plastic deformation. What information can you infer about the material's mechanical behavior from this curve? #### Exercise 5 Describe a practical application where the principles of continuum plasticity are used to design or analyze a material. How does the understanding of the material's plastic behavior contribute to its performance in this application? ## Chapter: Plasticity in Crystals ### Introduction The study of the mechanical behavior of materials is a vast and complex field, and one of the most intriguing aspects of this field is the concept of plasticity in crystals. This chapter, Chapter 11: Plasticity in Crystals, delves into this fascinating topic, exploring the fundamental principles and mechanisms that govern plastic deformation in crystalline materials. Plasticity, in the context of materials science, refers to the permanent deformation that occurs when a material is subjected to external forces beyond its elastic limit. Unlike elastic deformation, which is reversible, plastic deformation results in a permanent change in the shape or size of the material. This behavior is particularly interesting in crystalline materials, which possess a highly ordered, repeating atomic structure. 
In this chapter, we will explore the various factors that influence plasticity in crystals, such as the crystal structure, the type and intensity of the applied stress, and the temperature. We will also delve into the mechanisms of plastic deformation, including dislocation movement and slip, twinning, and phase transformations. We will also discuss the mathematical models used to describe and predict plastic behavior in crystals. These models, which often involve complex equations and calculations, are crucial for understanding and predicting the behavior of materials under various conditions. For instance, we might use the equation `$\sigma = K \epsilon^n$` to describe the stress-strain relationship in a material undergoing plastic deformation, where `$\sigma$` is the stress, `$\epsilon$` is the strain, `$K$` is the strength coefficient, and `$n$` is the strain hardening exponent. By the end of this chapter, you should have a solid understanding of the principles and mechanisms of plasticity in crystals, and be able to apply this knowledge to predict and analyze the behavior of crystalline materials under various conditions. Whether you are a student, a researcher, or a professional in the field of materials science, this chapter will provide you with valuable insights into the fascinating world of plasticity in crystals. ### Section: 11.1 Plasticity in crystals: #### Subsection: 11.1a Slip systems and dislocations in crystals In the context of plastic deformation in crystals, slip systems and dislocations play a crucial role. A slip system is defined by the combination of a slip plane, which is the plane along which dislocation movement occurs, and a slip direction, which is the direction of the dislocation movement. The number and geometric arrangement of slip systems in a crystal significantly influence its plastic behavior. Dislocations, on the other hand, are defects in the crystal structure that play a key role in plastic deformation. 
They can be categorized into two main types: edge dislocations and screw dislocations. Edge dislocations occur when an extra half-plane of atoms is inserted into a crystal structure, while screw dislocations occur when a shear stress causes one part of the crystal to slide past another. The movement of dislocations is the primary mechanism of plastic deformation in crystalline materials. When a stress is applied to a material, dislocations move along the slip planes, causing the material to deform. The ease with which dislocations can move depends on several factors, including the crystal structure, the type and intensity of the applied stress, and the temperature. The mathematical description of dislocation movement is complex and involves several variables. For instance, the velocity of a dislocation, `$v$`, can be described by the equation: $$ v = v_0 e^{-\frac{Q}{kT}} $$ where `$v_0$` is the attempt frequency, `$Q$` is the activation energy for dislocation movement, `$k$` is Boltzmann's constant, and `$T$` is the absolute temperature. This equation shows that the velocity of a dislocation increases with increasing temperature and decreasing activation energy. The strain produced by the movement of a single dislocation, `$\epsilon$`, can be calculated using the equation: $$ \epsilon = \frac{b}{L} $$ where `$b$` is the magnitude of the Burgers vector, which represents the magnitude and direction of the lattice distortion caused by a dislocation, and `$L$` is the length of the dislocation line. This equation shows that the strain produced by a dislocation increases with increasing Burgers vector and decreasing dislocation line length. In the following sections, we will delve deeper into the mechanisms of dislocation movement and slip, and explore how they contribute to the plastic behavior of crystalline materials.
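Before moving on, the thermally activated velocity expression above is worth evaluating numerically. The prefactor $v_0$ and activation energy $Q$ in the sketch below are illustrative assumptions rather than measured values for a particular crystal:

```python
import math

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Illustrative assumptions -- v0 and Q are material-dependent
v0 = 1.0e3  # attempt-frequency prefactor, m/s
Q = 0.5     # activation energy for dislocation motion, eV

def dislocation_velocity(T):
    """Thermally activated dislocation velocity v = v0 * exp(-Q / (kB*T))."""
    return v0 * math.exp(-Q / (k_B * T))

for T in (300.0, 600.0, 900.0):
    print(f"T = {T:5.0f} K  v = {dislocation_velocity(T):.3e} m/s")
```

Doubling the temperature from 300 K to 600 K raises the velocity by several orders of magnitude for these parameters, which is why plastic deformation becomes dramatically easier at elevated temperature.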
#### 11.1b Deformation mechanisms in crystalline materials The deformation of crystalline materials is primarily governed by two mechanisms: dislocation glide or slip, and dislocation creep. As discussed in the previous section, dislocation glide involves the movement of dislocations along the slip planes, leading to plastic deformation. However, this mechanism alone cannot account for large strains due to the effects of strain-hardening. Strain-hardening occurs when a dislocation 'tangle' inhibits the movement of other dislocations, which then pile up behind the blocked ones, causing the crystal to become difficult to deform. This is where the second mechanism, dislocation creep, comes into play. Dislocation creep is a non-linear deformation mechanism in which vacancies in the crystal glide and climb past obstruction sites within the crystal lattice. These migrations within the crystal lattice can occur in one or more directions and are triggered by the effects of increased differential stress. It occurs at lower temperatures relative to diffusion creep. The mechanical process presented in dislocation creep is called slip. The principal directions in which dislocation motion takes place are defined by a combination of slip planes and weak crystallographic orientations resulting from vacancies and imperfections in the atomic structure. Each dislocation causes a part of the crystal to shift by one lattice point along the slip plane, relative to the rest of the crystal. Each crystalline material has different distances between atoms or ions in the crystal lattice, resulting in different lengths of displacement. The vector that characterizes the length and orientation of the displacement is called the Burgers vector.
The development of strong lattice preferred orientation can be interpreted as evidence for dislocation creep as dislocations move only in specific lattice planes. Some form of recovery process, such as dislocation climb or grain-boundary migration must also be active. Slipping of the dislocation results in a more stable state for the crystal as the pre-existing imperfection is removed. In the next section, we will discuss the factors that influence the rate of dislocation creep and how it contributes to the overall plasticity of crystalline materials. ### 11.2 Plasticity in amorphous materials Amorphous materials, unlike their crystalline counterparts, lack a long-range ordered atomic structure. This lack of order results in a unique set of mechanical properties and deformation mechanisms. The primary deformation mechanism in amorphous materials is shear banding, which is a localized deformation that occurs along narrow planes within the material. #### 11.2a Deformation mechanisms in amorphous materials Shear banding in amorphous materials is a result of the material's inability to accommodate strain through dislocation movement, as is the case in crystalline materials. Instead, when an amorphous material is subjected to stress, the atoms within the material rearrange themselves along the plane of maximum shear stress, forming a shear band. This shear band then propagates through the material, leading to plastic deformation. The propagation of shear bands is a highly non-linear process and is influenced by factors such as the applied stress, temperature, and the material's inherent resistance to shear banding. The resistance to shear banding, often referred to as the material's shear modulus, is a measure of the material's ability to resist deformation under shear stress. The shear modulus of an amorphous material is typically lower than that of a crystalline material due to the lack of long-range order in the atomic structure. 
This can result in a lower yield strength and a higher ductility in some amorphous materials compared to crystalline materials. However, the lack of dislocations in amorphous materials also means that they do not exhibit work hardening, which is a common strengthening mechanism in crystalline materials.

In addition to shear banding, other deformation mechanisms such as cavitation and crazing can also occur in amorphous materials. Cavitation involves the formation of voids within the material under tensile stress, while crazing involves the formation of highly deformed, fibrillar regions within the material. Both of these mechanisms can lead to failure in amorphous materials if not properly managed.

In the next section, we will discuss the various methods that can be used to enhance the mechanical properties of amorphous materials, including the introduction of pinning points to inhibit shear band propagation and the use of thermal treatments to increase the material's resistance to shear banding.

#### 11.2b Plastic flow in glasses and polymers

Plastic flow in amorphous materials such as glasses and polymers is a complex process that involves the movement of atoms or molecules under the influence of an applied stress. This movement is facilitated by the lack of long-range order in the atomic or molecular structure of these materials, which allows a greater degree of freedom in the rearrangement of atoms or molecules.

In glasses, plastic flow is often associated with the glass transition, a temperature-dependent process in which the material transitions from a brittle, glassy state to a ductile, rubbery state. Above the glass transition temperature, the material exhibits viscoelastic behavior, which combines the characteristics of viscous flow and elastic deformation. This flow behavior can be described by the power law fluid model, which provides a mathematical description of the flow behavior of the material.
The power law fluid model is given by:

$$
\frac{h_0}{h}=\left( 1+t\left(\frac{2n+3}{4n+2}\right)\left(\frac{(4 h_0 L_0)^{n+1} F (n+2)}{(2 L_0)^{2n+3} W m}\right)^{1/n}\right)^{n/(2n+3)}
$$

Where $m$ (or $K$) is the "flow consistency index" and $n$ is the dimensionless "flow behavior index". The flow consistency index $m$ is given by:

$$
m=m_0\exp\left( \frac{-E_a}{R T} \right)
$$

Where $m_0$ is the "initial flow consistency index", $E_a$ is the activation energy, $R$ is the universal gas constant, and $T$ is the temperature.

In polymers, plastic flow can occur through a variety of mechanisms, including shear banding, cavitation, and crazing. These mechanisms are influenced by factors such as the applied stress, temperature, and the material's inherent resistance to deformation. The Bingham fluid model is often used to describe the flow behavior of polymers, particularly those that exhibit yield-stress behavior.

The understanding of plastic flow in glasses and polymers is crucial for the optimization of processes such as glass recycling and the manufacturing of polymer products. However, the complexity of these materials and their deformation mechanisms presents significant challenges for accurate modeling and prediction of their behavior under different conditions. Further research and development of more accurate models are needed to overcome these challenges.

### Conclusion

In this chapter, we have delved into the concept of plasticity in crystals, a crucial aspect of understanding the mechanical behavior of materials. We have explored the fundamental principles that govern plastic deformation in crystalline materials, including the role of dislocations, the concept of slip, and the impact of crystal structure on plastic behavior.

We have also discussed the influence of temperature and strain rate on the plasticity of crystals. It was shown that these factors can significantly influence the movement of dislocations and, consequently, the plastic deformation of the material.
Moreover, we have examined the effects of plastic deformation on the mechanical properties of materials, such as strength and ductility. We have seen that plastic deformation can lead to work hardening, which increases the strength of the material but decreases its ductility. In summary, the study of plasticity in crystals provides valuable insights into the mechanical behavior of materials. It allows us to predict and control the deformation behavior of materials under different conditions, which is essential for the design and manufacturing of various products. ### Exercises #### Exercise 1 Explain the role of dislocations in the plastic deformation of crystals. How do they move and what factors influence their movement? #### Exercise 2 Describe the concept of slip in crystals. How does it contribute to plastic deformation? #### Exercise 3 Discuss the impact of crystal structure on the plastic behavior of materials. How does the arrangement of atoms in a crystal influence its ability to deform plastically? #### Exercise 4 Explain the effects of temperature and strain rate on the plasticity of crystals. How do these factors influence the movement of dislocations and the plastic deformation of the material? #### Exercise 5 Describe the effects of plastic deformation on the mechanical properties of materials. How does plastic deformation lead to work hardening, and what are its implications for the strength and ductility of the material? ## Chapter: Controlling Plasticity Onset ### Introduction The onset of plasticity in materials is a critical point in their mechanical behavior. It marks the transition from elastic deformation, where the material returns to its original shape after the load is removed, to plastic deformation, where the material undergoes permanent shape change. This chapter, "Controlling Plasticity Onset", delves into the fundamental principles and techniques used to control the onset of plasticity in various materials. 
Understanding and controlling the onset of plasticity is crucial in many engineering applications. For instance, in the design of structures and components, engineers often need to ensure that the materials used do not undergo plastic deformation under the expected loads. This is because plastic deformation can lead to failure of the structure or component. The chapter begins by discussing the concept of plasticity and its onset. It explains the difference between elastic and plastic deformation, and the factors that influence the transition from one to the other. This includes the role of stress and strain, material properties, and the effect of temperature and strain rate. The chapter then moves on to discuss various techniques for controlling the onset of plasticity. These techniques can be broadly categorized into material selection and treatment, and design and loading strategies. Material selection and treatment involves choosing materials with the desired properties and treating them in a way that enhances their resistance to plastic deformation. Design and loading strategies involve designing the structure or component and choosing the loading conditions in a way that minimizes the likelihood of plastic deformation. Throughout the chapter, the discussion is supported by mathematical models and equations. For example, the relationship between stress and strain is often represented by the equation `$\sigma = E \epsilon$`, where `$\sigma$` is the stress, `$E$` is the modulus of elasticity, and `$\epsilon$` is the strain. The onset of plasticity is often associated with the yield stress, `$\sigma_y$`, which is the stress at which the material begins to deform plastically. In conclusion, this chapter provides a comprehensive guide to understanding and controlling the onset of plasticity in materials. 
It combines theoretical principles with practical techniques, making it a valuable resource for students, researchers, and engineers in the field of materials science and engineering.

### Section: 12.1 Controlling plasticity onset:

#### Subsection 12.1a Strengthening mechanisms in materials

The onset of plasticity in materials can be controlled through various strengthening mechanisms. These mechanisms are designed to modify the yield strength, ductility, and toughness of materials, both crystalline and amorphous. By tailoring these mechanical properties, engineers can optimize materials for specific applications.

One such mechanism is the interstitial incorporation of atoms into a material's lattice. For instance, the favorable properties of steel result from the incorporation of carbon atoms into the iron lattice. This process increases the yield strength of the material, thereby delaying the onset of plasticity.

Another strengthening mechanism is solution strengthening, as seen in the case of brass, a binary alloy of copper and zinc. Brass exhibits superior mechanical properties compared to its constituent metals due to the strengthening effect of the zinc atoms in the copper lattice.

Work hardening is a third mechanism used to control the onset of plasticity. This process involves introducing dislocations into materials, which increases their yield strengths. For centuries, blacksmiths have used work hardening to strengthen materials by beating a red-hot piece of metal on an anvil.

##### Dislocation and Plastic Deformation

Plastic deformation occurs when large numbers of dislocations move and multiply, resulting in macroscopic deformation. The movement of dislocations in the material allows for deformation. To enhance a material's mechanical properties, such as increasing the yield and tensile strength, a mechanism that prohibits the mobility of these dislocations is introduced.
The stress required to cause dislocation motion is orders of magnitude lower than the theoretical stress required to shift an entire plane of atoms. This mode of stress relief is energetically favorable. Therefore, the hardness and strength (both yield and tensile) of a material critically depend on the ease with which dislocations move. Pinning points, or locations in the crystal that oppose the motion of dislocations, can be introduced into the lattice to reduce dislocation mobility, thereby increasing mechanical strength. Dislocations may be pinned due to stress field interactions with other dislocations, impurities, or grain boundaries. In the next section, we will delve deeper into the specific techniques used to introduce pinning points into a material's lattice and how these techniques can be used to control the onset of plasticity. #### Subsection 12.1b Control of plasticity through microstructural design The onset of plasticity can also be controlled through the design of the material's microstructure. This approach is particularly relevant in the context of 3D printing, where the microstructure can be precisely controlled during the fabrication process. ##### Microstructures in 3D printing In 3D printing, the microstructure of a material is determined by the design of the 'tile' that is used to build the material. This tile is a small, repeatable unit that is used to construct the larger structure. The properties of the microstructure, and therefore the mechanical behavior of the material, are determined by the connections between the nodes of the tile. ##### Microstructures optimization The optimization of a microstructure involves maximizing the edge space and the vertex space of the tile. This can be achieved by carefully selecting which nodes are connected to each other. However, two key attributes must be met to ensure the printability and tileability of the structure. 
###### Printability Printability refers to the ability to physically print the structure. This can be achieved by ensuring that for every set of connected nodes on one level, there is at least one node which is connected to another node located lower in the structure. Additionally, the edges between the nodes must be sufficiently thick. ###### Tileability Tileability refers to the ability to connect multiple tiles together to form the larger structure. This can be achieved by ensuring that the set of nodes on the surface of the tile is the same on every side. This allows two tiles to be connected together at their respective nodes. ##### Shape optimization The shape of the tile can also be optimized to control the mechanical behavior of the material. This involves controlling the material parameters to produce microstructures that can achieve different behaviors. For example, the curvature variation and negatively curved regions can be minimized to reduce stress concentrations. In conclusion, the onset of plasticity in materials can be controlled through various mechanisms, including the design of the material's microstructure. This approach is particularly relevant in the context of 3D printing, where the microstructure can be precisely controlled during the fabrication process. By carefully designing the microstructure, engineers can optimize the mechanical properties of materials for specific applications. #### 12.2a Surface effects on plastic deformation The onset of plasticity is not only influenced by the bulk properties of a material but also by its surface characteristics. The surface of a material can significantly affect the initiation and propagation of plastic deformation due to the presence of surface defects, grain boundaries, and the influence of interfacial energy. ##### Surface Defects Surface defects such as vacancies, interstitials, and dislocations can act as stress concentrators, promoting the onset of plastic deformation. 
Interstitial defects, for instance, modify the physical and chemical properties of materials, thereby influencing their mechanical behavior. The presence of interstitial atoms can cause lattice distortion, leading to an increase in the yield strength of the material. This phenomenon, known as solid solution strengthening, can effectively delay the onset of plasticity. ##### Grain Boundaries at Surfaces Grain boundaries at the surface of a material can also significantly influence its plastic behavior. As discussed in the previous sections, the interfacial energy of grain boundaries affects the mechanisms of grain boundary sliding and dislocation transmission. High-angle grain boundaries, which have large misorientations between adjacent grains, tend to have higher interfacial energy and are more effective in impeding dislocation motion. This results in a higher resistance to plastic deformation. Conversely, low-angle grain boundaries with small misorientations and lower interfacial energy may allow for easier dislocation transmission, leading to an earlier onset of plasticity. Therefore, controlling the grain boundary orientation at the surface of a material can be an effective strategy for controlling the onset of plasticity. ##### Interfacial Energy at Surfaces The interfacial energy at the surface of a material can also influence its plastic behavior. Higher interfacial energy promotes greater resistance to plastic deformation, as the higher energy barriers inhibit the relative movement of dislocations. By controlling the interfacial energy, it is possible to engineer materials with desirable mechanical properties. For instance, introducing alloying elements into the material can alter the interfacial energy of grain boundaries. Alloying can result in segregation at grain boundaries, which can lower the interfacial energy and promote grain boundary sliding, thereby influencing the onset of plasticity. 
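The strengthening effect of grain boundaries discussed here is commonly quantified by the empirical Hall–Petch relation, $\sigma_y = \sigma_0 + k_y/\sqrt{d}$, which is not derived in this text but captures how a denser network of boundaries impedes dislocation motion and raises the yield strength. The sketch below evaluates it for a few grain sizes; both constants are hypothetical, chosen only to show the trend.

```python
import math

# Hall–Petch relation: sigma_y = sigma_0 + k_y / sqrt(d).
# Grain boundaries impede dislocation motion, so yield strength rises as
# grain size d shrinks. The constants below are hypothetical, not measured.

def hall_petch(sigma_0_mpa: float, k_y: float, d_um: float) -> float:
    """Yield strength in MPa; k_y in MPa·µm^0.5, grain diameter d in µm."""
    return sigma_0_mpa + k_y / math.sqrt(d_um)

sigma_0 = 25.0   # friction stress, MPa (hypothetical)
k_y = 350.0      # Hall–Petch coefficient, MPa·µm^0.5 (hypothetical)

for d in (100.0, 10.0, 1.0):  # coarse -> fine grains
    print(f"d = {d:6.1f} µm  ->  sigma_y = {hall_petch(sigma_0, k_y, d):7.1f} MPa")
```

Refining the grain size from 100 µm to 1 µm multiplies the boundary contribution by a factor of ten, which is the quantitative counterpart of the qualitative statement above that high-angle boundaries raise the resistance to plastic deformation.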
In conclusion, the surface characteristics of a material, including surface defects, grain boundaries, and interfacial energy, play a crucial role in controlling the onset of plasticity. Understanding these effects can provide valuable insights into the design of materials with tailored mechanical properties. #### 12.2b Surface treatments for controlling plasticity Surface treatments are effective methods for controlling the onset of plasticity in materials. These treatments can modify the surface characteristics of a material, such as its surface defects, grain boundaries, and interfacial energy, thereby influencing its mechanical behavior. ##### Surface Hardening Surface hardening is a common treatment used to increase the resistance of a material to plastic deformation. This process involves the introduction of residual compressive stresses on the surface of a material, which can inhibit the movement of dislocations and delay the onset of plasticity. Techniques such as shot peening, laser peening, and cold working can be used to induce surface hardening. ##### Grain Boundary Engineering at Surfaces As discussed in the previous sections, the grain boundary orientation at the surface of a material can significantly influence its plastic behavior. Therefore, controlling the grain boundary orientation through surface treatments can be an effective strategy for controlling the onset of plasticity. Techniques such as thermomechanical processing and severe plastic deformation can be used to manipulate the grain boundary structure and energy at the surface of a material. ##### Surface Alloying Surface alloying is another effective method for controlling the onset of plasticity. This process involves introducing alloying elements into the surface of a material, which can alter the interfacial energy of grain boundaries and influence the mechanisms of grain boundary sliding and dislocation transmission. 
Techniques such as ion implantation, laser alloying, and thermal spraying can be used for surface alloying. ##### Surface Coating Surface coating involves the application of a thin layer of material onto the surface of another material. This layer can modify the surface characteristics of the material, such as its surface defects, grain boundaries, and interfacial energy, thereby influencing its mechanical behavior. Techniques such as physical vapor deposition (PVD), chemical vapor deposition (CVD), and thermal spraying can be used for surface coating. In conclusion, surface treatments offer a promising approach for controlling the onset of plasticity in materials. By modifying the surface characteristics of a material, these treatments can influence its mechanical behavior and enhance its resistance to plastic deformation. However, the effectiveness of these treatments depends on several factors, including the type of material, the specific treatment technique, and the operating conditions. Therefore, a comprehensive understanding of the material's behavior and the treatment process is essential for achieving the desired results. ### Conclusion In this chapter, we have delved into the intricate world of material science, specifically focusing on the onset of plasticity in materials. We have explored the various factors that influence the onset of plasticity, including temperature, strain rate, and the microstructure of the material. We have also discussed the importance of controlling the onset of plasticity in order to optimize the mechanical behavior of materials. We have learned that the onset of plasticity is not a fixed property of a material, but rather a dynamic process that can be influenced by a variety of factors. By understanding these factors and how they interact, we can manipulate the onset of plasticity to achieve desired material properties. 
This knowledge is crucial in fields such as materials engineering, where the mechanical behavior of materials can have a significant impact on the performance and longevity of products. In conclusion, the onset of plasticity is a complex phenomenon that requires a deep understanding of material science. By controlling the onset of plasticity, we can optimize the mechanical behavior of materials, leading to improved performance and durability. The knowledge gained in this chapter provides a solid foundation for further exploration into the fascinating world of material science. ### Exercises #### Exercise 1 Discuss the role of temperature in controlling the onset of plasticity. How does temperature affect the mechanical behavior of materials? #### Exercise 2 Explain the relationship between strain rate and the onset of plasticity. How does strain rate influence the mechanical behavior of materials? #### Exercise 3 Describe the influence of microstructure on the onset of plasticity. How does the microstructure of a material affect its mechanical behavior? #### Exercise 4 Discuss the importance of controlling the onset of plasticity in materials engineering. How can controlling the onset of plasticity improve the performance and longevity of products? #### Exercise 5 Research and write a short essay on a real-world application where controlling the onset of plasticity is crucial. Discuss how the principles learned in this chapter are applied in this context. ## Chapter: Time-dependent Plasticity ### Introduction The study of materials and their mechanical behavior is a vast and complex field, with many different aspects to consider. One such aspect, which is the focus of this chapter, is time-dependent plasticity. This phenomenon refers to the way in which the plastic deformation of a material can change over time under a constant stress. 
It is a critical concept in materials science, as it can significantly impact the performance and longevity of materials in various applications. Time-dependent plasticity is a non-linear and time-dependent aspect of material behavior, which is influenced by factors such as temperature, strain rate, and the type of material. It is often associated with phenomena such as creep and stress relaxation, which can lead to failure in materials if not properly managed. Understanding time-dependent plasticity is therefore crucial for predicting the long-term behavior of materials and designing materials that can withstand specific conditions over time. In this chapter, we will delve into the fundamental principles of time-dependent plasticity, exploring its causes and effects, and how it can be modeled and predicted. We will also discuss the implications of time-dependent plasticity for the design and use of materials in various industries, from construction and manufacturing to aerospace and electronics. The aim of this chapter is not only to provide a comprehensive understanding of time-dependent plasticity but also to equip readers with the knowledge and tools to apply this understanding in practical contexts. Whether you are a student, a researcher, or a professional in the field of materials science, we hope that this chapter will enhance your understanding of the mechanical behavior of materials and their time-dependent plasticity. ### Section: 13.1 Time-dependent plasticity: Time-dependent plasticity is a critical aspect of the mechanical behavior of materials, particularly in the context of long-term performance and reliability. This section will delve into the mechanisms that govern time-dependent plasticity, with a particular focus on creep deformation mechanisms. #### 13.1a Creep deformation mechanisms Creep is a time-dependent deformation mechanism that occurs under constant stress and elevated temperatures. 
It is characterized by the slow and progressive deformation of a material over time. The primary mechanisms of creep include dislocation creep, diffusion creep, and grain boundary sliding, among others.

##### Dislocation Creep

Dislocation creep is a primary mechanism of creep deformation. It is a non-linear deformation mechanism that involves the movement of dislocations, or defects, within the crystal lattice of a material. This movement is triggered by the effects of increased differential stress and occurs at lower temperatures relative to diffusion creep.

In dislocation creep, dislocations glide and climb past obstruction sites within the crystal lattice, aided by the diffusion of vacancies. This process, known as slip, can occur in one or more directions. Each dislocation causes a part of the crystal to shift by one lattice point along the slip plane, relative to the rest of the crystal. The vector that characterizes the length and orientation of this displacement is called the Burgers vector.

However, dislocation glide alone cannot produce large strains due to the effects of strain-hardening. In strain-hardening, a dislocation 'tangle' can inhibit the movement of other dislocations, which then pile up behind the blocked ones, causing the crystal to become difficult to deform. To overcome this, some form of recovery process, such as dislocation climb or grain-boundary migration, must also be active.

Diffusion and dislocation creep can occur simultaneously; the effective viscosity of a stressed material under given conditions of temperature, pressure, and strain rate will then be determined by the mechanism that delivers the smallest viscosity. Understanding the mechanisms of creep, such as dislocation creep, is crucial for predicting the long-term behavior of materials and designing materials that can withstand specific conditions over time.
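Steady-state creep of the kind discussed above is commonly modeled by a power law of the form $\dot{\epsilon} = A\,\sigma^n \exp(-Q/RT)$, with a large stress exponent $n$ for dislocation creep and $n \approx 1$ for diffusion creep. The sketch below, using entirely hypothetical constants, picks the dominant mechanism at each stress as the one delivering the fastest strain rate, i.e. the smallest effective viscosity, echoing the rule stated in the text.

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

# Steady-state creep rate modeled as: A * sigma**n * exp(-Q / (R*T)),
# with n ~ 3–8 for dislocation creep and n ~ 1 for diffusion creep.
# All constants below are hypothetical, for illustration only.

def creep_rate(A: float, n: float, Q: float, sigma_mpa: float, T_k: float) -> float:
    """Steady-state creep rate in 1/s for stress in MPa and temperature in K."""
    return A * sigma_mpa**n * math.exp(-Q / (R * T_k))

# Two competing mechanisms; the faster one (lowest effective viscosity) dominates.
dislocation = dict(A=1e-4, n=5.0, Q=3.0e5)   # hypothetical constants
diffusion   = dict(A=1e-2, n=1.0, Q=2.0e5)   # hypothetical constants

for sigma in (1.0, 10.0, 100.0):             # applied stress, MPa
    rates = {name: creep_rate(p["A"], p["n"], p["Q"], sigma, 1200.0)
             for name, p in (("dislocation", dislocation), ("diffusion", diffusion))}
    dominant = max(rates, key=rates.get)
    print(f"sigma = {sigma:6.1f} MPa  dominant mechanism: {dominant}")
```

With these constants, diffusion creep dominates at low stress while the strong $\sigma^n$ dependence makes dislocation creep take over at high stress, reproducing the transition between the two mechanisms described above.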
In the following sections, we will delve deeper into other creep deformation mechanisms and their implications for the mechanical behavior of materials.

#### 13.1b Creep resistance and design considerations

Creep resistance is a crucial property of materials that are subjected to high temperatures and constant stress over extended periods. The ability of a material to resist creep deformation is dependent on its microstructure and the operating conditions.

##### Creep Resistance

In contrast to low-temperature strengthening, creep resistance is generally enhanced by a coarse grain structure: grain boundaries act as fast diffusion paths and as sites for grain boundary sliding, so reducing the total grain boundary area slows diffusion creep and grain boundary sliding. For this reason, single-crystal components such as turbine blades are used in the most demanding high-temperature applications. The presence of secondary phases or precipitates can also enhance creep resistance by pinning dislocations and hindering their movement.

In addition, the creep resistance of a material can be improved by alloying. Alloying elements can either form a solid solution with the base metal, which can strengthen the material by distorting the crystal lattice and impeding dislocation motion, or they can form precipitates that hinder dislocation glide and climb.

##### Design Considerations

When designing components that will be subjected to high temperatures and constant stress, it is important to consider the creep behavior of the material. The operating conditions, such as temperature, stress, and time, should be within the safe limits of the material's creep resistance.

The design should also take into account the potential for creep rupture, which is the failure of a material due to creep. This can occur when the material is subjected to a constant load at high temperature for an extended period. The time to rupture decreases with increasing stress and temperature.

In addition, the design should consider the effects of creep on the dimensional stability of the component. Creep can lead to significant changes in the dimensions of a component over time, which can affect its performance and functionality.
Finally, the design should also consider the potential for creep-fatigue interactions. These occur when a material is subjected to cyclic loading at high temperatures, which can lead to a combination of creep and fatigue damage. This can significantly reduce the life of the component.

In conclusion, understanding the creep behavior of materials and their creep resistance is crucial in the design of components for high-temperature applications. This understanding can help in selecting the appropriate materials and in designing components that can withstand the operating conditions without failure.

#### 13.2a Creep-resistant materials and structures

Creep-resistant materials are those that can withstand high temperatures and constant stress over extended periods without significant deformation. The selection of these materials is crucial in designing structures that are expected to operate under such conditions.

##### Material Selection

The selection of creep-resistant materials should be based on their microstructure and the operating conditions. Materials with a coarse grain structure (which minimizes grain-boundary diffusion and sliding), a low dislocation content, and secondary phases or precipitates are generally more resistant to creep.

Ceramic materials, for instance, are popular for their creep resistance due to their strong ionic and covalent bonding, which results in low dislocation mobility. Dislocation glide and climb, which promote creep, are therefore significantly reduced in these materials.

Alloying is another effective method to improve the creep resistance of a material. Alloying elements can form a solid solution with the base metal, strengthening the material by distorting the crystal lattice and impeding dislocation motion. Alternatively, they can form precipitates that hinder dislocation glide and climb.

##### Structural Design

The design of structures to resist creep involves considering the operating conditions and the potential for creep rupture.
The operating conditions, such as temperature, stress, and time, should be within the safe limits of the material's creep resistance. The potential for creep rupture, which is the failure of a material due to creep, should also be taken into account. This can occur when the material is subjected to a constant load at high temperature for an extended period. The time to rupture decreases with increasing stress and temperature. The effects of creep on the dimensional stability of the component should also be considered. Creep can lead to significant changes in the dimensions of a component over time, which can affect its performance and functionality. Finally, the design should also consider the potential for creep-fatigue interactions. These occur when a material is subjected to cyclic loading at high temperatures and can lead to premature failure. In conclusion, the design against creep involves a careful selection of materials and thoughtful consideration of the operating conditions and potential failure modes. By understanding the mechanisms of creep and how to mitigate them, we can design materials and structures that can withstand high temperatures and constant stress over extended periods. #### 13.2b Creep testing and analysis Creep testing is a crucial step in understanding the time-dependent plasticity of materials. It involves subjecting a material to a constant stress at a specific temperature over an extended period and observing the deformation or strain over time. The data obtained from these tests can be used to predict the long-term behavior of materials under similar conditions. ##### Creep Testing Creep tests are typically performed using a creep testing machine, which applies a constant load to a specimen at a specific temperature. The strain or deformation of the specimen is then measured over time. The data obtained from these tests can be plotted on a creep curve, which shows the strain as a function of time. 
The creep curve typically consists of three stages: primary creep, secondary creep, and tertiary creep. Primary creep is characterized by a decreasing creep rate, secondary creep by a constant creep rate, and tertiary creep by an increasing creep rate leading to failure.

##### Creep Analysis

The analysis of creep data involves determining the creep rate and the time to rupture. The creep rate, which is the rate of strain as a function of time, can be determined from the slope of the secondary creep stage. The time to rupture, on the other hand, can be determined from the time at which the specimen fails during the tertiary creep stage.

The total strain under creep conditions can be denoted as $\epsilon_t$, where:

$$
\epsilon_t = \epsilon_g + \epsilon_{gbs} + \epsilon_{dc}
$$

Here, $\epsilon_g$ is the strain associated with intragranular dislocation processes, $\epsilon_{gbs}$ is the strain due to Rachinger GBS associated with intragranular sliding, and $\epsilon_{dc}$ is the strain due to Lifshitz GBS associated with diffusion creep. In practice, experiments are usually performed under conditions where diffusion creep is negligible, reducing the equation to:

$$
\epsilon_t = \epsilon_g + \epsilon_{gbs}
$$

The contribution of GBS to the total strain can then be denoted as:

$$
\eta = \epsilon_{gbs} / \epsilon_t
$$

The sliding contribution can be estimated by individual measurements of $\epsilon_{gbs}$ through displacement vectors. A common and easier approach in practice is to use interferometry to measure fringes along the $v$ displacement axis. The sliding strain is then given by:

$$
\epsilon_{gbs} = k'' n_r \bar{v}_r
$$

Where $k''$ is a constant, $n_r$ is the number of measurements, and $\bar{v}_r$ is the average of the $n_r$ measurements.

By understanding the creep behavior of materials through testing and analysis, we can design materials and structures that can withstand high temperatures and constant stress over extended periods.
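The strain-partitioning analysis can be sketched numerically, using the relation $\epsilon_{gbs} = k'' n_r \bar{v}_r$ and the ratio $\eta = \epsilon_{gbs}/\epsilon_t$ as given; the constant $k''$, the displacement measurements, and the total strain below are all hypothetical values for illustration.

```python
# Estimating the grain boundary sliding (GBS) contribution to total strain
# from displacement measurements: eps_gbs = k'' * n_r * v_bar_r and
# eta = eps_gbs / eps_t. All numbers below are hypothetical.

k2 = 1.0e-3          # geometric constant k'' (hypothetical)
v_measurements = [0.8, 1.1, 0.9, 1.2, 1.0]  # fringe displacements along v, µm (hypothetical)

n_r = len(v_measurements)                    # number of measurements
v_bar = sum(v_measurements) / n_r            # average displacement v_bar_r
eps_gbs = k2 * n_r * v_bar                   # sliding strain
eps_t = 0.02                                 # measured total strain (hypothetical)

eta = eps_gbs / eps_t                        # GBS contribution to total strain
print(f"eps_gbs = {eps_gbs:.4f}, eta = {eta:.2f}")
```

Here a quarter of the total strain would be attributed to grain boundary sliding, with the remainder assigned to intragranular dislocation processes via $\epsilon_g = \epsilon_t - \epsilon_{gbs}$.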
This is crucial in many industries, including power generation, aerospace, and automotive, where materials are often subjected to such conditions.

### Conclusion

In this chapter, we have delved into the fascinating world of time-dependent plasticity, a key aspect of the mechanical behavior of materials. We have explored how materials deform plastically over time under the influence of stress, and how this deformation is not instantaneous but occurs gradually. This phenomenon, known as creep, is of critical importance in many engineering applications, particularly those involving high temperatures and long durations of stress.

We have also discussed the concept of stress relaxation, where the stress in a material decreases over time while the strain remains constant. This is particularly relevant in materials that are subjected to constant deformation, such as in the construction of bridges and buildings.

The mathematical models and equations we have studied provide a theoretical framework for understanding and predicting time-dependent plasticity. These models, while complex, are essential tools for engineers and materials scientists in designing and selecting materials for specific applications.

In conclusion, time-dependent plasticity is a complex but crucial aspect of the mechanical behavior of materials. Understanding this phenomenon is key to predicting the long-term performance and reliability of materials in various applications.

### Exercises

#### Exercise 1
Derive the mathematical model for creep deformation in materials. Discuss the assumptions made in the derivation and their implications.

#### Exercise 2
Explain the concept of stress relaxation with the help of a real-world example. How does this phenomenon affect the design and selection of materials in engineering applications?

#### Exercise 3
Consider a material subjected to a constant stress over a long period of time.
Using the equations discussed in this chapter, predict the creep deformation of the material.

#### Exercise 4
Discuss the limitations of the mathematical models for time-dependent plasticity. How can these models be improved to more accurately predict the behavior of materials?

#### Exercise 5
Research and write a short report on a case study where time-dependent plasticity played a critical role in the failure of a structure or component. What lessons can be learned from this case study?

## Chapter: Continuum Fracture

### Introduction

The study of materials is not complete without understanding their behavior under stress, and more specifically, how they fracture. This chapter, "Continuum Fracture", delves into the intricate world of material fracture, exploring the fundamental principles that govern this phenomenon.

Fracture mechanics is a critical field of study in materials science, with far-reaching implications in various industries, from construction and manufacturing to aerospace and biomedical engineering. The ability to predict and control the fracture behavior of materials is crucial in ensuring the safety, reliability, and longevity of structures and devices made from these materials.

In this chapter, we will explore the concept of continuum fracture, a theoretical framework that describes the propagation of cracks in materials. This theory is based on the principles of continuum mechanics, which treats materials as continuous, homogeneous entities, rather than discrete atomic structures. This simplification allows us to model and predict the behavior of materials under various loading conditions, providing valuable insights into their fracture behavior.

We will delve into the mathematical models that describe continuum fracture, including the stress intensity factor and the fracture toughness. These models provide a quantitative measure of a material's resistance to fracture, and are essential tools in the design and analysis of structures.
The chapter will also discuss the limitations of the continuum fracture theory, and how it is complemented by other theories such as linear elastic fracture mechanics (LEFM) and elastic-plastic fracture mechanics (EPFM). These theories provide a more comprehensive understanding of fracture behavior, taking into account the material's elastic and plastic responses to stress.

In the world of materials science, understanding fracture mechanics is not just about predicting when and how materials will break. It's about harnessing this knowledge to design better, safer, and more durable materials and structures. As we delve into the world of continuum fracture, we invite you to join us on this fascinating journey of discovery and innovation.

### Section: 14.1 Continuum fracture:

Continuum fracture is a theoretical framework that describes the propagation of cracks in materials. This theory is based on the principles of continuum mechanics, which treats materials as continuous, homogeneous entities, rather than discrete atomic structures. This simplification allows us to model and predict the behavior of materials under various loading conditions, providing valuable insights into their fracture behavior.

#### 14.1a Types of fracture in materials

Fracture in materials can be broadly classified into two types: brittle fracture and ductile fracture.

Brittle fracture occurs without any significant deformation and is characterized by rapid crack propagation. This type of fracture is common in materials with a high degree of brittleness, such as glass and some ceramics. Brittle fracture is often catastrophic, as it occurs suddenly and without any prior indication. The stress intensity factor, $K$, is a key parameter in predicting the onset of brittle fracture. It is defined as:

$$
K = Y \sigma \sqrt{\pi a}
$$

where $Y$ is a dimensionless constant, $\sigma$ is the applied stress, and $a$ is the crack length.
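The stress intensity factor is straightforward to evaluate numerically. A minimal sketch, using an assumed geometry factor $Y = 1.12$ (a value commonly quoted for a surface edge crack — an assumption here, since $Y$ depends on the actual geometry) and illustrative load and crack-size numbers:

```python
import math

def stress_intensity_factor(sigma_mpa, a_m, Y=1.12):
    """Mode-I stress intensity factor K = Y * sigma * sqrt(pi * a).
    sigma in MPa, crack length a in metres -> K in MPa*m^0.5."""
    return Y * sigma_mpa * math.sqrt(math.pi * a_m)

# 100 MPa applied stress, 2 mm edge crack (illustrative values)
K = stress_intensity_factor(100.0, 0.002)
print(f"K = {K:.1f} MPa*m^0.5")
```

Comparing the computed $K$ against a material's fracture toughness $K_{IC}$ is the basic check for whether a known flaw is tolerable under a given load.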
Ductile fracture, on the other hand, involves significant plastic deformation before failure. This type of fracture is common in metals and alloys, such as austenitic stainless steel and carbon steel. Ductile fracture is characterized by the formation of a "neck" or localized reduction in cross-sectional area, followed by the formation and coalescence of microscopic voids within the material. The fracture toughness, $K_{IC}$, is a measure of a material's resistance to fracture by crack propagation. It is the critical value of the stress intensity factor:

$$
K_{IC} = Y \sigma_c \sqrt{\pi a}
$$

where $Y$ is a dimensionless geometry constant, $\sigma_c$ is the stress at fracture, and $a$ is the crack length.

The type of fracture that a material undergoes depends on several factors, including its microstructure, temperature, strain rate, and the presence of stress concentrators such as notches. For example, sharp-tipped V-shaped notches are often used in standard fracture toughness testing for ductile materials, while U-notches and keyhole notches are used for brittle materials.

Understanding the type of fracture and the conditions under which it occurs is crucial in the design and analysis of materials and structures. In the following sections, we will delve deeper into the mechanisms of brittle and ductile fracture, and discuss the mathematical models that describe these phenomena.

#### 14.1b Fracture toughness and critical stress intensity factor

Fracture toughness, denoted as $K_{IC}$, is a critical property of materials that quantifies their resistance to fracture in the presence of a flaw or crack. It is a measure of the energy required to propagate a crack in a material and is typically measured using standardized test methods such as the Charpy impact test or three-point beam bending tests.

The critical stress intensity factor, also known as the fracture toughness, is a material property that describes the ability of a material to resist fracture in the presence of a crack.
It is denoted as $K_{IC}$ and is defined as:

$$
K_{IC} = Y \sigma \sqrt{\pi a}
$$

where $Y$ is a dimensionless constant, $\sigma$ is the applied stress, and $a$ is the crack length. The subscript $IC$ denotes that this is the critical value of the stress intensity factor, beyond which rapid fracture occurs.

The value of $K_{IC}$ is determined experimentally and is typically reported in units of MPa m$^{0.5}$. Higher values of $K_{IC}$ indicate greater resistance to fracture. It is important to note that $K_{IC}$ is a material property and is therefore independent of the size and shape of the specimen or the loading conditions.

The ASTM standard E1820 recommends three types of specimens for fracture toughness testing: the single-edge bending coupon [SE(B)], the compact tension coupon [C(T)], and the disk-shaped compact tension coupon [DC(T)]. The choice of specimen depends on the specific requirements of the test and the material being tested.

The orientation of the material is also an important factor in fracture toughness testing due to the inherent non-isotropic nature of most engineering materials. The fracture toughness can vary significantly depending on the direction of crack propagation relative to the grain structure of the material.

In summary, the fracture toughness and the critical stress intensity factor are key parameters in the study of the mechanical behavior of materials. They provide valuable insights into the resistance of materials to fracture, which is crucial in the design and analysis of structures and components in various engineering applications.

#### 14.2a Brittle fracture in single crystals

Brittle fracture is a common mode of failure in single crystals, particularly in materials with a high degree of atomic order and low ductility. This type of fracture occurs without any significant plastic deformation and is characterized by rapid crack propagation.
The fracture surfaces of brittle materials often exhibit a granular or faceted appearance, which is a reflection of the crystallographic planes along which the fracture occurred.

The mechanical behavior of single crystals under stress can be understood in terms of Schmid's law, which states that the resolved shear stress on a slip plane in a given direction is a critical parameter in determining the onset of plastic deformation. However, in the case of brittle fracture, the critical parameter is the resolved normal stress on the cleavage plane, which is the plane of easiest fracture.

The resolved normal stress, $\sigma_n$, on a plane with unit normal vector $\vec{n}$ under a stress state $\boldsymbol{\sigma}$ is given by:

$$
\sigma_n = \vec{n} \cdot \boldsymbol{\sigma} \cdot \vec{n}
$$

When $\sigma_n$ exceeds a critical value, $\sigma_{n,c}$, cleavage fracture occurs. This critical stress is a material property and is typically determined experimentally. It is important to note that $\sigma_{n,c}$ is a function of temperature and strain rate, reflecting the thermally activated nature of cleavage fracture.

The orientation of the crystal lattice with respect to the loading direction can significantly affect the propensity for brittle fracture. This is due to the anisotropic nature of the crystal structure, which results in different planes having different cleavage strengths. Therefore, the orientation of the crystal can be a critical factor in determining the fracture toughness of a single crystal material.

In summary, understanding the mechanical behavior of single crystals, particularly the mode of brittle fracture, requires a consideration of the crystallographic orientation, the applied stress state, and the material's inherent resistance to fracture, as characterized by the critical normal stress on the cleavage plane. This understanding can inform the design of materials and structures to prevent catastrophic brittle fracture.
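The resolved normal stress on a candidate cleavage plane follows directly from the stress tensor. A minimal sketch for uniaxial tension, with illustrative numbers (100 MPa along the x axis, cleavage plane normal along (1,1,0)):

```python
import numpy as np

def resolved_normal_stress(stress_tensor, plane_normal):
    """Resolved normal stress sigma_n = n . sigma . n on a plane
    with (not necessarily normalized) normal vector n."""
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)  # ensure unit normal
    return n @ np.asarray(stress_tensor, dtype=float) @ n

# Uniaxial tension of 100 MPa along x; plane normal at 45 degrees to the axis
sigma = np.diag([100.0, 0.0, 0.0])
s_n = resolved_normal_stress(sigma, [1, 1, 0])
print(f"resolved normal stress = {s_n:.1f} MPa")  # 100 * cos^2(45 deg) = 50
```

Sweeping the plane normal over the crystallographically allowed cleavage planes and comparing each $\sigma_n$ against $\sigma_{n,c}$ identifies which plane fractures first for a given loading direction.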
#### 14.2b Ductile fracture in polycrystalline materials

Ductile fracture, unlike brittle fracture, is characterized by significant plastic deformation prior to failure. This type of fracture is common in polycrystalline materials, which are composed of numerous small crystals or grains. The mechanical behavior of these materials under stress can be complex due to the interactions between the grains and the presence of grain boundaries, which can serve as sites for the initiation of cracks.

The process of ductile fracture in polycrystalline materials can be broadly divided into three stages: void initiation, void growth, and void coalescence.

Void initiation occurs when the applied stress causes the nucleation of microscopic voids at inclusions, grain boundaries, or other stress concentrators within the material. The applied stress also causes dislocation movement, which can lead to the formation of dislocation pile-ups at grain boundaries. These pile-ups can result in localized stress concentrations, which can further promote void initiation.

In the second stage, the applied stress causes the voids to grow. This growth is primarily driven by plastic deformation in the material surrounding the voids. The rate of void growth is influenced by several factors, including the applied stress, the strain rate, and the temperature.

Finally, when the voids have grown sufficiently, they begin to interact with each other and eventually coalesce to form a crack. This crack then propagates through the material, leading to fracture.

The growth stage can be described mathematically using the Rice and Tracey void growth model, which relates the growth rate of a spherical void of radius $R$ to the stress triaxiality:

$$
\frac{dR}{R} = 0.283 \exp\!\left(\frac{3\sigma_m}{2\bar{\sigma}}\right) d\bar{\epsilon}
$$

where $\sigma_m$ is the mean (hydrostatic) stress, $\bar{\sigma}$ is the equivalent stress, and $\bar{\epsilon}$ is the equivalent plastic strain.
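For a constant stress triaxiality $T = \sigma_m/\bar{\sigma}$, the Rice–Tracey relation $dR/R = 0.283\,e^{1.5T}\,d\bar{\epsilon}$ integrates in closed form to exponential void growth in strain. A minimal sketch with illustrative (assumed) values:

```python
import math

def rice_tracey_radius(R0, triaxiality, eps_bar):
    """Void radius after equivalent plastic strain eps_bar, integrating
    dR/R = 0.283 * exp(1.5*T) * d(eps) at constant triaxiality T."""
    return R0 * math.exp(0.283 * math.exp(1.5 * triaxiality) * eps_bar)

# Void of 1 um initial radius, triaxiality T = 1.0, equivalent strain 0.5
R = rice_tracey_radius(R0=1.0, triaxiality=1.0, eps_bar=0.5)
print(f"void radius grows to {R:.2f} um")
```

The exponential dependence on triaxiality is the key physical message: voids grow far faster ahead of a notch or crack tip, where the hydrostatic stress is elevated, than in uniaxial tension.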
This model provides a quantitative description of the void growth process and can be used to predict the onset of ductile fracture in polycrystalline materials.

In summary, understanding the mechanical behavior of polycrystalline materials, particularly the mode of ductile fracture, requires a consideration of the microstructural features of the material, the applied stress state, and the material's response to this stress in the form of plastic deformation and void growth.

### Section: 14.3 Fracture in amorphous materials:

Amorphous materials, such as glasses, exhibit unique fracture behavior due to their lack of long-range order and crystalline structure. The fracture mechanisms in these materials are primarily governed by their atomic structure and the presence of residual stresses.

#### 14.3a Fracture mechanisms in glasses

Glasses, as a prime example of amorphous materials, are characterized by their brittle nature. The fracture in glasses is typically initiated by the formation of a crack, which then propagates rapidly through the material under the influence of an applied stress. This process is similar to the brittle fracture mechanism observed in crystalline materials, but with some key differences due to the amorphous structure of glasses.

In glasses, the atomic structure lacks the regular, repeating pattern found in crystalline materials. This means that there are no specific planes of weakness (such as grain boundaries in polycrystalline materials) where a crack can easily initiate or propagate. Instead, the fracture process in glasses is more random and can occur anywhere within the material where a sufficient stress concentration exists.

The presence of residual stresses can significantly influence the fracture behavior of glasses. In particular, compressive residual stresses can help to prevent brittle fracture by counteracting the tensile stresses at the crack tips.
This is the principle behind the toughening of glass, where compressive stresses are intentionally induced on the surface of the glass to increase its resistance to crack initiation and propagation. However, if the compressive residual stress is overcome by an external tensile stress, the crack can propagate rapidly, leading to fracture. This is why toughened glass, despite its increased resistance to cracking, can still shatter into small shards when the outer surface is broken.

The fracture behavior of glasses can be described mathematically using Griffith's criterion for brittle fracture:

$$
\sigma = \sqrt{\frac{2E\gamma}{\pi a}}
$$

where $\sigma$ is the applied stress, $E$ is the Young's modulus, $\gamma$ is the surface energy, and $a$ is the half-length of the crack. This equation provides a quantitative description of the fracture process in glasses and can be used to predict the onset of fracture under a given set of conditions.

#### 14.3b Fracture behavior of polymers

Polymers, another class of amorphous materials, exhibit fracture behavior that is distinct from both metals and glasses. The fracture in polymers is influenced by their unique molecular structure, which consists of long chains of repeating units. These chains are held together by covalent bonds, with secondary van der Waals bonds providing additional stability. The fracture process in polymers involves the breaking of these bonds, which requires a significant amount of energy.

The fracture behavior of polymers can be described using the principles of fracture mechanics. However, due to their ductile nature and time-dependent, nonlinear behavior, the standard linear elastic fracture mechanics, which is often used for metals, is not always applicable. Instead, elastic-plastic fracture mechanics is often used to characterize the fracture behavior of polymers.
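Griffith's criterion $\sigma = \sqrt{2E\gamma/(\pi a)}$ is easy to evaluate with representative numbers. The values below (Young's modulus and surface energy in the range typically quoted for a soda-lime glass, and a 1 µm surface flaw) are illustrative assumptions, not data for a specific material:

```python
import math

def griffith_stress(E_pa, gamma_j_m2, a_m):
    """Griffith fracture stress sigma = sqrt(2*E*gamma / (pi*a)).
    E in Pa, surface energy gamma in J/m^2, half crack length a in m -> Pa."""
    return math.sqrt(2.0 * E_pa * gamma_j_m2 / (math.pi * a_m))

# Illustrative glass-like values: E = 70 GPa, gamma = 0.3 J/m^2, a = 1 um
sigma_f = griffith_stress(70e9, 0.3, 1e-6)
print(f"fracture stress ~ {sigma_f / 1e6:.0f} MPa")
```

The strong sensitivity to flaw size ($\sigma \propto a^{-1/2}$) is why pristine glass fibers can be enormously strong while ordinary glass, with micrometre-scale surface flaws, fails at modest stresses.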
Under elastic-plastic fracture mechanics, the initiation site for fracture in polymers can often occur at inorganic dust particles where the stress exceeds a critical value. This is due to the fact that these particles can act as stress concentrators, leading to the initiation of a crack.

Griffith's law, which is used to predict the amount of energy needed to create a new surface, can also be applied to polymers. According to this law, the fracture stress required as a function of crack length is given by:

$$
\sigma = \sqrt{\frac{2 \gamma E}{\pi a}}
$$

where $\sigma$ is the fracture stress, $\gamma$ is the surface free energy per area, $E$ is the Young's modulus of the material, and $a$ is the crack length. However, it should be noted that the application of Griffith's law to polymers is not straightforward due to their complex molecular structure and the presence of secondary bonds. Therefore, additional factors such as the rate of loading, temperature, and the presence of residual stresses need to be taken into account when predicting the fracture behavior of polymers.

In conclusion, the fracture behavior of polymers is a complex process that is influenced by a variety of factors. A thorough understanding of these factors is crucial for the design and application of polymeric materials in various industries.

### Conclusion

In this chapter, we have delved into the complex world of continuum fracture, exploring the mechanical behavior of materials under various conditions. We have examined the fundamental principles that govern the fracture of materials, and how these principles can be applied to predict and prevent failure in real-world applications.

We have seen that the continuum fracture theory is a powerful tool for understanding the behavior of materials under stress. It provides a framework for predicting the onset and progression of fractures, and for designing materials and structures that are resistant to fracture.
However, we have also seen that the theory is not without its limitations. It is based on certain assumptions and simplifications that may not always hold true in practice. Therefore, it is important to use the theory judiciously, and to supplement it with experimental data and other analytical tools whenever possible.

In the end, the study of continuum fracture is not just about understanding the failure of materials. It is also about harnessing this understanding to create materials and structures that are stronger, safer, and more durable. It is about pushing the boundaries of what is possible in materials science and engineering, and contributing to the advancement of technology and society.

### Exercises

#### Exercise 1
Derive the Griffith criterion for brittle fracture from first principles. Assume that the material is homogeneous and isotropic, and that the stress at the crack tip is much greater than the average stress in the material.

#### Exercise 2
Consider a material with a known fracture toughness. Using the principles of continuum fracture theory, predict the critical crack size beyond which the material will fail under a given applied stress.

#### Exercise 3
Discuss the limitations of the continuum fracture theory. How do these limitations affect the theory's applicability to real-world materials and structures?

#### Exercise 4
Design a simple experiment to measure the fracture toughness of a material. Describe the materials and methods you would use, and how you would analyze the results.

#### Exercise 5
Consider a material that exhibits both brittle and ductile behavior. How would the principles of continuum fracture theory apply to this material? Discuss the challenges and potential solutions for predicting and preventing fracture in such a material.

## Chapter: 15 - Fatigue

### Introduction

The study of materials would be incomplete without an understanding of the phenomenon of fatigue.
Fatigue, in the context of materials science, refers to the weakening of a material caused by repeatedly applied loads. It is a complex, multifaceted subject that is crucial to the design and longevity of numerous mechanical systems. This chapter, Chapter 15: Fatigue, aims to provide a comprehensive overview of this critical aspect of material behavior.

Fatigue is a primary cause of failure in many engineering applications, from aircraft structures to biomedical implants. It is often a silent killer, causing catastrophic failures with little to no warning. Understanding the mechanisms of fatigue, its initiation, and propagation can help in designing more durable materials and structures.

In this chapter, we will delve into the fundamental concepts of fatigue, starting with the definition and types of fatigue. We will explore the various stages of fatigue, including initiation, propagation, and final fracture. The chapter will also cover the factors influencing fatigue, such as stress concentration, material properties, and environmental conditions.

We will also discuss the methods used to study fatigue, including experimental techniques and theoretical models. The chapter will introduce the concept of the S-N curve, a fundamental tool in fatigue analysis, which represents the relationship between stress amplitude and the number of cycles to failure.

Finally, we will explore some of the strategies used to mitigate fatigue, including material selection, surface treatments, and design modifications. The chapter will conclude with a discussion on the future directions in fatigue research, highlighting the ongoing efforts to develop more fatigue-resistant materials and predictive models.

By the end of this chapter, you should have a solid understanding of the mechanical behavior of materials under cyclic loading and the critical role of fatigue in material failure.
This knowledge will be invaluable in your future studies and professional endeavors in the field of materials science and engineering.

### Section: 15.1 Fatigue

#### Subsection: 15.1a Fatigue failure mechanisms

Fatigue failure is a complex process that involves several stages and mechanisms. It is typically characterized by three main stages: initiation, propagation, and final fracture. Each of these stages involves different mechanisms and is influenced by various factors such as the material properties, loading conditions, and environmental factors.

##### Initiation

The initiation stage of fatigue failure is where a microscopic crack or defect forms in the material. This usually occurs at locations of stress concentration, such as surface irregularities, inclusions, or grain boundaries. The initiation stage is influenced by the cyclic stress amplitude, the number of cycles, and the material's microstructure.

The initiation of fatigue cracks can be described by the Coffin-Manson relation, which relates the plastic strain amplitude to the number of cycles to failure:

$$
\Delta \epsilon_p / 2 = \epsilon_f' (2N_f)^c
$$

where $\Delta \epsilon_p$ is the plastic strain range, $\epsilon_f'$ (the fatigue ductility coefficient) and $c$ (the fatigue ductility exponent) are material constants, and $N_f$ is the number of cycles to failure.

##### Propagation

Once a crack has initiated, it begins to propagate through the material under the influence of the cyclic loading. The rate of crack propagation is a function of the stress intensity factor range, $\Delta K$, which is a measure of the stress state near the crack tip. The Paris law describes the crack growth rate:

$$
\frac{da}{dN} = C (\Delta K)^m
$$

where $a$ is the crack length, $N$ is the number of cycles, and $C$ and $m$ are material constants.

##### Final Fracture

The final stage of fatigue failure is when the crack has propagated to a critical size, and the remaining cross-section of the material can no longer support the applied load.
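Integrating the Paris law between an initial flaw size and the critical crack size gives an estimate of the propagation life. The sketch below assumes $\Delta K = Y\,\Delta\sigma\sqrt{\pi a}$ with constant stress range and geometry factor, and uses illustrative constants ($C$ and $m$ here are not values for any particular alloy):

```python
import math

def paris_life(C, m, delta_sigma, Y, a0, ac):
    """Cycles to grow a crack from a0 to ac under da/dN = C*(dK)^m,
    with dK = Y * delta_sigma * sqrt(pi*a). Closed form, valid for m != 2.
    Units must be consistent (e.g. MPa and metres with C matched to them)."""
    factor = C * (Y * delta_sigma * math.sqrt(math.pi)) ** m
    exponent = 1.0 - m / 2.0
    return (ac ** exponent - a0 ** exponent) / (factor * exponent)

# Illustrative: C = 1e-12, m = 3, 100 MPa stress range, 1 mm -> 10 mm crack
N = paris_life(C=1e-12, m=3.0, delta_sigma=100.0, Y=1.0, a0=1e-3, ac=1e-2)
print(f"propagation life ~ {N:.2e} cycles")
```

Because $da/dN$ grows rapidly with $a$, most of the life is spent while the crack is still small — doubling the final crack size $a_c$ changes the result far less than halving the initial flaw size $a_0$.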
The crack reaching this critical size results in a sudden and often catastrophic failure of the component.

The understanding of these mechanisms is crucial for the development of fatigue-resistant materials and for the prediction of fatigue life in engineering components. In the following sections, we will delve deeper into these mechanisms and discuss how they are influenced by various factors.

#### Subsection: 15.1b Fatigue life prediction and testing

Predicting the fatigue life of a material is a critical aspect of design engineering, particularly in ensuring the reliability and quality of a product. This prediction is typically based on the understanding of the thermal and mechanical failure mechanisms, as most fatigue failures can be attributed to thermo-mechanical stresses caused by differences in the coefficients of thermal and mechanical expansion.

The fatigue life of a material is essentially the number of stress cycles that a material can withstand before failure. This is often determined through fatigue testing, where a sample is subjected to cyclic loading until it fails. The data obtained from these tests can then be used to predict the fatigue life of similar materials under similar conditions.

##### Fatigue Life Prediction

The prediction of fatigue life is often based on the S-N curve (stress amplitude vs. number of cycles), which is obtained from fatigue tests. The S-N curve typically has a high cycle region where the stress amplitude is low and the material can withstand a large number of cycles, and a low cycle region where the stress amplitude is high and the material fails after a relatively small number of cycles.

Basquin's law, also known as the power law, is often used to describe the S-N curve in the high cycle region:

$$
\sigma_a = \sigma_f' (2N_f)^b
$$

where $\sigma_a$ is the stress amplitude, $\sigma_f'$ (the fatigue strength coefficient) and $b$ (the fatigue strength exponent) are material constants, and $N_f$ is the number of cycles to failure.
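Basquin's law $\sigma_a = \sigma_f' (2N_f)^b$ can be inverted to estimate the number of cycles to failure at a given stress amplitude. The constants below are illustrative assumptions (roughly steel-like in magnitude), not fitted values for a specific material:

```python
def basquin_cycles(sigma_a, sigma_f_prime, b):
    """Invert Basquin's law sigma_a = sigma_f' * (2*N_f)**b for N_f.
    b is negative, so lower stress amplitude gives longer life."""
    return 0.5 * (sigma_a / sigma_f_prime) ** (1.0 / b)

# Illustrative constants: sigma_f' = 1000 MPa, b = -0.1; amplitude 300 MPa
N_f = basquin_cycles(sigma_a=300.0, sigma_f_prime=1000.0, b=-0.1)
print(f"N_f ~ {N_f:.2e} cycles")
```

Note the steep trade-off implied by the small negative exponent: with $b = -0.1$, reducing the stress amplitude by a few percent extends the predicted life by a large factor, which is why S-N data are plotted on logarithmic axes.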
In the low cycle region, the Coffin-Manson relation, which was discussed in the previous section, is often used.

##### Fatigue Testing

Fatigue testing involves subjecting a material sample to cyclic loading until it fails. The loading can be in the form of tension-compression, bending, or torsion. The stress amplitude, mean stress, and frequency of the loading can be varied to simulate different service conditions.

The data obtained from fatigue tests, such as the number of cycles to failure and the crack growth rate, are used to construct the S-N curve and to determine the material constants in the fatigue life prediction models. In addition to laboratory testing, field testing can also be conducted to validate the fatigue life predictions and to assess the performance of the material under actual service conditions.

##### Multi-objective Optimization and Robust Design

In the design process, there are often multiple objectives that need to be optimized, such as cost, quality, and noise. This is where multi-objective optimization comes into play. It involves finding the design parameters that minimize all the criteria, taking into account the trade-offs between them.

Robust design optimization is another important aspect of the design process. It involves designing the product to ensure its functionality despite the unavoidable variability and uncertainty in the material properties and loading conditions. This is particularly important in fatigue design, as the fatigue life of a material can be significantly affected by these factors.

### Conclusion

In this chapter, we have delved into the complex and fascinating world of material fatigue. We have explored how repeated stress, even when well below the material's ultimate tensile strength, can lead to failure over time. This phenomenon, known as fatigue, is a critical consideration in the design and maintenance of many mechanical systems, from aircraft to bridges to medical implants.
We have also examined the various factors that influence fatigue, including the type of material, the frequency and amplitude of the applied stress, and the presence of defects or stress concentrations. We have seen how these factors can be manipulated to improve a material's resistance to fatigue, through techniques such as shot peening or the use of surface treatments.

Finally, we have discussed the methods used to study and predict fatigue, including the S-N curve and the Paris law. These tools allow engineers to estimate the lifespan of a component under cyclic loading, and to design systems that are both safe and durable.

In conclusion, understanding the mechanical behavior of materials under cyclic loading is crucial in many fields of engineering. While fatigue is a complex and multifaceted phenomenon, the principles and methods discussed in this chapter provide a solid foundation for further study and application.

### Exercises

#### Exercise 1
Explain the phenomenon of fatigue in your own words. What factors contribute to fatigue, and why is it a concern in engineering?

#### Exercise 2
Describe the S-N curve and how it is used in the study of material fatigue. What information can be gleaned from this curve?

#### Exercise 3
Discuss the Paris law and its application in predicting fatigue life. How does it relate to the S-N curve?

#### Exercise 4
Consider a material that is subject to cyclic loading. What strategies could be used to improve its resistance to fatigue?

#### Exercise 5
Imagine you are an engineer designing a bridge. How would you take into account the potential for material fatigue in your design process?

## Chapter: Chapter 16: Final Exam

### Introduction

The final chapter of "Mechanical Behavior of Materials: A Comprehensive Guide" is designed to consolidate your understanding of the material covered throughout the book.
This chapter, titled "Final Exam", is not a traditional chapter with new content, but rather a comprehensive assessment of the concepts, theories, and applications we have explored in the preceding chapters.

The purpose of this final exam is to provide a holistic review of the mechanical behavior of materials, allowing you to reflect on the knowledge you have gained and apply it in a practical context. The questions will cover a wide range of topics, from the basic principles of material mechanics to the more complex theories and models.

You will find questions that require you to apply mathematical formulas and equations. Remember to use the $ and $$ delimiters to insert math expressions in TeX and LaTeX style syntax. For example, if you need to express a mathematical equation, you would write it as `$$\Delta w = ...$$`. This format will ensure that your mathematical expressions are clear and correctly interpreted.

This final exam is an opportunity to demonstrate your understanding and application of the mechanical behavior of materials. It will challenge you to think critically, solve complex problems, and make connections between different areas of study.

In conclusion, this chapter is a culmination of your journey through the mechanical behavior of materials. It is a chance to reflect on what you have learned, apply your knowledge, and prepare for future studies or professional applications in this field. Good luck!

### Section: 16.1 Final exam:

#### 16.1a Exam preparation strategies

Preparing for the final exam of "Mechanical Behavior of Materials: A Comprehensive Guide" requires a comprehensive understanding of the material covered throughout the book. Here are some strategies to help you prepare effectively:

1. **Review the Material**: Start by revisiting the chapters and sections of the book. Pay special attention to the key concepts, theories, and applications discussed in each chapter.
Make sure you understand the principles of material mechanics and the more complex theories and models.

2. **Practice Problems**: The best way to understand the mechanical behavior of materials is by solving problems. Try to solve the problems provided at the end of each chapter. This will not only help you understand the concepts better but also give you a feel for the type of questions you might encounter in the exam.

3. **Understand Mathematical Formulas and Equations**: The exam will require you to apply mathematical formulas and equations. Make sure you can recall the key relations (such as $\sigma = E \cdot \varepsilon$), know what each symbol represents, and can apply them with consistent units.

4. **Listen to Recordings**: If available, listen to recordings of lectures or discussions on the topics covered in the book. This can help reinforce your understanding of the material.

5. **Group Study**: Studying in a group can be beneficial. It allows you to discuss concepts, solve problems together, and learn from each other.

6. **Take Breaks**: While studying, remember to take regular breaks. This helps prevent fatigue and keeps your mind fresh.

Remember, the goal of the final exam is not just to test your knowledge but also to help you consolidate your understanding of the mechanical behavior of materials. So, use this as an opportunity to reflect on what you have learned and how you can apply it in real-world situations. Good luck with your preparation!

#### 16.1b Review of course material

The final exam will cover all the material presented in the book, so it's crucial to review each chapter thoroughly. Here are some key points to focus on from each chapter:

1.
**Chapter 1: Introduction to Material Mechanics**: Understand the basic principles of material mechanics, including stress, strain, and modulus of elasticity. Be able to define and explain these terms. For example, stress is defined as the force per unit area within materials that arises from externally applied forces, uneven heating, or permanent deformation, and is symbolized by the Greek letter sigma ($\sigma$). 2. **Chapter 2: Elasticity**: Review the concept of elasticity, the property of a material to return to its original shape after deformation when the stress causing it is removed. Understand Hooke's Law, which states that the strain in a solid is proportional to the applied stress within the elastic limit of that solid. The formula for Hooke's Law is given by: $$\sigma = E \cdot \varepsilon$$ where $\sigma$ is the stress, $E$ is the modulus of elasticity, and $\varepsilon$ is the strain. 3. **Chapter 3: Plasticity**: Understand the concept of plasticity, the property of a material to undergo permanent deformation after the removal of the stress that caused it. Be able to explain the yield strength and the difference between elastic and plastic deformation. 4. **Chapter 4: Hardness**: Review the different methods of measuring the hardness of a material, such as the Brinell and Rockwell hardness tests. Understand the relationship between hardness and other material properties. 5. **Chapter 5: Fracture Mechanics**: Understand the concepts of fracture mechanics, including fracture toughness, stress intensity factor, and crack propagation. Be able to explain the difference between brittle and ductile fracture. 6. **Chapter 6: Fatigue**: Review the phenomenon of fatigue, the weakening of a material caused by repeatedly applied loads. Understand the S-N curve and the factors that influence fatigue life. 7. 
**Chapter 7: Creep**: Understand the concept of creep, the tendency of a material to move or deform permanently over time under the influence of stresses below its yield strength. Be able to explain the stages of creep and the factors that influence it. 8. **Chapter 8: Viscoelasticity**: Review the concept of viscoelasticity, the property of materials that exhibit both viscous and elastic characteristics when undergoing deformation. Understand the principles of time-dependent deformation and recovery. Remember, the key to success in the final exam is a thorough understanding of these concepts and the ability to apply them to solve problems. So, review the material, practice problems, and don't hesitate to seek help if you're struggling with any concepts. ### Conclusion In this chapter, we have delved into the intricate world of the mechanical behavior of materials. We have explored the fundamental principles that govern the mechanical properties of materials, and how these properties can be manipulated and controlled to suit various applications. We have also examined the various factors that can influence the mechanical behavior of materials, such as temperature, stress, strain, and the microstructure of the material. We have also discussed the importance of understanding the mechanical behavior of materials in various fields, such as engineering, materials science, and manufacturing. The knowledge gained from this chapter will be invaluable in helping you to make informed decisions about the selection and use of materials in your future projects. Remember, the mechanical behavior of materials is a complex and multifaceted field. It requires a deep understanding of the underlying principles and a keen eye for detail. But with the knowledge and skills you have gained from this chapter, you are well-equipped to tackle any challenges that come your way. ### Exercises #### Exercise 1 Explain the relationship between stress and strain in materials. 
How does this relationship influence the mechanical behavior of materials? #### Exercise 2 Discuss the role of temperature in the mechanical behavior of materials. How does temperature affect the strength and ductility of a material? #### Exercise 3 Describe the influence of the microstructure of a material on its mechanical behavior. How can the microstructure be manipulated to improve the mechanical properties of a material? #### Exercise 4 Explain the concept of plastic deformation in materials. What factors can influence the onset and extent of plastic deformation? #### Exercise 5 Discuss the importance of understanding the mechanical behavior of materials in the field of engineering. How can this knowledge be applied in the design and manufacture of engineering components? ## Chapter 17: Material Properties ### Introduction The study of material properties is a fundamental aspect of understanding the mechanical behavior of materials. This chapter, "Material Properties," aims to provide a comprehensive overview of the various properties that define the behavior of materials under different conditions. Material properties are the characteristics that describe a material's response to forces, temperature changes, and other environmental conditions. These properties are crucial in determining a material's suitability for specific applications. They include mechanical properties such as strength, ductility, hardness, and toughness, as well as thermal properties like thermal conductivity and specific heat capacity. In addition to these, we will also delve into the electrical properties of materials, such as resistivity and dielectric constant, which are essential in the design and operation of electronic devices. Furthermore, we will explore the magnetic properties of materials, which are vital in applications such as data storage, transformers, and electric motors. 
This chapter will also discuss the importance of understanding the relationship between a material's microstructure and its properties. The arrangement and interaction of atoms and molecules within a material can significantly influence its behavior, and understanding this relationship is key to predicting and manipulating a material's properties. In the subsequent sections, we will delve into the details of these properties, discussing their definitions, how they are measured, and their significance in various applications. We will also explore how these properties can be altered through processes such as heat treatment, alloying, and mechanical working. By the end of this chapter, you should have a solid understanding of the fundamental properties of materials and how they influence a material's behavior under different conditions. This knowledge will provide a foundation for the more advanced topics covered in the subsequent chapters of this book. ### Section: 17.1 Mechanical Properties Mechanical properties are the characteristics that describe a material's response to mechanical loads, such as forces and stresses. These properties are crucial in determining a material's suitability for specific applications, particularly those that involve mechanical loading. They include properties such as strength, ductility, hardness, and toughness. In this section, we will delve into the details of these properties, discussing their definitions, how they are measured, and their significance in various applications. #### 17.1a Tensile Strength Tensile strength, also known as ultimate tensile strength (UTS), is one of the most fundamental mechanical properties of a material. It is defined as the maximum stress that a material can withstand while being stretched or pulled before necking, which is when the specimen's cross-section starts to significantly decrease. It is measured in units of force per unit area, typically in Pascals (Pa) or pounds per square inch (psi). 
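To make the definition concrete before we look at how it is measured, the sketch below computes engineering stress and strain from force-elongation readings for a round tensile specimen, then picks out the ultimate tensile strength and an estimate of Young's modulus from the initial linear region. All specimen dimensions and load values here are hypothetical, illustrative numbers, not data from a real test.

```python
import math

# Hypothetical round tensile specimen (illustrative values only)
d0 = 0.0127                      # initial gauge diameter, m (12.7 mm)
L0 = 0.050                       # initial gauge length, m (50 mm)
A0 = math.pi * d0**2 / 4.0       # initial cross-sectional area, m^2

# (force in N, elongation in m) pairs recorded during the pull
data = [(0, 0.0), (12000, 0.00005), (24000, 0.00010),
        (40000, 0.00060), (46000, 0.00300), (43000, 0.00700)]

# Engineering stress = F / A0, engineering strain = dL / L0
stress = [F / A0 for F, dL in data]
strain = [dL / L0 for F, dL in data]

# Ultimate tensile strength: the maximum engineering stress reached
uts = max(stress)

# Young's modulus from the initial (linear elastic) part of the curve
E = (stress[1] - stress[0]) / (strain[1] - strain[0])

print(f"UTS = {uts / 1e6:.0f} MPa")
print(f"E   = {E / 1e9:.0f} GPa")
```

Note that these are *engineering* quantities, referred to the original area $A_0$ and length $L_0$; beyond necking, the true stress in the shrinking cross-section keeps rising even as the engineering stress falls.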
Tensile strength is determined using a tensile test, which is a fundamental materials science and engineering test. In a tensile test, a sample is subjected to a controlled tension until failure. The sample, or tensile specimen, usually has a standardized sample cross-section, with two shoulders and a gauge section in between. The gauge section's smaller diameter allows the deformation and failure to occur in this area. Properties that are directly measured via a tensile test are ultimate tensile strength, breaking strength, maximum elongation, and reduction in area. From these measurements, the following properties can also be determined: Young's modulus, Poisson's ratio, yield strength, and strain-hardening characteristics. Uniaxial tensile testing is the most commonly used method for obtaining the mechanical characteristics of isotropic materials, although some materials require biaxial tensile testing. Tensile testing serves a variety of purposes. It can be used to select a material for an application, for quality control, and to predict how a material will react under other types of forces. Furthermore, it can provide valuable data about a material's ductility, which is the degree to which a material can deform under tensile stress before failure. In the next subsection, we will discuss another important mechanical property, compressive strength, in more detail.

#### 17.1b Compressive Strength

Compressive strength is another fundamental mechanical property of materials. It is defined as the maximum stress that a material can withstand under compression before failure. Similar to tensile strength, it is measured in units of force per unit area, typically in Pascals (Pa) or pounds per square inch (psi). The compressive strength of a material is determined using a compression test. In a compression test, a sample is subjected to a controlled compression until failure. The sample, or compression specimen, is usually cylindrical or cubical in shape.
The failure of the material under compression is typically due to buckling or crushing, depending on the material's ductility and the applied stress state. Properties that are directly measured via a compression test are ultimate compressive strength and deformation characteristics. From these measurements, other properties such as modulus of elasticity, yield strength, and strain-hardening characteristics can also be determined. Compressive strength is a critical property for materials used in structures that bear heavy loads, such as buildings, bridges, and dams. It is also important for materials used in applications where resistance to wear and impact is required, such as in the manufacturing of gears, bearings, and rail tracks. It is important to note that the compressive strength of a material is not always directly related to its tensile strength. For instance, brittle materials like ceramics and concrete typically have high compressive strengths but low tensile strengths due to their inability to deform plastically. On the other hand, ductile materials like metals usually have comparable tensile and compressive strengths. In the next subsection, we will discuss another important mechanical property - the elastic modulus, which is a measure of a material's stiffness.

#### 17.1c Elastic Modulus

The Elastic Modulus, also known as Young's Modulus, is a fundamental mechanical property of materials that describes the relationship between stress (force per unit area) and strain (proportional deformation) in a material under the condition of uniaxial deformation. It is a measure of the stiffness of a material, or its resistance to elastic deformation.
The Elastic Modulus is defined as the ratio of stress ($\sigma$) to strain ($\epsilon$) in the linear elastic region of a material's stress-strain curve, and is given by the formula:

$$
E = \frac{\sigma}{\epsilon}
$$

where:
- $E$ is the Elastic Modulus,
- $\sigma$ is the applied stress, and
- $\epsilon$ is the resulting strain.

The units of Elastic Modulus are the same as stress, typically Pascals (Pa) or pounds per square inch (psi). The value of the Elastic Modulus depends on the material and its crystal structure. For instance, metals and alloys, which have a regular arrangement of atoms, typically have high values of Elastic Modulus. On the other hand, polymers and elastomers, which have a more random arrangement of molecules, typically have lower values.

The Elastic Modulus is an important property for materials used in applications where stiffness is critical, such as in the construction of buildings, bridges, and aircraft. It is also important for materials used in precision applications, such as in the manufacturing of microelectronic devices, where small deformations can lead to significant performance changes. It is important to note that the Elastic Modulus is a measure of a material's elastic behavior only. It does not provide information about a material's plastic behavior, which is the behavior under stress beyond the elastic limit. For this, other properties such as yield strength and ductility are considered. Having covered the key mechanical properties, we now turn to the thermal properties of materials, beginning with thermal expansion.

### Section: 17.2 Thermal Properties

#### 17.2a Thermal Expansion

Thermal expansion is a fundamental property of materials that describes how a material's dimensions change in response to changes in temperature. This property is crucial in many engineering applications, as it can significantly affect the performance and reliability of materials and systems under varying thermal conditions.
The thermal expansion of a material is quantified by its coefficient of thermal expansion ($\alpha$), which is defined as the fractional change in length (or volume) per degree change in temperature. It is typically expressed in units of per degree Celsius (1/°C) or per degree Kelvin (1/K). The linear coefficient of thermal expansion ($\alpha_L$) is given by the formula:

$$
\alpha_L = \frac{1}{L} \frac{dL}{dT}
$$

where:
- $\alpha_L$ is the linear coefficient of thermal expansion,
- $L$ is the original length of the material, and
- $\frac{dL}{dT}$ is the rate of change of length with respect to temperature.

Similarly, the volumetric coefficient of thermal expansion ($\alpha_V$) is given by the formula:

$$
\alpha_V = \frac{1}{V} \frac{dV}{dT}
$$

where:
- $\alpha_V$ is the volumetric coefficient of thermal expansion,
- $V$ is the original volume of the material, and
- $\frac{dV}{dT}$ is the rate of change of volume with respect to temperature.

For isotropic materials, the two coefficients are related by $\alpha_V \approx 3\alpha_L$. The value of the coefficient of thermal expansion depends on the material and the strength of its interatomic bonding. For instance, polymers and elastomers, whose chains are held together by relatively weak secondary bonds, typically have high values of $\alpha$; metals and alloys have intermediate values; and strongly bonded ceramics and glasses typically have low values. Thermal expansion is an important property for materials used in applications where temperature changes are significant, such as in the construction of buildings, bridges, and aircraft, where materials can be subjected to a wide range of temperatures. It is also important for materials used in precision applications, such as in the manufacturing of microelectronic devices, where small changes in dimensions can lead to significant performance changes. In the next subsection, we will discuss another important thermal property - thermal conductivity, which is a measure of a material's ability to conduct heat.
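Before moving on, the linear-expansion relation above can be checked numerically. For small temperature changes, integrating $\alpha_L = (1/L)\,dL/dT$ gives the familiar approximation $\Delta L \approx \alpha_L L \Delta T$. The sketch below applies it to a bar; the coefficient used is a typical handbook figure for aluminium, included purely for illustration.

```python
# Estimate dimensional changes from thermal expansion coefficients,
# using the small-change approximations dL = alpha_L * L * dT and
# dV = alpha_V * V * dT (with alpha_V ~ 3 * alpha_L for isotropic solids).

alpha_L = 23e-6    # 1/K, typical handbook value for aluminium (illustrative)
L = 2.0            # original length of the bar, m
dT = 80.0          # temperature rise, K

dL = alpha_L * L * dT
print(f"Length change: {dL * 1000:.2f} mm")

alpha_V = 3 * alpha_L
V = 0.01           # original volume, m^3
dV = alpha_V * V * dT
print(f"Volume change: {dV * 1e6:.1f} cm^3")
```

A millimetre-scale elongation over a two-metre bar is exactly the kind of dimensional change that expansion joints in bridges and rail tracks are designed to absorb.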
#### 17.2b Thermal Conductivity

Thermal conductivity is another essential thermal property of materials. It quantifies the ability of a material to conduct heat and is denoted by the symbol $k$. The thermal conductivity of a material is defined as the rate of heat transfer (in watts) through a square meter of material, one meter thick, per degree of temperature gradient (in Kelvin), which can be mathematically represented as:

$$
k = \frac{Q}{A \cdot \Delta T / d}
$$

where:
- $k$ is the thermal conductivity,
- $Q$ is the rate of heat transfer (in watts),
- $A$ is the area through which the heat is transferred,
- $\Delta T$ is the temperature difference across the material, and
- $d$ is the thickness of the material.

The unit of thermal conductivity in the International System of Units (SI) is watts per meter per Kelvin (W/m·K). The thermal conductivity of a material depends on several factors, including its phase (solid, liquid, or gas), its temperature, and its atomic or molecular structure. For instance, metals, which have a regular arrangement of atoms and free electrons, typically have high thermal conductivity. In contrast, gases and insulating materials, which lack free electrons and have atoms or molecules spaced far apart, typically have low thermal conductivity.

The thermal conductivity of a material is a crucial factor in many engineering applications. For example, in heat exchangers, materials with high thermal conductivity are desirable to facilitate the efficient transfer of heat. On the other hand, in thermal insulation applications, materials with low thermal conductivity are preferred to minimize heat transfer. In the next subsection, we will explore another important thermal property - specific heat capacity, which is closely related to thermal conductivity through the thermal diffusivity.

#### 17.2c Specific Heat Capacity

Specific heat capacity, often simply referred to as specific heat, is a measure of the amount of heat energy required to raise the temperature of a specific amount of a substance by a certain degree. It is denoted by the symbol $c$ and is defined mathematically as:

$$
c = \frac{Q}{m \cdot \Delta T}
$$

where:
- $c$ is the specific heat capacity,
- $Q$ is the amount of heat absorbed or released,
- $m$ is the mass of the substance, and
- $\Delta T$ is the change in temperature.

The unit of specific heat capacity in the International System of Units (SI) is joules per kilogram per Kelvin (J/kg·K). The specific heat capacity of a material is a function of its atomic or molecular structure and its phase (solid, liquid, or gas). For instance, metals typically have lower specific heat capacities (per unit mass) than non-metals, largely because their atoms are relatively heavy: a kilogram of metal contains fewer atoms over which the absorbed heat energy is distributed.

The specific heat capacity of a material is a critical factor in many engineering applications. For example, in heat exchangers, materials with high specific heat capacities are desirable as they can absorb and release large amounts of heat without undergoing significant changes in temperature. Conversely, in applications where rapid heating or cooling is required, materials with low specific heat capacities are preferred. Together with thermal conductivity and density, the specific heat capacity determines a material's thermal diffusivity, defined as $k / (\rho c)$ where $\rho$ is the density, which governs how quickly temperature changes propagate through a material.

### Conclusion

In this chapter, we have delved into the fascinating world of material properties and their mechanical behavior. We have explored how these properties influence the performance of materials under different conditions and how they can be manipulated to achieve desired outcomes. We have also discussed the importance of understanding these properties in the design and manufacturing of various products and structures.

The mechanical behavior of materials is a complex field that requires a deep understanding of the underlying principles of physics, chemistry, and engineering. It is a field that is constantly evolving, with new materials and technologies being developed all the time. As such, it is crucial for engineers and scientists to stay abreast of the latest developments and research in this area.

In conclusion, the study of material properties and their mechanical behavior is a vital aspect of materials science and engineering. It provides the foundation for the design and manufacture of a wide range of products, from everyday items to advanced technological devices. By understanding these properties, we can create materials that are stronger, lighter, and more durable, thereby improving the quality of our lives and the sustainability of our planet.
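As a final worked illustration before the exercises, the specific heat relation from Section 17.2c, $Q = m \cdot c \cdot \Delta T$, can be applied directly. The property values below are typical textbook figures for water and carbon steel, used purely for illustration.

```python
# Heat required to raise the temperature of a mass m by dT: Q = m * c * dT.
# Specific heat values are typical textbook figures (illustrative only).

c_water = 4186.0   # J/(kg*K), specific heat capacity of liquid water
c_steel = 490.0    # J/(kg*K), approximate value for carbon steel

m = 2.0            # mass, kg
dT = 30.0          # temperature rise, K

Q_water = m * c_water * dT
Q_steel = m * c_steel * dT

print(f"Water: {Q_water / 1000:.1f} kJ")
print(f"Steel: {Q_steel / 1000:.1f} kJ")
```

The order-of-magnitude gap between the two results is why water is such an effective coolant: it soaks up far more heat per kilogram for the same temperature rise.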
### Exercises #### Exercise 1 Explain the relationship between the mechanical properties of a material and its performance under different conditions. Provide examples to illustrate your points. #### Exercise 2 Discuss the role of material properties in the design and manufacturing of products. How can understanding these properties help in improving the quality and performance of products? #### Exercise 3 Describe the process of manipulating the properties of a material to achieve desired outcomes. What are some of the techniques used in this process? #### Exercise 4 Research and write a brief report on the latest developments in the field of material properties and their mechanical behavior. What are some of the new materials and technologies being developed? #### Exercise 5 Discuss the importance of staying abreast of the latest research and developments in the field of material properties and their mechanical behavior. How can this knowledge benefit engineers and scientists in their work? ## Chapter: Material Testing ### Introduction Material testing is a critical aspect of materials science and engineering. It provides the necessary data and insights to understand the mechanical behavior of materials under various conditions. This chapter, "Material Testing," will delve into the fundamental concepts and methodologies involved in testing the mechanical properties of materials. The mechanical behavior of materials is a complex phenomenon that depends on a multitude of factors, including the material's composition, microstructure, and the environmental conditions to which it is subjected. Material testing is the process by which these behaviors are quantified, providing valuable information for design, manufacturing, and quality control purposes. Material testing can involve a variety of techniques, ranging from simple hardness tests to more complex fatigue and fracture tests. 
Each of these tests provides different insights into the material's behavior, and the choice of test often depends on the specific application and the properties of interest. In this chapter, we will explore the principles behind these tests, the types of data they generate, and how this data can be interpreted to gain a deeper understanding of a material's mechanical behavior. We will also discuss the limitations of these tests and the precautions that need to be taken to ensure accurate and reliable results. Whether you are a student, a researcher, or a professional in the field of materials science and engineering, this chapter will provide you with a comprehensive understanding of material testing. It will equip you with the knowledge and skills necessary to select the appropriate testing method for a given material and interpret the results in a meaningful way. As we delve into the world of material testing, remember that the ultimate goal is not just to gather data, but to use this data to make informed decisions about material selection, design, and manufacturing processes. With this in mind, let's embark on this journey of discovery and learning. ### Section: 18.1 Destructive Testing: Destructive testing, as the name suggests, involves testing materials to the point of failure to understand their mechanical behavior under extreme conditions. This form of testing is crucial in determining the material's mechanical properties such as tensile strength, compressive strength, shear strength, toughness, and hardness. While destructive testing may result in the loss of the material sample, the data obtained from these tests is invaluable in the design and manufacturing process, ensuring the safety and reliability of the final product. #### 18.1a Tensile Testing Tensile testing, also known as tension testing, is a fundamental method in destructive testing. It involves subjecting a sample to a controlled tension until failure. 
The primary properties directly measured via a tensile test are ultimate tensile strength, breaking strength, maximum elongation, and reduction in area. From these measurements, other properties can also be determined, such as Young's modulus, Poisson's ratio, yield strength, and strain-hardening characteristics. Uniaxial tensile testing is the most commonly used method for obtaining the mechanical characteristics of isotropic materials. However, some materials require biaxial tensile testing, with the main difference between these testing machines being how the load is applied on the materials. ##### Purposes of Tensile Testing Tensile testing serves a variety of purposes, including: 1. Determining the material's yield strength and ultimate tensile strength, which are critical for design and manufacturing processes. 2. Understanding the material's ductility, which is indicated by the amount of deformation that occurs before the material breaks. 3. Evaluating the material's strain-hardening characteristics, which can influence how the material responds to further deformation after yielding. 4. Providing a means to compare the mechanical properties of different materials, aiding in material selection for specific applications. ##### Tensile Specimen The preparation of test specimens is crucial and depends on the purposes of testing and the governing test method or specification. A tensile specimen usually has a standardized sample cross-section, consisting of two shoulders and a gauge (section) in between. The shoulders and grip section are generally larger than the gauge section by 33% for easy gripping. The smaller diameter of the gauge section allows the deformation and failure to occur in this area. The shoulders of the test specimen can be manufactured in various ways to mate with different grips in the testing machine. Each system has its advantages and disadvantages. 
For instance, shoulders designed for serrated grips are easy and cheap to manufacture, but the alignment of the specimen is dependent on the technician's skill. A pinned grip assures good alignment, while threaded shoulders and grips also assure good alignment, but the technician must know to thread each shoulder into the grip at least one diameter's length, otherwise, the threads can strip before the specimen fractures. In the following sections, we will delve deeper into the process of tensile testing, discussing the steps involved, the data obtained, and how to interpret this data to understand the material's mechanical behavior. #### 18.1b Hardness Testing Hardness testing is another crucial method in destructive testing. It measures the resistance of a material to deformation, particularly plastic deformation, indentation, or scratching. It is worth noting that hardness is not a fundamental property of a material, but rather a response to a specific test method. ##### Types of Hardness Tests There are several types of hardness tests, each with its unique method and scale. Some of the most common include: 1. **Brinell Hardness Test:** This test involves applying a known load to a hardened steel or carbide ball of known diameter. The diameter of the resulting indentation in the test material is then measured. The Brinell hardness number (BHN) is calculated using the formula: $$ BHN = \frac{2P}{\pi D(D - \sqrt{D^2 - d^2})} $$ where $P$ is the applied load in kilograms, $D$ is the diameter of the indenter in millimeters, and $d$ is the diameter of the indentation in millimeters. 2. **Rockwell Hardness Test:** The Rockwell test measures the permanent depth of indentation produced by a force/load on an indenter. The Rockwell hardness number is determined from the depth of penetration when a major load is applied after a minor load. 3. 
**Vickers Hardness Test:** The Vickers test, also known as the Diamond Pyramid Hardness test, uses a square-based diamond pyramid as an indenter. The hardness is determined by the measurement of the diagonal length of the indentation left by the indenter. ##### Applications of Hardness Testing Hardness testing is used for a variety of purposes, including: 1. Determining the suitability of a material for a specific application. 2. Assessing the effect of heat treatment on a material. 3. Comparing the hardness of different materials. 4. Predicting other material properties, such as tensile strength. It's important to note that while hardness testing is a quick and relatively inexpensive method of determining a material's mechanical properties, it should not be used as a standalone test for material selection. Other tests, such as tensile and impact tests, should also be conducted to provide a comprehensive understanding of the material's behavior under various conditions. #### 18.1c Impact Testing Impact testing is a destructive testing method used to determine the energy absorbed by a material during fracture. This absorbed energy is a measure of a material's toughness, which is a property of paramount importance in applications where resistance to sudden applied loads is significant. ##### Types of Impact Tests There are several types of impact tests, each with its unique method and purpose. Some of the most common include: 1. **Charpy Impact Test:** This test involves striking a standard notched specimen with a controlled weight pendulum swung from a set height. The energy absorbed by the specimen to fracture is measured and is an indication of the material's toughness. The Charpy impact test is commonly used to evaluate the toughness of metals, and it is particularly useful in studying the effects of temperature on material toughness. 2. 
**Izod Impact Test:** Similar to the Charpy test, the Izod impact test also measures the energy absorbed by a material during fracture. The difference lies in the way the test is set up: in the Izod test, the specimen stands upright as a cantilever beam, whereas in the Charpy test, the specimen is held horizontally between two supports. 3. **Drop Weight Test:** This test measures the resistance of a material to failure under a high-energy sudden impact. It is particularly useful for testing the toughness of plastics and ceramics. ##### Applications of Impact Testing Impact testing is used for a variety of purposes, including: 1. Determining the toughness of a material and its suitability for use in applications where it may be subjected to sudden loads. 2. Assessing the effect of temperature on a material's toughness. 3. Comparing the toughness of different materials. 4. Predicting the behavior of materials under impact loading conditions. It's important to note that while impact testing provides valuable information about a material's behavior under sudden loads, it is a destructive testing method and therefore not suitable for testing materials in service. Furthermore, the results of impact tests are highly dependent on the specific test setup and conditions, and therefore should be interpreted with caution. #### 18.2a Ultrasonic Testing Ultrasonic testing (UT) is a non-destructive testing technique that utilizes the propagation of ultrasonic waves in the material being tested. This method is commonly used to detect internal flaws or to characterize materials. The ultrasonic pulse-waves used in most UT applications have center frequencies ranging from 0.1-15 MHz, and occasionally up to 50 MHz. A common application of UT is ultrasonic thickness measurement, which is used to monitor the thickness of the test object, such as pipework corrosion. 
##### Principle of Ultrasonic Testing The fundamental principle of ultrasonic testing is the transmission of high-frequency sound waves into a material to detect imperfections or changes in the material properties. The sound waves travel through the material with some attenuation due to material characteristics and are reflected at interfaces. The reflected wave signal is transformed into an electrical signal by the transducer and is displayed in a graphical manner. The time it takes for the ultrasonic wave to travel through the material is directly related to the distance that the wave has traveled. If there are no discontinuities in the wave path, the wave will travel at a constant speed; however, if there is a discontinuity (such as a crack), some of the wave will be reflected back to the transducer. By knowing the speed of the sound through the material and the time taken for the wave to return to the transducer, the flaw position can be determined. ##### Applications of Ultrasonic Testing Ultrasonic testing is widely used in various industries including steel and aluminium construction, metallurgy, manufacturing, aerospace, automotive, and other transportation sectors. It is often performed on steel and other metals and alloys, though it can also be used on concrete, wood, and composites, albeit with less resolution. 1. **Flaw Detection:** UT is commonly used to detect internal and surface flaws such as cracks, voids, and porosity. It is particularly useful for detecting flaws that are deep within the material or in complex geometries. 2. **Material Characterization:** UT can be used to determine material properties such as grain size, anisotropy, and elastic constants. This is particularly useful in materials research and development. 3. **Thickness Measurement:** UT is often used to measure the thickness of a material, particularly in situations where only one side of the material is accessible. This is commonly used in the inspection of pipes and tanks. 
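The time-of-flight relationship described above can be sketched in a few lines. This is a hedged illustration, not part of any UT standard; the wave speed used (roughly 5900 m/s for longitudinal waves in steel) is a typical handbook figure, and the function name is illustrative.

```python
def flaw_depth(round_trip_time_s: float, wave_speed_m_per_s: float) -> float:
    """Depth of a reflector from pulse-echo time of flight.

    The pulse travels to the reflector and back, so the one-way
    distance is half the total path: d = v * t / 2.
    """
    return wave_speed_m_per_s * round_trip_time_s / 2.0

# Longitudinal waves travel at roughly 5900 m/s in steel. An echo
# returning 10 microseconds after the pulse indicates a reflector at:
depth = flaw_depth(10e-6, 5900.0)
print(f"{depth * 1000:.1f} mm")  # 29.5 mm below the surface
```

The same arithmetic underlies ultrasonic thickness gauging: with no internal flaw, the echo comes from the back wall, and the computed distance is the wall thickness.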
##### History of Ultrasonic Testing

The first efforts to use ultrasonic testing to detect flaws in solid material occurred in the 1930s. On May 27, 1940, U.S. researcher Dr. Floyd Firestone of the University of Michigan applied for a U.S. invention patent for the first practical ultrasonic testing method. The patent was granted on April 21, 1942, as U.S. Patent No. 2,280,226, titled "Flaw Detecting Device and Measuring Instrument". This marked the beginning of the use of ultrasonic waves for nondestructive testing.

In the next section, we will explore another non-destructive testing method, Radiographic Testing, and its applications in the field of material science.

#### 18.2b Radiographic Testing

Radiographic testing (RT) is another non-destructive testing method that uses either x-rays or gamma rays to view the internal structure of a component. In much the same way as medical radiography, industrial radiography provides a permanent record of the product being inspected. This method is commonly used to inspect welded joints for hidden cracks, porosity, or other discontinuities.

##### Principle of Radiographic Testing

The fundamental principle of radiographic testing involves the exposure of a test object to penetrating radiation. The radiation will pass through the object being inspected and onto a detector or a piece of film. The resulting image on the detector or film will show any changes in the amount of radiation caused by material thickness variations or by the presence of internal flaws or inclusions.

The amount of energy absorbed by the object depends on the object's thickness and density. Thicker and denser areas absorb more energy, so less radiation reaches the detector and these regions appear lighter on the film, since film darkens where it receives more exposure. Conversely, thinner and less dense areas absorb less energy, so more radiation reaches the detector and these regions appear darker on the film.
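The attenuation of penetrating radiation through a material follows the exponential Beer–Lambert law, $I = I_0 e^{-\mu x}$, where $\mu$ is the linear attenuation coefficient at the photon energy used. A brief sketch; the coefficient below is an assumed illustrative value, not data for any particular material or energy.

```python
import math

def transmitted_intensity(i0: float, mu_per_mm: float, thickness_mm: float) -> float:
    """Beer-Lambert attenuation: I = I0 * exp(-mu * x)."""
    return i0 * math.exp(-mu_per_mm * thickness_mm)

mu = 0.05  # assumed attenuation coefficient, 1/mm (illustrative only)
thin = transmitted_intensity(100.0, mu, 10.0)   # 10 mm of material
thick = transmitted_intensity(100.0, mu, 20.0)  # 20 mm of material
print(round(thin, 1), round(thick, 1))  # 60.7 36.8
```

Note that doubling the thickness does not halve the transmitted intensity; it squares the attenuation factor, which is why modest thickness variations register clearly on a radiograph.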
##### Applications of Radiographic Testing Radiographic testing is widely used in a variety of industries, including the petrochemical, nuclear, aerospace, and automotive industries. It is particularly useful for inspecting welds on pipes and pressure vessels, and it can also be used to inspect non-metal components such as ceramics. 1. **Inspection of Welds:** Radiographic testing is commonly used to inspect the quality of welds on piping, pressure vessels, high-capacity storage containers, pipelines, and some structural welds. The beam of radiation must be directed to the middle of the section under examination and must be normal to the material surface at that point. The length of weld under examination for each exposure should be such that the thickness of the material at the diagnostic extremities does not exceed the actual thickness at that point by more than 6%. 2. **Inspection of Non-Metal Components:** Non-metal components such as ceramics used in the aerospace industries are also regularly tested using radiographic testing. Theoretically, industrial radiographers could radiograph any solid, flat material or any hollow cylindrical or spherical object. 3. **Inspection of Corrosion or Mechanical Damage:** Radiographic testing can also be used to inspect for anomalies due to corrosion or mechanical damage in plate metal or pipewall. In the next section, we will discuss another non-destructive testing method, Magnetic Particle Testing. #### 18.2c Magnetic Particle Testing Magnetic Particle Testing (MPT), also known as Magnetic Particle Inspection (MPI), is a non-destructive testing method that is used to detect surface and near-surface discontinuities in ferromagnetic materials. This method is widely used in various industries such as aerospace, automotive, and petrochemical for the inspection of castings, forgings, and weldments. 
##### Principle of Magnetic Particle Testing The fundamental principle of magnetic particle testing involves the application of a magnetic field to the test object. This can be achieved either by direct or indirect magnetization. In direct magnetization, the electric current is passed through the test object, inducing a magnetic field. In indirect magnetization, the magnetic field is applied from an external source. When the magnetic field is applied, the presence of a surface or near-surface discontinuity in the material causes a leakage field. When iron particles are applied to the surface of the test object, they are attracted to the leakage field, forming an indication that is visible to the inspector. The pattern formed by the particles can provide information about the size, shape, and orientation of the discontinuity. ##### Applications of Magnetic Particle Testing Magnetic particle testing is widely used in various industries for the inspection of ferromagnetic materials. Some of the common applications include: 1. **Inspection of Welds:** MPT is commonly used to inspect welds for surface and near-surface discontinuities. It is particularly useful for the inspection of welds in pipelines, pressure vessels, and structural components. 2. **Inspection of Castings and Forgings:** MPT is used to inspect castings and forgings for surface and near-surface discontinuities that may have formed during the manufacturing process. 3. **Inspection of Aerospace Components:** In the aerospace industry, MPT is used to inspect engine components, landing gear, and other critical parts for surface and near-surface discontinuities. It is important to note that while MPT is a powerful inspection tool, it can only be used on ferromagnetic materials and is only capable of detecting surface and near-surface discontinuities. For deeper discontinuities, other methods such as ultrasonic testing or radiographic testing may be required. 
### Conclusion In this chapter, we have delved into the fundamental aspects of material testing, a critical process in understanding the mechanical behavior of materials. We have explored various testing methods, each with its unique approach and purpose, and how they contribute to the overall understanding of a material's mechanical properties. Material testing is a crucial step in the design and manufacturing process, as it provides valuable data about a material's strength, ductility, hardness, and other mechanical properties. This information is vital in making informed decisions about material selection for specific applications, ensuring safety, reliability, and efficiency in the final product. We have also discussed the importance of accurate and precise testing, as well as the potential implications of errors or inaccuracies in test results. It is essential to understand that while testing provides valuable data, it is only as reliable as the testing methods and equipment used. Therefore, continuous advancements in testing technologies and methodologies are necessary to improve the accuracy and reliability of test results. In conclusion, material testing is a complex but essential process in the field of materials science. It provides the foundation for understanding the mechanical behavior of materials, enabling engineers and scientists to design and manufacture products that meet specific performance requirements. As technology continues to advance, so too will the methods and techniques used in material testing, further enhancing our understanding of materials and their mechanical behavior. ### Exercises #### Exercise 1 Discuss the importance of material testing in the field of materials science. How does it contribute to our understanding of the mechanical behavior of materials? #### Exercise 2 Compare and contrast the different material testing methods discussed in this chapter. What are the advantages and disadvantages of each method? 
#### Exercise 3 Explain the potential implications of errors or inaccuracies in material testing results. How can these be minimized or prevented? #### Exercise 4 Discuss the role of technology in material testing. How has it improved the accuracy and reliability of test results? #### Exercise 5 Imagine you are tasked with selecting a material for a specific application. How would you use the data obtained from material testing to make your decision? ## Chapter: Material Selection ### Introduction The process of material selection is a critical aspect in the field of materials science and engineering. It involves the careful consideration of various factors such as the mechanical properties of materials, their cost, availability, and the specific requirements of the application at hand. This chapter, "Material Selection," aims to provide a comprehensive understanding of the principles and methodologies involved in this process. The mechanical behavior of materials plays a significant role in their selection. The strength, ductility, toughness, hardness, and other mechanical properties of a material can greatly influence its suitability for a particular application. For instance, a material with high strength and hardness might be chosen for applications that require resistance to deformation and wear, such as in the manufacturing of machine parts. However, the mechanical properties are not the only factors to consider. The cost and availability of materials are also crucial. A material might have excellent mechanical properties, but if it is too expensive or not readily available, it may not be a practical choice. Therefore, a balance must be struck between the desired properties and the practical considerations. Moreover, the specific requirements of the application also play a vital role in material selection. For example, in the aerospace industry, materials must not only have high strength and toughness, but also low weight. 
Similarly, in the biomedical field, materials must be biocompatible and non-toxic.

In this chapter, we will delve into these considerations in detail, providing you with the knowledge and tools to make informed decisions in the selection of materials. We will also discuss various material selection methodologies and strategies, helping you navigate the complex landscape of materials science and engineering. Whether you are a student, a researcher, or a professional in the field, this chapter will serve as a valuable guide in your journey.

### Section: 19.1 Material Selection Criteria:

#### 19.1a Cost

The cost of a material is a critical factor in the selection process. It is not enough for a material to possess the desired mechanical properties; it must also be economically viable for the intended application. The cost of a material can be influenced by various factors, including the cost of raw materials, processing costs, and market demand and supply dynamics.

For instance, titanium alloys combine high strength, low density, and excellent corrosion resistance, yet their expensive extraction and difficult machining often rule them out where a heat-treated steel or an aluminium alloy performs adequately at a fraction of the cost. Processing costs can matter as much as raw-material costs: a material that is cheap per kilogram may still be expensive to form, join, or finish. Market dynamics also play a role; the prices of commodity metals fluctuate with supply and demand, which can change the economics of a design over its production life. Finally, cost should be judged over the whole life cycle: a corrosion-resistant alloy with a higher purchase price may prove cheaper than painted steel once maintenance and replacement are accounted for.
In summary, the cost of a material is a complex factor that depends on a variety of elements. It is crucial to consider not only the upfront cost of the material but also the long-term costs associated with its use, maintenance, and disposal. A comprehensive understanding of these costs can help in making informed decisions in the material selection process.

#### 19.1b Availability

The availability of a material is another crucial factor in the material selection process. This refers to the ease with which a material can be sourced and the extent to which it is available for use. The availability of a material can be influenced by factors such as geographical location, production capacity, and market demand and supply dynamics.

For instance, rare-earth elements used in high-performance permanent magnets are mined and refined in relatively few regions, so designs that depend on them are exposed to concentrated supply chains. Specialty alloys may be produced by only a handful of mills and carry long lead times, whereas commodity steels and aluminium alloys are available worldwide in standard forms and sizes.

Availability can also change over time. When a grade is discontinued or a supplier leaves the market, the materials needed for repair and maintenance of existing products may no longer be readily obtainable, which is itself a reason to prefer widely supported materials for long-lived designs.
A material's practical availability is ultimately determined by producers' capacity and by market demand: if a grade cannot be obtained in the required quantities, forms, and time frame, it is not a viable choice, regardless of its other properties.

In summary, the availability of a material is a critical factor that can influence the feasibility of a product or project. It is essential to consider not only the current availability of a material but also its projected availability in the future. A comprehensive understanding of these factors can help in making informed decisions in the material selection process.

#### 19.1c Performance Requirements

Performance requirements are a critical aspect of material selection. These requirements are often dictated by the intended function of the product or system in which the material will be used. Performance requirements can encompass a wide range of factors, including mechanical properties, thermal properties, electrical properties, and more.

For instance, the materials in a laptop computer must satisfy several requirements at once: heat spreaders need high thermal conductivity to manage heat dissipation, conductors need high electrical conductivity, and the casing and keyboard need adequate mechanical strength and stiffness at low weight.

In the automotive industry, performance requirements can be particularly stringent. An engine component, for example, must withstand high temperatures, resist corrosion, and provide sufficient strength and rigidity.
The chassis and suspension materials of a high-performance sports car must meet even more demanding performance requirements, including high strength-to-weight ratios and excellent fatigue resistance. In the biomedical field, an implant material must additionally be biocompatible and resist corrosion in body fluids; the operating environment shapes the requirement set just as much as the mechanical loads do.

In summary, performance requirements are a critical factor in material selection. They dictate the properties that a material must possess to function effectively in a given application. Therefore, understanding these requirements is crucial in the material selection process. It is also important to note that these requirements can vary significantly depending on the specific application, and therefore a material that is suitable for one application may not be suitable for another.

#### 19.2a Identifying Design Requirements

The first step in the material selection process is identifying the design requirements. These requirements are the constraints and conditions that the selected material must meet to ensure the successful operation of the product or system. Design requirements can be categorized into two types: functional and non-functional requirements.

Functional requirements are directly related to the specific functions that the material must perform. These can include mechanical properties such as strength, ductility, and hardness, thermal properties such as thermal conductivity and thermal expansion, and electrical properties such as electrical conductivity and resistivity. For example, a material used in the construction of a jet engine must have high strength and temperature resistance to withstand the extreme conditions inside the engine.
Non-functional requirements, on the other hand, are not directly related to the specific functions of the material but are still important considerations in the material selection process. These can include cost, availability, and environmental impact. For instance, a material used in the construction of a consumer product must be cost-effective and readily available to ensure the product can be produced and sold at a reasonable price. All design requirements must be verifiable. This means that they must be able to be tested or otherwise confirmed to ensure that they have been met. Verification methods can include testing, analysis, demonstration, inspection, or review of design. For example, the strength of a material can be verified through tensile testing, while its thermal conductivity can be verified through thermal analysis. It's important to note that some requirements, due to their nature, may not be verifiable. These include requirements that the system must "never" or "always" exhibit a particular property. In such cases, these requirements must be rewritten to be verifiable. For instance, a requirement that a material must "never" fail can be rewritten as a requirement that the material must have a specific minimum strength or fatigue life. In summary, identifying design requirements is a crucial first step in the material selection process. These requirements dictate the properties that a material must possess to be suitable for a given application. Therefore, understanding these requirements is essential to making an informed and effective material selection. #### 19.2b Evaluating Material Properties After identifying the design requirements, the next step in the material selection process is evaluating the properties of potential materials. This involves comparing the properties of different materials to the design requirements to determine which materials are suitable for the application. 
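The idea of verifiable requirements lends itself to a simple screening step: each requirement becomes a pass/fail test applied to candidate properties. The sketch below uses rough, illustrative property figures chosen for the example, not vetted data; real screening would draw on a materials database.

```python
# Candidate properties (illustrative order-of-magnitude figures only).
candidates = {
    "low-carbon steel":  {"yield_MPa": 250, "max_service_C": 400},
    "Ti-6Al-4V":         {"yield_MPa": 880, "max_service_C": 400},
    "nickel superalloy": {"yield_MPa": 900, "max_service_C": 900},
}

# Each verifiable requirement maps a property to a pass/fail check.
requirements = {
    "yield_MPa":     lambda v: v >= 800,  # minimum yield strength
    "max_service_C": lambda v: v >= 600,  # minimum service temperature
}

suitable = [name for name, props in candidates.items()
            if all(check(props[key]) for key, check in requirements.items())]
print(suitable)  # ['nickel superalloy']
```

Here the steel fails the strength screen and the titanium alloy fails the temperature screen, leaving a single candidate to carry forward into more detailed comparison.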
Material properties can be divided into several categories, including mechanical, thermal, electrical, and chemical properties. Each of these categories contains specific properties that can be evaluated. ##### Mechanical Properties Mechanical properties describe a material's response to mechanical forces. These properties include strength, ductility, hardness, and toughness. For example, β-Carbon nitride, a material with a predicted hardness equal to or above that of diamond, could be a suitable choice for applications requiring extreme hardness[^1^]. ##### Thermal Properties Thermal properties describe a material's response to changes in temperature. These properties include thermal conductivity, thermal expansion, and specific heat capacity. For instance, austenitic stainless steels, known for their excellent thermal properties, are often used in high-temperature applications[^2^]. ##### Electrical Properties Electrical properties describe a material's response to electric fields. These properties include electrical conductivity, resistivity, and dielectric constant. The electronegativity of elements, which can be found in the periodic table, is a key factor in determining these properties[^3^]. ##### Chemical Properties Chemical properties describe a material's reactivity with other substances. These properties include corrosion resistance, chemical stability, and reactivity. Interstitial defects, for example, can modify the physical and chemical properties of materials[^4^]. To efficiently evaluate these properties, materials databases (MDBs) can be used. MDBs store experimental, computational, standards, or design data for materials in a way that they can be retrieved efficiently by humans or computer programs[^5^]. These databases can provide fast access to a wide range of material properties, aiding in the material selection process. In conclusion, evaluating material properties is a crucial step in the material selection process. 
It allows engineers to identify materials that meet the design requirements and perform well under the expected operating conditions. [^1^]: Β-Carbon nitride. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Β-Carbon_nitride [^2^]: Austenitic stainless steel. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Austenitic_stainless_steel [^3^]: Electronegativities of the elements (data page). (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Electronegativities_of_the_elements_(data_page) [^4^]: Interstitial defect. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Interstitial_defect [^5^]: Materials database. (n.d.). In Wikipedia. Retrieved from https://en.wikipedia.org/wiki/Materials_database #### 19.2c Comparing Alternatives Once the properties of potential materials have been evaluated, the next step in the material selection process is comparing the alternatives. This involves a detailed analysis of the advantages and disadvantages of each material, considering the design requirements and constraints. ##### Cost Analysis Cost is a significant factor in material selection. It includes not only the initial cost of the material but also the costs associated with processing, fabrication, and maintenance. For instance, while titanium alloys may offer superior strength and corrosion resistance, their high cost and difficulty in processing may make them less suitable for certain applications[^5^]. ##### Performance Analysis Performance analysis involves comparing how well each material meets the design requirements. This includes considering the material's mechanical, thermal, electrical, and chemical properties. For example, while copper has excellent electrical conductivity, it may not be suitable for applications requiring high strength or corrosion resistance[^6^]. ##### Environmental Impact Analysis The environmental impact of a material is an increasingly important factor in material selection. 
This includes the material's impact on the environment during its production, use, and disposal. For example, while plastics may be cost-effective and versatile, their environmental impact, particularly when not properly disposed of, can be significant[^7^]. ##### Risk Analysis Risk analysis involves considering the potential risks associated with each material. This includes risks related to supply chain disruptions, changes in material prices, and potential health and safety issues. For instance, materials that are sourced from politically unstable regions or that are associated with health risks may be less desirable[^8^]. In conclusion, comparing alternatives in material selection is a complex process that requires considering a wide range of factors. It is essential to balance the benefits and drawbacks of each material to select the most suitable one for the specific application. [^5^]: Ashby, M. F. (2011). Materials selection in mechanical design (4th ed.). Butterworth-Heinemann. [^6^]: Callister, W. D., & Rethwisch, D. G. (2018). Materials Science and Engineering: An Introduction (10th ed.). Wiley. [^7^]: Thompson, R. C., Moore, C. J., vom Saal, F. S., & Swan, S. H. (2009). Plastics, the environment and human health: current consensus and future trends. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1526), 2153–2166. [^8^]: Graedel, T. E., & Allenby, B. R. (2010). Industrial ecology and sustainable engineering. Prentice Hall. ### Conclusion In this chapter, we have delved into the intricate process of material selection, a critical aspect in the field of materials science and engineering. We have explored how the mechanical behavior of materials plays a significant role in determining their suitability for specific applications. The understanding of these behaviors, such as elasticity, plasticity, and fracture, is crucial in making informed decisions about material selection. 
We have also discussed the importance of considering the operating environment and the expected performance of the material. Factors such as temperature, pressure, and corrosive conditions can significantly affect a material's mechanical properties. Therefore, a comprehensive understanding of these factors is essential in the material selection process. In conclusion, the mechanical behavior of materials is a complex field that requires a deep understanding of the material's properties, the operating conditions, and the desired performance. The selection of the right material can significantly impact the efficiency, durability, and overall performance of the system or product. Therefore, it is crucial to make informed decisions based on a comprehensive understanding of the mechanical behavior of materials. ### Exercises #### Exercise 1 Discuss the role of elasticity, plasticity, and fracture in the mechanical behavior of materials. How do these properties influence the material selection process? #### Exercise 2 Describe a scenario where the operating environment significantly affects the mechanical properties of a material. How would you approach the material selection process in this scenario? #### Exercise 3 Explain the importance of understanding the desired performance of a material in the material selection process. Provide an example where the desired performance of a material significantly influences its selection. #### Exercise 4 Discuss the impact of material selection on the efficiency, durability, and overall performance of a system or product. Provide an example to support your discussion. #### Exercise 5 Based on your understanding of the mechanical behavior of materials, propose a material for a specific application. Justify your selection based on the material's properties, the operating conditions, and the desired performance. ## Chapter: Material Failure ### Introduction Material failure is a critical aspect of materials science and engineering. 
It is the point at which a material ceases to function due to stress, strain, or other external factors. This chapter, "Material Failure," will delve into the various aspects of material failure, providing a comprehensive understanding of the topic. Understanding the mechanical behavior of materials is crucial in predicting their performance under different conditions. This includes the ability to anticipate and mitigate the risk of material failure. Material failure can occur due to a variety of reasons, including mechanical overload, fatigue, creep, and environmental degradation. Each of these failure modes has unique characteristics and mechanisms, which we will explore in detail in this chapter. We will begin by defining material failure and discussing the factors that contribute to it. We will then delve into the different types of material failure, providing examples and discussing the mechanisms behind each. We will also discuss the methods used to analyze and predict material failure, including stress analysis and fracture mechanics. Throughout this chapter, we will use mathematical equations to describe the relationships between different variables related to material failure. For instance, we might use the equation `$\sigma = F/A$` to describe the relationship between stress (`$\sigma$`), force (`$F$`), and cross-sectional area (`$A$`). We will also use graphical representations to illustrate these concepts, making them easier to understand. By the end of this chapter, you should have a solid understanding of material failure, including its causes, types, and methods of analysis. This knowledge will be invaluable in your future studies and work in materials science and engineering. ### Section: 20.1 Failure Modes: Material failure can occur in a variety of ways, each with its unique characteristics and mechanisms. In this section, we will delve into the different types of material failure, providing examples and discussing the mechanisms behind each. 
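As a quick worked illustration of the stress relation $\sigma = F/A$ quoted above (a sketch with arbitrarily chosen numbers):

```python
import math

def axial_stress(force_N: float, diameter_m: float) -> float:
    """Engineering stress sigma = F / A for a round bar in uniaxial tension."""
    area = math.pi * diameter_m ** 2 / 4.0  # cross-sectional area of the bar
    return force_N / area

# A 10 mm diameter bar carrying a 10 kN axial load:
sigma = axial_stress(10e3, 0.010)
print(f"{sigma / 1e6:.1f} MPa")  # 127.3 MPa
```

Because area scales with the square of the diameter, halving the diameter quadruples the stress — one reason reduced sections and stress concentrations figure so prominently in failure analysis.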
#### 20.1a Fatigue Failure

Fatigue failure is one of the most common types of material failure. It occurs when a material is subjected to repeated loading and unloading, which leads to the initiation and propagation of cracks within the material. Over time, these cracks can grow and coalesce, leading to catastrophic failure of the material.

The process of fatigue failure can be divided into three stages: crack initiation, crack propagation, and final fracture. Crack initiation occurs at stress concentrations, such as notches, holes, or surface irregularities. Once a crack has initiated, it will propagate through the material under the action of cyclic stresses. The rate of crack propagation is influenced by the magnitude of the stress range, the material properties, and the environmental conditions. The final fracture occurs when the crack has propagated to a critical size, at which point the remaining cross-sectional area of the material is unable to support the applied load.

The fatigue life of a material, denoted as `$N_f$`, is the number of stress cycles that a material can withstand before failure. It can be represented by the S-N curve (also known as the Wöhler curve), which plots the stress range against the logarithm of the number of cycles to failure. The S-N curve is typically obtained through laboratory testing of material specimens under controlled conditions.

The fatigue behavior of materials is complex and depends on a variety of factors, including the material properties, the stress range, the mean stress, the stress ratio, the loading frequency, and the environmental conditions. Therefore, predicting fatigue failure requires a thorough understanding of these factors and their interactions.

In the next subsection, we will discuss another common type of material failure: creep failure.
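In practice, the S-N curve described above is often parameterized by Basquin's relation, `$S = a N^b$` with `$b < 0$` (a standard empirical model, not one stated in this chapter). The sketch below inverts that relation to estimate fatigue life; the coefficient values are illustrative assumptions, not material data:

```python
def basquin_cycles_to_failure(stress_amplitude, a_coeff, b_exp):
    # Basquin relation S = a_coeff * N**b_exp (with b_exp < 0), solved for N_f
    return (stress_amplitude / a_coeff) ** (1.0 / b_exp)

# Illustrative coefficients: a_coeff = 1000 MPa, b_exp = -0.1.
n_f = basquin_cycles_to_failure(500.0, 1000.0, -0.1)
print(n_f)  # (0.5)**(-10) = 1024 cycles
```

Under these assumed coefficients, halving the stress amplitude multiplies the predicted life by `$2^{10}$`, which is why small stress reductions can extend fatigue life dramatically.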
#### 20.1b Creep Failure

Creep failure is another common mode of material failure, particularly in materials subjected to high temperatures and constant stress over extended periods. Creep is the time-dependent deformation of a material under a constant load or stress. It is a slow process of material degradation that can lead to catastrophic failure if not properly managed.

The creep behavior of a material can be divided into three stages: primary creep, secondary creep, and tertiary creep. In the primary creep stage, the strain rate decreases over time due to the hardening of the material. This stage is characterized by a relatively small and increasing creep strain. The secondary creep stage, also known as the steady-state creep, is the longest stage. Here, the strain rate reaches a minimum and remains constant. The material deforms at a steady rate due to the balance between strain hardening and recovery processes. The tertiary creep stage is the final stage before failure. The strain rate accelerates due to the initiation and growth of internal cracks, leading to material rupture.

The creep life of a material, denoted as `$t_f$`, is the time that a material can withstand under a specific stress and temperature before failure. It can be represented by the creep curve, which plots the creep strain against time. The creep curve is typically obtained through laboratory testing of material specimens under controlled conditions.

The creep behavior of materials is influenced by a variety of factors, including the material properties, the applied stress, the operating temperature, and the time of exposure. Therefore, predicting creep failure requires a thorough understanding of these factors and their interactions.

Creep can be mitigated through material selection, design modifications, and operating condition adjustments. For instance, materials with high creep resistance, such as certain high-temperature alloys, can be used in applications where creep is a concern.
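The steady-state (secondary) stage discussed above is commonly modeled with a Norton-Arrhenius power law, `$\dot{\varepsilon} = A \sigma^n e^{-Q/(RT)}$` (a standard model, not one stated in this chapter). All constants below are illustrative assumptions, not data for any real alloy:

```python
import math

def norton_creep_rate(stress_mpa, a_const, n_exp, q_j_mol, temp_k, r_gas=8.314):
    # Steady-state creep strain rate: A * sigma^n * exp(-Q / (R*T)).
    return a_const * stress_mpa**n_exp * math.exp(-q_j_mol / (r_gas * temp_k))

# With a stress exponent n = 5, doubling the stress multiplies the rate by 2**5.
r1 = norton_creep_rate(100.0, 1e-12, 5, 300e3, 900.0)
r2 = norton_creep_rate(200.0, 1e-12, 5, 300e3, 900.0)
print(r2 / r1)  # 32.0
```

The strong stress and temperature sensitivity shown here is the reason the text recommends operating below the creep range and reducing stress concentrations.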
Design modifications can include reducing the stress concentrations and operating at temperatures below the material's creep range. In the next subsection, we will discuss another type of material failure: corrosion failure.

#### 20.1c Corrosion Failure

Corrosion failure is a common mode of material failure, particularly in materials exposed to harsh environments such as sea or river water. Corrosion is the gradual destruction of materials by chemical reactions with their environment. It is a slow process of material degradation that can lead to catastrophic failure if not properly managed.

Corrosion can occur in various forms, including uniform corrosion, pitting corrosion, crevice corrosion, intergranular corrosion, and stress corrosion cracking. The type of corrosion that a material experiences depends on a variety of factors, including the material properties, the environmental conditions, and the time of exposure.

In the context of austenitic stainless steel used in surface condensers, corrosion can occur on both the cooling water side and the steam side of the condenser. On the cooling water side, the tubes, the tube sheets, and the water boxes may be made up of materials having different compositions and are always in contact with circulating water. This water, depending on its chemical composition, will act as an electrolyte between the dissimilar metals of the tubes and water boxes, giving rise to galvanic (electrolytic) corrosion. Seawater-based condensers, in particular when the sea water carries added chemical pollutants, have the worst corrosion characteristics. The corrosive effect of sea or river water has to be tolerated and remedial methods have to be adopted. One method is the use of sodium hypochlorite, or chlorine, to ensure there is no marine growth on the pipes or the tubes.

On the steam side of the condenser, the concentration of undissolved gases is high over air zone tubes. Therefore, these tubes are exposed to higher corrosion rates.
Sometimes these tubes are affected by stress corrosion cracking, if original stress is not fully relieved during manufacture. As the tube ends get corroded there is the possibility of cooling water leakage to the steam side contaminating the condensed steam or condensate, which is harmful to steam generators. The other parts of water boxes may also get affected in the long run, requiring repairs or replacements involving long duration shut-downs.

Cathodic protection is typically employed to overcome this problem. Cathodic protection is a technique used to control the corrosion of a metal surface by making it the cathode of an electrochemical cell. This can be achieved by attaching a sacrificial anode, which is more reactive than the material of the structure. The sacrificial anode will corrode first, protecting the structure from corrosion.

The corrosion behavior of materials is influenced by a variety of factors, including the material properties, the environmental conditions, and the time of exposure. Therefore, predicting corrosion failure requires a thorough understanding of these factors and their interactions.

Corrosion can be mitigated through material selection, design modifications, and operating condition adjustments. For instance, materials with high corrosion resistance, such as certain grades of stainless steel, can be used in applications where corrosion is a concern. Additionally, regular inspections and maintenance can help detect and address corrosion issues early, preventing catastrophic failure.

### Section: 20.2 Failure Analysis:

Failure analysis is a systematic approach to understanding the root cause of a material failure. It involves identifying the failure mode, investigating the failure mechanism, and determining the factors that contributed to the failure. This process is crucial in preventing future failures, improving material performance, and ensuring the safety and reliability of materials in service.
#### 20.2a Identifying the Failure Mode

The first step in failure analysis is identifying the failure mode. This involves determining how the material failed, such as whether it was due to fatigue, corrosion, overload, or some other mechanism. The failure mode can often be identified through visual inspection, but may also require more detailed analysis such as microscopic examination or chemical analysis.

For example, in the case of corrosion failure discussed in the previous section, the failure mode can be identified by the presence of corrosion products on the material surface, pitting or crevice corrosion patterns, or evidence of stress corrosion cracking.

Once the failure mode has been identified, the next step is to investigate the failure mechanism. This involves understanding the physical and chemical processes that led to the failure. For instance, in the case of corrosion, the failure mechanism could involve electrochemical reactions between the material and its environment, leading to the gradual degradation of the material.

The failure mechanism can often be inferred from the failure mode, but may also require further analysis such as laboratory testing or computational modeling. Understanding the failure mechanism is crucial in determining the factors that contributed to the failure and in developing strategies to prevent future failures.

#### 20.2b Failure Effects Analysis

After identifying the failure mode and investigating the failure mechanism, the next step is to perform a failure effects analysis. This involves determining the consequences of the failure and assessing the risk associated with the failure. The risk associated with a failure can be quantified as the product of the probability of the failure occurring, the severity of the consequences if the failure occurs, and the detectability of the failure.
This can be expressed mathematically as:

$$
Risk = Probability \times Severity \times Detectability
$$

The probability of a failure occurring can be estimated based on historical failure data or through reliability analysis. The severity of the consequences can be assessed based on the potential harm to people, property, or the environment. The detectability of the failure can be determined based on the effectiveness of inspection and monitoring procedures.

By performing a failure effects analysis, it is possible to prioritize risks and develop strategies to mitigate the highest risks. This can involve improving material design, enhancing inspection and monitoring procedures, or implementing redundancy or fail-safe mechanisms.

#### 20.2c Determining the Cause of Failure

The next step in failure analysis is determining the cause of failure. This involves identifying the factors that contributed to the failure and understanding how they interacted to cause the material to fail. These factors can include material properties, environmental conditions, loading conditions, and manufacturing defects, among others.

One useful tool for determining the cause of failure is a check sheet. This is a structured, prepared form for collecting and analyzing data. A check sheet is a document that is used to collect data in real time at the location where the data is generated. The data it captures can be quantitative or qualitative. When the information is quantitative, the check sheet is sometimes called a tally sheet.

The process of using a check sheet typically involves actively observing a process and recording the presence of defects. Each time a defect is observed, it is categorized and a symbol corresponding to that category is added to the check sheet. At the end of the observation period, the categories with the most symbols are investigated to identify the sources of variation that produce the defects.
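The tallying step described above can be sketched in a few lines; the defect categories below are illustrative:

```python
from collections import Counter

# Each observed defect is recorded under its category, as on a tally sheet.
observations = ["pitting", "crevice", "pitting", "scc", "pitting", "crevice"]
tally = Counter(observations)

# The categories with the most symbols are investigated first.
for category, count in tally.most_common():
    print(category, count)  # pitting 3, then crevice 2, then scc 1
```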
For example, if a material is failing due to corrosion, the check sheet might include categories for different types of corrosion (e.g., pitting, crevice, stress corrosion cracking), different environmental conditions (e.g., humidity, temperature, presence of corrosive substances), and different material properties (e.g., alloy composition, grain size, surface finish). By tallying the occurrence of each type of defect under different conditions, the check sheet can help identify the factors that are contributing to the failure.

In addition to check sheets, other tools such as cause-and-effect diagrams and checklists can also be used to help determine the cause of failure. A cause-and-effect diagram, also known as a fishbone diagram or Ishikawa diagram, can be used to systematically identify and organize the potential causes of a problem. A checklist, on the other hand, can be used as a mistake-proofing aid when carrying out multi-step procedures, particularly during the checking and finishing of process outputs.

By combining these tools with a thorough understanding of the failure mode and mechanism, it is possible to identify the root cause of a material failure and develop strategies to prevent future failures. This process of failure analysis is crucial in improving the safety, reliability, and performance of materials in service.

#### 20.2d Recommending Corrective Actions

Once the cause of failure has been determined, the next step is to recommend corrective actions. These actions are aimed at preventing the recurrence of the failure and improving the overall performance and reliability of the material. The recommended actions can be broadly classified into two categories: immediate corrective actions and long-term corrective actions.

Immediate corrective actions are those that can be implemented quickly to address the immediate cause of the failure. These actions are typically aimed at mitigating the impact of the failure and preventing further damage.
For example, if a material is failing due to corrosion, an immediate corrective action might be to apply a protective coating to the material or to replace the corroded parts.

Long-term corrective actions, on the other hand, are aimed at addressing the root cause of the failure. These actions often involve changes to the design, manufacturing process, or maintenance procedures of the material. For instance, if a material is failing due to stress corrosion cracking, a long-term corrective action might be to change the alloy composition of the material or to modify the manufacturing process to reduce residual stresses.

The recommended corrective actions should be based on a thorough understanding of the cause of failure and the factors contributing to it. This understanding can be gained through the use of tools such as check sheets, cause-and-effect diagrams, and checklists, as discussed in the previous section.

In addition to recommending corrective actions, it is also important to monitor the implementation of these actions and evaluate their effectiveness. This can be done through regular inspections, tests, and audits. The results of these evaluations should be documented and used to continuously improve the material's performance and reliability.

In conclusion, failure analysis is a critical process in understanding the mechanical behavior of materials. It involves identifying the cause of failure, determining the contributing factors, recommending corrective actions, and monitoring their implementation. By following this process, engineers can improve the performance and reliability of materials and prevent future failures.

# NOTE - THIS TEXTBOOK WAS AI GENERATED

This textbook was generated using AI techniques. While it aims to be factual and accurate, please verify any critical information. The content may contain errors, biases or harmful content despite best efforts. Please report any issues.
# Mechanical Behavior of Materials: A Comprehensive Guide

## Foreword

In the ever-evolving world of materials engineering, the need for a comprehensive understanding of the mechanical behavior of materials is paramount. This book, "Mechanical Behavior of Materials: A Comprehensive Guide", is designed to provide you with a thorough understanding of the subject matter, drawing from a wealth of experimental, computational, standards, and design data.

The field of materials engineering is a vast and complex one, with a multitude of materials and applications to consider. From austenitic stainless steels, as explored by Jean H. Decroix et al., to the ubiquitous carbon steel, the range of materials and their respective properties is vast. This book aims to provide a comprehensive overview of these materials, their properties, and their applications.

The importance of materials databases (MDBs) in the field of materials engineering cannot be overstated. These databases, which began to significantly develop in the 1980s, provide a centralized location for the storage and retrieval of materials data. They are essential tools for research, design, and manufacturing teams, allowing for fast access and exchange of materials data across different sites worldwide.

In the age of the Internet, the capabilities of MDBs have increased exponentially. Web-enabled MDBs provide even faster access to materials data, making the dissemination of public research results more efficient. This book will delve into the different types of MDBs, their uses, and their importance in the field of materials engineering.

This guide is not just a collection of facts and figures.
It is a comprehensive exploration of the mechanical behavior of materials, designed to provide you with the knowledge and understanding you need to excel in the field of materials engineering. Whether you are a student, a researcher, or a professional in the field, this book will serve as an invaluable resource in your journey. As we delve into the world of materials and their mechanical behavior, we invite you to join us on this journey of discovery and learning. We hope that this book will not only provide you with the knowledge you seek but also inspire you to push the boundaries of what is possible in the field of materials engineering.
Time-frequency feature extraction for classification of episodic memory

Rachele Anderson & Maria Sandsten

This paper investigates the extraction of time-frequency (TF) features for classification of electroencephalography (EEG) signals and episodic memory. We propose a model based on the definition of locally stationary processes (LSPs), estimate the model parameters, and derive a mean square error (MSE) optimal Wigner-Ville spectrum (WVS) estimator for the signals. The estimator is compared with state-of-the-art TF representations: the spectrogram, the Welch method, the classically estimated WVS, and the Morlet wavelet scalogram. First, we evaluate the MSE of each spectrum estimate with respect to the true WVS for simulated data, where it is shown that the LSP-inference MSE optimal estimator clearly outperforms other methods. Then, we use the different TF representations to extract the features which feed a neural network classifier and compare the classification accuracies for simulated datasets. Finally, we provide an example of real data application on EEG signals measured during a visual memory encoding task, where the classification accuracy is evaluated as in the simulation study. The results show consistent improvement in classification accuracy by using the features extracted from the proposed LSP-inference MSE optimal estimator, compared to the use of state-of-the-art methods, both for simulated datasets and for the real data example.

The analysis of electroencephalography (EEG) signals is one of the main methodological tools for understanding how the electrical activity of the brain supports cognitive functions [1]. One of the main reasons for the importance of the EEG is the outstanding temporal resolution. Additionally, the procedure of measuring EEG on the scalp surface is non-invasive and low-cost, which makes it excellent both for clinical practice and for cognition studies.
Extensive literature supports the possibility to investigate the cognitive functions, such as episodic memory, through neural activity recordings [2–4]. In cognitive psychology, event-related potentials are indicators of cognitive processes. For the transient responses which are not specifically time-locked, the time-frequency (TF) images of EEG signals have become one of the more popular techniques of today's research. The TF images are often used for extracting the features to feed a neural network classifier, and most TF methods are based on the short-time Fourier transform (spectrogram) and the wavelet transform (scalogram). The quality of the TF representation is crucial for the extraction of robust and relevant features [1, 5–7], thus leading to the demand for high-performing spectral estimators. The wavelet transform, using the Morlet wavelet, is the most popular TF method today [8–11]. Recently, it has been argued that the sine waves and the wavelets of the spectrogram and scalogram should be replaced with physiologically defined waveform shapes which could be related to an individual-based physical model [12, 13]. With more sophisticated techniques and advanced applications, the increase in the individual-based tailoring of the methods becomes even more critical. We consider an individual-based stochastic model for the signals, based on the definition of locally stationary processes (LSPs), introduced by Silverman in [14]. LSPs are characterized by a covariance function that is the modulation in time of an ordinary stationary covariance function. The optimal kernel for the estimation of the Wigner-Ville spectrum (WVS) for a certain class of LSPs is obtained in [15]. Based on this result, we derive the mean square error (MSE) optimal WVS for our model covariance. 
The use of a proper time-frequency kernel offers a valuable approach to address the fundamental problem in stochastic time-frequency analysis, which is the estimation of a reliable time-frequency spectrum from only a few realizations of a stochastic process. The derivation of the optimal time-frequency kernels of stochastic models has been a research field of signal processing interest for a long time [16, 17]. An efficient implementation is based on an equivalent optimal multitaper spectrogram [18–20]. The kernels are parameter-dependent, and thanks to the HAnkel-Toeplitz Separation (HATS) inference method introduced in [21], we are able to estimate the MSE optimal WVS from measured data. The derivation of the corresponding multitapers for the kernel is of great interest in the signal processing community, as their main advantages include computational efficiency of spectral properties and real-time use. In a simulation study, we compare the MSE optimal WVS with state-of-the-art TF representations. Some preliminary results are presented in a previous conference paper [22]. Here, we extend the study by including the classical estimator of the WVS and the wavelet transform among the TF representations compared. Since the wavelet transform is one of the most popular representations used for EEG signal classification, its inclusion among the state-of-the-art methods considered in the comparison is especially relevant for evaluating the advantages offered by the proposed approach. Additionally, we extend the quantitative comparison in terms of MSE with the evaluation of the classification accuracy of simulated signals based on the TF features extracted with the different methods. The controlled setting of a simulation study allows evaluating how the quality of the TF estimate affects the classification accuracy of classes of signals indistinguishable in the time-domain but having distinct TF spectra. 
The classifier implemented is a multilayer perceptron neural network, which has been previously used for the classification of EEG signals based on TF features, e.g., [5, 23, 24]. The real data example considered consists of three categories of EEG signals measured during a memory encoding task [8, 25, 26].

The paper is structured as follows. In Section 2, we present the different TF representations, the model of LSPs used both in the simulation study and in the real data case study, the expression for the MSE optimal WVS, and the neural network used for classification. The results for the simulation studies and for the real data case are presented in Section 3. In Section 3.1, the performance of the derived spectral estimator is evaluated in terms of MSE, whereas in Section 3.2, the performance is evaluated as classification accuracy in a simulated study. In Section 3.3, we present the results of the classification of EEG signals collected within a study on human memory encoding and retrieval. Final comments and directions for further research are given in Section 4.

This section is dedicated to the exposition of the theory and methods. First, we present the different TF representations considered. Then, the definition of LSP [14] is recalled, and the inference problem discussed. We proceed by presenting the MSE optimal kernel for the general LSP case, based on [15, 27], together with the MSE optimal kernel derived for the parametric model used in this paper. Finally, we specify the neural network classification approach.

Time-frequency representations

The spectrogram is a natural extension of the periodogram for non-stationary signals. The spectrogram of a zero-mean signal x(t), \(t \in \mathbb{R}\), is calculated as:

$$ S_{x}(t,\omega)= \left|\int_{-\infty}^{\infty} x(s)h^{*}(s-t)e^{-i \omega s}d s\right|^{2}, $$

where h(t) is a window function, commonly a Hanning window or Gaussian window centered around zero.
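As a rough illustration, the discrete counterpart of the spectrogram definition above can be computed with plain NumPy. This is a bare-bones sketch (Hanning window, simple frame loop), not the implementation used in the paper:

```python
import numpy as np

def spectrogram(x, win, hop):
    # Squared magnitude of the windowed short-time Fourier transform.
    n = len(win)
    frames = np.array([x[i:i + n] * win for i in range(0, len(x) - n + 1, hop)])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2  # time x frequency

fs = 256
x = np.sin(2 * np.pi * 40 * np.arange(fs) / fs)      # 40 Hz tone, 1 s
S = spectrogram(x, np.hanning(64), 16)
peak_hz = S.mean(axis=0).argmax() * fs / 64
print(peak_hz)  # 40.0
```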
The Welch method [28], first introduced for stationary spectral estimation, aims at reducing the variance of the spectral estimate by calculating the average of the spectrum estimates obtained on shorter segments of the data, possibly overlapping. It can be implemented in TF analysis by computing the spectral estimate of each shorter sub-sequence of the spectrogram.

We can write any TF representation belonging to Cohen's class, including the spectrogram, as:

$$ W^{C}_{x}(t, \omega) = \int \int \int x\left(u+ \frac{\tau}{2} \right) x^{*} \left(u- \frac{\tau}{2} \right) \phi(\theta,\tau) e^{i (\theta t-\omega \tau - \theta u)} du \,d\tau\, d\theta, $$

where all integrals run from −∞ to ∞ and ϕ(θ,τ) is an ambiguity kernel [29]. With ϕ(θ,τ)=1 for all values of θ and τ, the classical Wigner-Ville distribution is obtained as:

$$ W_{x}(t,\omega) = \int_{-\infty}^{\infty} x\left(t+\frac{\tau}{2} \right) x^{*}\left(t-\frac{\tau}{2} \right) e^{-i \tau \omega} d\tau. $$

An extension of the uncertainty principle holds in TF analysis, implying a trade-off between frequency and temporal resolution. The wavelet transform is an approach to address the resolution limits posed by the uncertainty principle, by prioritizing the time resolution for high-frequency events and the frequency resolution for low-frequency events. The continuous wavelet transform is based on the correlation between the signal and the wavelet translated at time τ and scaled by a. The scalogram is defined as:

$$ \mathcal{S}_{x}(t,a)= \left| \frac{1}{\sqrt{|a|}} \int_{-\infty}^{\infty} x(u) \psi^{*} \left(\frac{u-t}{a}\right) du \right|^{2}, $$

where the unit-energy function ψ should fulfill the admissibility condition:

$$ c_{\psi}= \int_{-\infty}^{\infty} \frac{\lvert \mathcal{F}\psi(\omega) \rvert^{2}}{\lvert \omega \rvert} d\omega < \infty, $$

with \(\mathcal{F}\) representing the Fourier transform.
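A hand-rolled sketch of the scalogram with an L2-normalized Morlet wavelet may make the definition concrete (for the conventional choice w0 ≈ 6 the Morlet fulfills the admissibility condition only approximately). Names and parameter values are illustrative:

```python
import numpy as np

def morlet_scalogram(x, scales, w0=6.0):
    rows = []
    for a in scales:
        half = int(5 * a)                      # truncate the wavelet support
        u = np.arange(-half, half + 1) / a
        psi = np.exp(1j * w0 * u - u**2 / 2) / np.sqrt(a)
        # Correlation with the translated, scaled wavelet equals convolution
        # with its conjugated time reverse.
        rows.append(np.abs(np.convolve(x, np.conj(psi[::-1]), mode="same")) ** 2)
    return np.array(rows)                      # scale x time

x = np.sin(2 * np.pi * 0.05 * np.arange(512))  # tone at 0.05 cycles/sample
scales = np.arange(10, 31)
S = morlet_scalogram(x, scales)
best_scale = scales[S.mean(axis=1).argmax()]
print(best_scale)  # near w0 / (2*pi*0.05), i.e., about 19
```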
Larger scale values offer higher frequency resolution and are used for determining the frequency of long-term events, such as baseline oscillations. Smaller scales provide a high temporal resolution, necessary for determining short-time events, such as spikes and transients. A scale-to-frequency conversion, \(\omega = \beta / a\), where the constant β depends on the wavelet, allows recovering a time-frequency representation from the scalogram.

Locally stationary processes

The stochastic model proposed for episodic memory data and used for the simulation study belongs to the class of locally stationary processes in Silverman's sense [14]. A zero mean stochastic process X(t) is a locally stationary process (LSP) in the wide sense if its covariance:

$$ C(s,t)=\mathbb{E} [X(s)X(t)^{*}] $$

can be written as:

$$ C(s, t)= q \left(\frac{s+t}{2} \right) \cdot r \left(s-t \right)= q \left(\eta \right) \cdot r \left(\tau \right), $$

with \(s,t \in [T_{0},T_{f}] \subseteq \mathbb{R}\), where q(η) is a non-negative function and r(τ) is a stationary covariance function. When q(η) is a constant, (7) reduces to a stationary covariance. The proposed model is determined by the functions q(η) and r(τ) as:

$$\begin{array}{*{20}l} q(\eta) &= L + a_{q} \cdot \exp \left(- c_{q} \left(\eta-b_{q} \right)^{2}/2 \right) \quad \text{with}\ \eta= \frac{t+s}{2}, \\ r(\tau) &= \exp \left(-\frac{c_{r}}{8} \cdot \tau^{2} \right) \quad \text{with}\ \tau= t-s, \end{array} $$

and parameters L>0, aq>0, bq∈[T0,Tf], cr>cq>0. The latter assumption is necessary to ensure that the resulting covariance is positive semi-definite. This choice of functions is motivated by the case study presented in Section 3.3. In Fig. 1, we illustrate how different parameter settings affect the LSP realizations obtained from the model covariance (8).
Each set of realizations presented has power centered at time bq=0.20 s, but different parameters determining the functions q and r result in a slowly varying behavior in "a" and "b," compared to a much faster variation in "c" given by a larger value of cr. The models generating the realizations shown in "a" and "b" differ only in the parameter L, representing the minimum energy level. A higher parameter L, as in "b" and "c," is more realistic and illustrates the possibility of using the level L to include the additional on-going spontaneous EEG activity.

Fig. 1 Simulated realizations. Example of simulated realizations with model covariance defined by (8) and parameters (L,aq,bq,cq,cr) equal to a (0, 500, 0.20, 800, 15,000), b (120, 500, 0.20, 800, 15,000), and c (120, 500, 0.20, 800, 50,000)

For the practical use of the LSP model, the parameters defining the covariance need to be estimated. A maximum likelihood approach is not feasible since a closed-form expression of the process distribution is not known. A natural approach is the least square fitting of the model covariance to the classical sample covariance matrix (SCM). Unfortunately, the SCM is not a reliable estimator when the sample size is smaller than the number of elements to be estimated [30]. Additionally, even when the sample size is sufficient to produce reliable estimates, the initialization of the starting point for the optimization problem is computationally expensive. The inference method HATS [21], based on the separation of the two factors defining the LSP covariance function in order to take advantage of their individual structure, overcomes these disadvantages of the least square fitting to the SCM.
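To make the model concrete, the sketch below draws a few realizations from the covariance in (8) (parameter values follow the Fig. 1b setting) and then illustrates the rank deficiency that makes the SCM unreliable for small samples. The eigendecomposition square root is an implementation choice for sampling, not part of the paper:

```python
import numpy as np

# Model covariance C(s,t) = q((s+t)/2) * r(s-t), eq. (8), Fig. 1b parameters.
L_level, a_q, b_q, c_q, c_r = 120.0, 500.0, 0.20, 800.0, 15000.0
t = np.linspace(0.0, 0.5, 128)
S_grid, T_grid = np.meshgrid(t, t, indexing="ij")
q = L_level + a_q * np.exp(-c_q * ((S_grid + T_grid) / 2 - b_q) ** 2 / 2)
r = np.exp(-c_r / 8 * (S_grid - T_grid) ** 2)
C = q * r

# Draw realizations X = A z with A A^T = C (eigendecomposition square root;
# clipping guards against tiny negative eigenvalues from round-off).
w, V = np.linalg.eigh(C)
A = V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))
rng = np.random.default_rng(0)
X = A @ rng.standard_normal((len(t), 5))   # five realizations, one per column

# The SCM of 5 realizations of a length-128 process has rank at most 5.
scm = X @ X.T / X.shape[1]
print(X.shape, np.linalg.matrix_rank(scm))  # (128, 5) and a rank of at most 5
```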
Mean square error optimal kernel

The Wigner-Ville spectrum (WVS) is the stochastic version of the classical Wigner-Ville distribution in (3) [29], defined as:

$$ W(t,\omega) = \int_{-\infty}^{\infty} \mathbb{E} \left[ X \left(t+\frac{\tau}{2} \right) X^{*} \left(t-\frac{\tau}{2} \right) \right] e^{-i \tau \omega} d\tau. $$

The traditional estimate of the WVS based on one realization of the process is equivalent to estimating the Wigner-Ville distribution (WV) (3). For LSPs, the WVS assumes the advantageous expression:

$$ W(t,\omega)= q(t)\cdot \mathcal{F}r(\omega) $$

and the ambiguity spectrum, defined as:

$$ A(\theta,\tau) = \int_{-\infty}^{\infty} \mathbb{E} \left[ X \left(t+\frac{\tau}{2} \right) X^{*} \left(t-\frac{\tau}{2} \right) \right] e^{-i t \theta} dt, $$

is given by:

$$ A(\theta,\tau)= \mathcal{F}q(\theta) \cdot r(\tau) $$

[14, 15]. Any TF representation member of Cohen's class can be expressed in terms of the ambiguity spectrum and kernel as:

$$ W^{C}(t,\omega) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} A(\theta,\tau) \phi(\theta,\tau)e^{-i (\tau \omega - t \theta)} d\tau \,d\theta.
$$ The MSE optimal kernel for LSP having Gaussian functions as factors of the covariance is expressed as: $$ \phi_{0} (\theta, \tau) = \frac{|\mathcal{F}q(\theta)|^{2}|r(\tau)|^{2}}{|\mathcal{F}q(\theta)|^{2}|r(\tau)|^{2} + (\mathcal{F}|r|^{2}(\theta)) (\mathcal{F}^{-1}|\mathcal{F}q|^{2}(\tau))} $$ Thanks to (14), we are able to compute the parameter-dependent optimal kernel ϕ0(θ,τ) for the introduced model (8), as: $$\phi_{0} (\theta, \tau) = \frac{|A(\theta, \tau)|^{2} }{|A(\theta, \tau)|^{2} + B(\theta, \tau)}, $$ $$\begin{array}{*{20}l} |A(\theta, \tau)|^{2} &= |\mathcal{F}q(\theta)|^{2}|r(\tau)|^{2} \\ &= \left(L^{2} \delta_{0}(\theta) + \frac{2 \pi a_{q}^{2} }{c_{q}} e^{- \frac{\theta^{2}}{c_{q} }} + 2 a_{q} L \sqrt{ \frac{2 \pi }{c_{q}}} \delta_{0}(\theta) e^{- \frac{\theta^{2}}{2c_{q}}} \right) e^{-\frac{c_{r}\tau^{2}}{4}} \end{array} $$ $$\begin{array}{*{20}l} B(\theta, \tau) &= (\mathcal{F}|r|^{2}(\theta)) (\mathcal{F}^{-1}|\mathcal{F}q|^{2}(\tau)) \\ &= \left(2 \sqrt{ \frac{\pi}{c_{r}} } e^{-\frac{\theta^{2}}{c_{r} }} \right) \left(\frac{L^{2}}{2 \pi} +a_{q}^{2} \sqrt{ \frac{ \pi }{c_{q}} } e^{- \frac{c_{q}\tau^{2}}{4} }+ a_{q} L \sqrt{ \frac{2 \pi}{ c_{q}}} \right), \end{array} $$ where δ0 denotes the Dirac delta function. The efficient implementation and estimation are based on multitapers, i.e., a weighted sum of windowed spectrograms, as: $$ W^{C}(t,\omega) = \mathbb{E} \left[ \sum_{k=1}^{K} \alpha_{k} \left| \int_{-\infty}^{\infty} X(s) h_{k}^{*}(t-s) e^{-i\omega s}ds \right|^{2} \right], $$ with weights αk and windows hk(t), k=1…K [18–20]. The weights and windows are derived from the solution of the eigenvalue problem: $$ \int_{-\infty}^{\infty} \Psi^{rot}(s,t)h(s)ds=\alpha h(t), $$ where the rotated time-lag kernel is Hermitian and defined as: $$ \Psi^{rot}(s,t)=\Psi \left(\frac{s+t}{2},s-t \right), $$ $$ \Psi(t,\tau)=\int_{-\infty}^{\infty}\phi(\theta,\tau)e^{i t \theta} d\theta. 
$$ The multitaper spectrogram solution is efficient from an implementation point of view, since only a few of the weights αk differ significantly from zero. In the implementation for real signals, the corresponding analytic signal is used. Pattern recognition neural networks Pattern recognition networks are feed-forward networks, i.e., networks in which the information flow is only directed forward, which can be trained to classify inputs according to target classes [31]. The input and target vectors are divided into three separate sets for training, validation, and testing. After the training of the neural network, the validation phase is necessary to ensure that the network is generalizing and not overfitting, and the testing phase consists of a completely independent test of the network. As in standard networks used for pattern recognition, in this study, we consider a multilayer perceptron, with an input layer where the TF features are inserted, two hidden layers each consisting of 20 neurons, and an output layer for classification of the signals into three categories. The network structure is exemplified in Fig. 2. The activation function of the nodes in the hidden layers is the logistic function, also called the sigmoid function, defined as: $$ a_{\text{sigmoid}}(z) = \frac{1}{1+ e^{-z}}. $$ Feed-forward network with an input layer where the TF features are inserted, two hidden layers each consisting of several hidden nodes, and an output layer for classification into three categories As usual for multiclass classification, the output node j converts its total input zj into a class probability pj by using the softmax function, defined as: $$ a_{\text{softmax}}(z_{j}) = p_{j} = \frac{e^{z_{j}}}{ \sum_{k=1}^{N_{c}} e^{z_{k}}}, $$ where Nc is the total number of classes.
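A minimal numerical sketch of this forward pass (sigmoid hidden layers, softmax output) may clarify the architecture; the input size and random weights below are illustrative placeholders, not the trained network of the study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, weights, biases):
    """Two sigmoid hidden layers followed by a softmax output layer."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = sigmoid(h @ W + b)
    return softmax(h @ weights[-1] + biases[-1])

rng = np.random.default_rng(0)
sizes = [40, 20, 20, 3]          # TF features -> two hidden layers of 20 -> 3 classes
weights = [rng.normal(scale=0.1, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(b) for b in sizes[1:]]
p = forward(rng.normal(size=(5, 40)), weights, biases)
print(p.shape)  # (5, 3), each row is a probability vector summing to 1
```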
The cost function is the categorical cross-entropy between the target probabilities pk and the outputs of the softmax qk: $$ C = - \sum_{k=1}^{N_{c}} p_{k} \text{log}(q_{k}), $$ and the learning technique is back-propagation [32, 33]. In this section, we first present the results of the evaluation of the proposed method in a simulation study where the true WVS, as given in (9), is known, and the MSE of the estimators can be calculated. The second part of the simulation study is the classification of simulated datasets based on features extracted from the different TF estimators. The same approach is then used in a real data application to classify the EEG signals measured during a memory encoding task. Mean square error evaluation The first part of this simulation study focuses on evaluating the performance of the proposed method in terms of MSE, in comparison with state-of-the-art estimators. We simulate M=100 realizations of an LSP with covariance function defined by (8), sampled with fs=512 Hz in 256 equidistant points during the time interval [T0,Tf] = [0, 0.5] seconds. The model parameters for simulating the data are (L,aq,bq,cq,cr) = (120, 500, 0.20, 800, 15,000), and a few realizations from this parameter setting can be seen in Fig. 1b. The parameters (L,aq,bq,cq,cr) are estimated with the inference method HATS. Based on the parameter estimates, the MSE optimal ambiguity kernel and corresponding multitapers according to (15) are calculated. In Fig. 3, the MSE optimal multitapers and weights for this parameter set are shown.
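The derivation of multitapers and weights from a sampled kernel, and the weighted spectrogram sum of (15), can be sketched as follows. The stand-in kernel matrix here is only an illustrative smooth Hermitian matrix, not the MSE optimal kernel derived above:

```python
import numpy as np

def multitapers_from_kernel(Psi_rot, K):
    """Solve the discretized eigenproblem Psi_rot h = alpha h (cf. (17)) and keep
    the K eigenpairs with largest |alpha| as windows h_k and weights alpha_k."""
    alpha, H = np.linalg.eigh(Psi_rot)          # Psi_rot must be Hermitian
    idx = np.argsort(np.abs(alpha))[::-1][:K]
    return alpha[idx], H[:, idx]

def spectrogram(x, h, hop=8, nfft=256):
    n = len(h)
    frames = np.array([x[s:s + n] * h for s in range(0, len(x) - n + 1, hop)])
    return np.abs(np.fft.rfft(frames, nfft, axis=1)) ** 2   # time x frequency

def multitaper_spectrogram(x, alphas, windows, hop=8, nfft=256):
    """Weighted sum of windowed spectrograms, as in (15)."""
    return sum(a * spectrogram(x, windows[:, k], hop, nfft)
               for k, a in enumerate(alphas))

# Illustrative stand-in for a sampled rotated kernel: a smooth symmetric matrix.
n = 64
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
Psi_rot = np.exp(-(i - j) ** 2 / 200.0) * np.exp(-(i + j - n) ** 2 / 800.0)
alphas, windows = multitapers_from_kernel(Psi_rot, K=4)

x = np.sin(2 * np.pi * 10 * np.arange(512) / 512.0)
S = multitaper_spectrogram(x, alphas, windows)
print(S.shape)  # (57, 129)
```

Since the eigenvectors of a Hermitian matrix are orthonormal, the windows form an orthonormal set, and only the few eigenpairs with non-negligible weights need to be retained.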
a MSE optimal multitapers and b weights evaluated for the model parameters (L,aq,bq,cq,cr) = (120, 500, 0.20, 800, 15,000) The state-of-the-art estimators considered for comparison are the single Hanning window spectrogram (HANN), calculated as in (1); the Welch 50% overlapped segment averaging with Hanning windows (WOSA); the classical Wigner-Ville spectrum estimate (WV); and finally the continuous wavelet transform using the Morlet wavelet (CWT). To allow the comparison with the other TF methods, the scales for evaluating the scalogram (4) are computed as \(a = \frac {\beta }{f}\), where f denotes the frequency in hertz and the constant β is the center frequency of the Morlet wavelet [34]. Each method is optimized so that it is evaluated at its best performance in terms of MSE. For HANN, the window length Nw∈{16,32,64,128,256} is optimized, while for WOSA, the optimized parameter is the number of windows K∈{2,4,8,12,16}, where the total length of all included windows is the total data length of 256 samples. For WV and CWT, a spectral magnitude scaling parameter is used to adjust the estimate magnitude to the true spectrum magnitude. The expected value of the MSE, or mean MSE (mMSE), is approximated as the average over 100 independent realizations. The boxplots of the MSE achieved with the different methods in the 100 simulations are presented in Fig. 4. The mMSE value for the MSE optimal estimator with the true parameters (LSP) is 1.606 and with parameters estimated from the 100 realizations with HATS (LSP-HATS) is 1.655. Not only does the spectral estimate obtained using LSP-based MSE optimal kernels achieve the best mMSE, as expected, but using the true parameters or those estimated with HATS leads to almost the same result. Boxplots of the MSE on 100 simulations for the spectral estimators considered, all with parameters optimized.
Average MSE is mMSE 3.438 for HANN (Nw=32), 2.061 for WOSA (K=8), 4.434 for WV, 1.606 for the true LSP parameter optimal estimator, 1.655 for LSP-HATS, and 2.431 for CWT The optimal mMSE for HANN and WOSA is 3.438 and 2.061, respectively, obtained with Nw=32 for HANN and K=8 for WOSA. The worst mMSE of 4.434 is, as expected, obtained with WV. The CWT performance is in between HANN and WOSA, with a mMSE of 2.431. In order to establish how the number of realizations used to estimate the model parameters with HATS affects the results in the TF domain, we study the decrease of the MSE of LSP-HATS based on the parameter estimates obtained using an increasing number of realizations M∈{1,5,10,25,50,100}. The mMSE, computed as the average over 100 independent simulations, is plotted as a function of the number of realizations used, with the corresponding 95% confidence interval, in Fig. 5. The values are reported in Table 1. The mMSE is extremely low even with only M=10 realizations. However, the number of realizations required for a reliable estimate in real data cases might be higher, especially in the case of a low signal-to-noise ratio. The mMSE for LSP-HATS as a function of the number of realizations used to produce the model parameter estimates with HATS. Red lines are 95% confidence intervals Table 1 mMSE and standard errors (SE) for LSP-HATS with an increasing number of realizations M used Classification of simulated stochastic signals Three classes of stochastic signals are simulated from the model (8), using slightly different parameters and center frequencies f0, reported in Table 2. The parameters L and cr are the same for the three classes, whereas different parameters define the function q describing the instantaneous power, plotted in Fig. 6. The center frequency f0 for each class is uniformly distributed around a mean \(\bar {f_{0}}\), with a maximum jitter of 2 Hz from the mean, i.e., \(f_{0} \sim \mathcal {U}(\bar {f_{0}}-2,\bar {f_{0}}+2)\).
A few random realizations from the three classes and their true WVS are presented in Figs. 7 and 8, respectively. Even though the classes cannot be distinguished by looking at the time-domain realizations, their true WVS is different. Functions q for the three classes of simulated stochastic signals Examples of realizations from the three classes of simulated stochastic signals True WVS for the three classes of simulated stochastic signals Table 2 Parameters for the three classes of simulated signals. The simulated dataset consists of 100 realizations for each class. A random partition into 80 realizations for training and 20 realizations for testing the neural network classifier is repeated 10 times. The LSP parameters are re-estimated from the 20 realizations used for independent testing and used in the computation of their LSP-HATS spectral estimates. The use of 20 realizations ensures reliable estimates as shown in the simulation study, see Fig. 1, and it is compatible with the typical scenario in studies on physiological signals such as EEG, where multiple trials of an experiment are available. The classification is repeated with the 10 different independent random sets of testing realizations. The neural network classifier described in Section 2.4 is fed with TF features extracted with HANN (Nw=32), WOSA (K=8), WV, LSP-HATS, and CWT, where each feature is the spectral power at each TF point in the time interval [0, 0.5] seconds and frequency up to 40 Hz. For computing the LSP-HATS kernels, the model parameters L,aq,bq,cq,cr were inferred from training and testing realizations independently. The total classification accuracies of each method are summarized in Table 3, from which it is evident that LSP-HATS outperforms the state-of-the-art estimators with an accuracy of 86%. Similar performance is achieved with HANN and CWT, with accuracies of 65.0% and 69.7%, respectively.
The methods with the worst classification performance are WOSA and WV, with an accuracy just above 50%. The confusion matrices for each method are reported in Tables 4, 5, 6, 7, and 8. The signals correctly classified appear on the diagonal of each confusion matrix; therefore, an ideal classifier would have 200 on each diagonal entry and 0 otherwise. Table 3 Total classification accuracy obtained using features extracted with the different TF estimators in the simulation study, obtained on 10 independent simulations Table 4 Confusion matrix using features obtained with HANN (Nw=32). Total classification accuracy is 65.0% Table 5 Confusion matrix using features obtained with WOSA (K=8). Total classification accuracy is 50.7% Table 6 Confusion matrix using features obtained with WV. Total classification accuracy is 54.8% Table 7 Confusion matrix using features obtained with LSP-HATS. Total classification accuracy is 86.0% Table 8 Confusion matrix using features obtained with CWT. Total classification accuracy is 69.7% Application: classification of EEG signals The data considered have been collected within a study on human memory retrieval, conducted at the Department of Psychology of Lund University, Sweden, during the spring of 2015. The EEG signals have been measured from one subject participating in the experiment. The encoding task consisted of associating an unrelated word with a target picture belonging to one of three categories ("Faces," "Landmarks," "Objects"). A total of 180 trials were performed, 60 for each class. The EEG measurements were recorded from channel O1 according to the International 10–20 system, as the primary visual areas are located in the occipital lobes, and sampled with fs=512 Hz. Analogously to the simulation study, each time series has 256 equidistant samples during the time interval [0, 0.5] seconds. A few signals from each class are shown in Fig. 9.
Three random EEG signals, after 70 Hz low-pass filtering, corresponding to three different trials of a memory task, from each category: a "Faces," b "Objects," and c "Landmarks" Similarly to the simulation study, the LSP model parameters L,aq,bq,cq,cr were inferred from 40 randomly selected realizations out of the total 60 for each class. The estimated parameters are used to compute the LSP-HATS kernels and corresponding optimal multitapers to be used for computing the training data TF spectra. The multitapers and weights for the three classes, estimated from all available trials, are illustrated in Fig. 10. The spectral estimates obtained with LSP-HATS, HANN (Nw=128), WOSA (K=8), WV, and CWT are used to extract TF features, where each feature is the spectral power at each TF point in the time interval [0, 0.5] seconds and frequency up to 40 Hz. Example of LSP-based MSE optimal multitapers and weights for a random set of realizations from the three categories: a multitapers and b weights for "Faces," c multitapers and d weights for "Objects," and e multitapers and f weights for "Landmarks" The remaining 20 realizations are used for independent testing of the network. The LSP parameters are re-estimated from these realizations and used in the computation of their LSP-HATS spectral estimates. The random partition into 40 realizations for training and 20 for testing is repeated 10 times, and the test is repeated with the different random sets of testing realizations. Classification accuracy is based on the 10 independent splits of the testing data. The total classification accuracy of each method is reported in Table 9. The use of the TF features obtained with the proposed MSE optimal LSP-HATS estimator has resulted in significantly higher classification accuracy, compared to the use of the other estimators.
Table 9 Total classification accuracy of the memory encoding EEG signals of one subject, obtained on 10 different independent partitions of the data As a note of caution, although the proposed approach is, in theory, optimal in terms of MSE, the performance in real applications depends both on how appropriate the model is for the data and on the purpose of the application. Nevertheless, in our case study, the higher quality of the TF representation has improved the accuracy of classification. The purpose of this paper is to show how the MSE optimal WVS offers a significant improvement in practical applications, leading to a higher classification accuracy thanks to the greater quality of the TF features extracted with the proposed approach. The estimation of the model parameters thanks to a novel inference method allows the explicit computation of the corresponding MSE optimal kernel. The kernel is transformed into a robust and computationally efficient multitaper spectrogram, and a complete procedure for the LSP-inference MSE optimal spectral estimator from data is achieved. In a simulation study evaluating the performance of the method in terms of MSE, the spectral estimate obtained using the optimal kernel achieves the best average MSE compared to state-of-the-art TF estimators, such as the Hanning spectrogram, the Welch method, the classical Wigner-Ville spectrum estimator, and the Morlet wavelet-based scalogram, as expected. More critically, computing the MSE optimal kernel from the parameters estimated with the novel HAnkel-Toeplitz Separation (HATS) inference method gives a similar result, which holds even when the parameters are estimated from as few as 10 realizations. The performance of the proposed estimator is also evaluated in a classification simulation study, where it outperforms state-of-the-art estimators in terms of classification accuracy.
The higher classification accuracy is also achieved in the episodic memory case study, where the parameter estimation on a suitable LSP model for EEG signals allows the extraction of improved features for classification. Extensions of this research could consider a multidimensional model for extracting features from multichannel measurements. The dataset used in the example is not publicly available, as it is part of a larger study outside the scope of this paper, but it is available from the corresponding author on reasonable request.
CWT: Continuous wavelet transform using the Morlet wavelet
EEG: Electroencephalogram
HANN: Single Hanning window spectrogram
HATS: Hankel-Toeplitz separation
mMSE: Mean MSE
MSE: Mean square error
LSP: Locally stationary process
LSP-HATS: LSP with parameters estimated with HATS
TF: Time-frequency
WOSA: Welch 50% overlapped segment averaging with Hanning windows
WV: Classical Wigner-Ville spectrum estimate
WVS: Wigner-Ville spectrum
M. X. Cohen, R. Gulbinaite, Review: Five methodological challenges in cognitive electrophysiology. NeuroImage. 85(Part 2), 702–710 (2014). X. Sun, C. Qian, Z. Chen, Z. Wu, B. Luo, G. Pan, Remembered or forgotten?—an EEG-based computational prediction approach. PLoS ONE. 11(12), 1–20 (2016). S. Michelmann, B. P. Staresina, H. Bowman, S. Hanslmayr, Speed of time-compressed forward replay flexibly changes in human episodic memory. Nat. Hum. Behav.3(2), 143–154 (2019). Z. Kurth-Nelson, G. Barnes, D. Sejdinovic, R. Dolan, P. Dayan, Temporal structure in associative retrieval. ELife. 4:, 1–18 (2015). K. Samiee, P. Kovacs, M. Gabbouj, Epileptic seizure classification of EEG time-series using rational discrete short-time Fourier transform. IEEE Trans. Biomed. Eng.62(2), 541–552 (2015). J. Meng, L. M. Merino, N. B. Shamlo, S. Makeig, K. Robbins, Characterization and robust classification of EEG signal from image RSVP events with independent time-frequency features. PLoS ONE. 7(9), 1–13 (2012). F. Kai, Q. Jianfeng, C. Yi, D.
Yong, Classification of seizure based on the time-frequency image of EEG signals using HHT and SVM. Biomed. Sign. Process. Control. 13:, 15–22 (2014). I. Bramão, M. Johansson, Neural pattern classification tracks transfer-appropriate processing in episodic memory. eNeuro. 5(4) (2018). https://doi.org/10.1523/ENEURO.0251-18.2018. A. Jafarpour, L. Fuentemilla, A. J. Horner, W. Penny, E. Duzel, Replay of very early encoding representations during recollection. J. Neurosci.34(1), 242–248 (2014). A. Jafarpour, A. J. Horner, L. Fuentemilla, W. D. Penny, E. Duzel, Decoding oscillatory representations and mechanisms in memory. Neuropsychologia. 51:, 772–780 (2013). M. Navarrete, J. Pyrzowski, J. Corlier, M. Valderrama, M. Le Van Quyen, Original research paper: Automated detection of high-frequency oscillations in electrophysiological signals: methodological advances. J Phys Paris. 110(Part A), 316–326 (2016). S. R. Cole, B. Voytek, Review: Brain oscillations and the importance of waveform shape. Trends Cogn. Sci.21:, 137–149 (2017). M. X. Cohen, Opinion: Where does EEG come from and what does it mean? Trends Neurosci.40(4), 208–218 (2017). R. Silverman, Locally stationary random processes. IRE Trans. Inf. Theory. 3(3), 182 (1957). P. Wahlberg, M. Hansson, Kernels and multiple windows for estimation of the Wigner-Ville spectrum of Gaussian locally stationary processes. IEEE Trans. Sig. Process. 55(1), 73–84 (2007). L. Stanković, D. Mandić, M. Daković, M. Brajović, Time-frequency decomposition of multivariate multicomponent signals. Signal Process. 142:, 468–479 (2018). G. Matz, F. Hlawatsch, Nonstationary spectral analysis based on time-frequency operator symbols and underspread approximations. IEEE Trans. Inf. Theory. 52(3), 1067–1086 (2006). G. S. Cunningham, W. J. Williams, Kernel decomposition of time-frequency distributions. IEEE Trans. Signal Process. 42:, 1425–1442 (1994). F. Cakrak, P. J. Loughlin, Multiple window time-varying spectral analysis. IEEE Trans. Signal Process. 49(2), 448–453 (2001). M.
Hansson-Sandsten, Optimal multitaper Wigner spectrum estimation of a class of locally stationary processes using Hermite functions. EURASIP J. Adv. Signal Process.2011:, 980805 (2011). R. Anderson, M. Sandsten, Inference for time-varying signals using locally stationary processes. J. Comput. Appl. Math.347:, 24–35 (2019). https://doi.org/10.1016/j.cam.2018.07.046. R. Anderson, M. Sandsten, in 2018 26th European Signal Processing Conference, EUSIPCO 2018, vol. 2018-September. Classification of EEG signals based on mean-square error optimal time-frequency features (EUSIPCO European Signal Processing ConferenceRome, Italy, 2018), pp. 106–110. https://doi.org/10.23919/EUSIPCO.2018.8553130. J. Yang, H. Singh, E. L. Hines, F. Schlaghecken, D. D. Iliescu, M. S. Leeson, N. G. Stocks, Channel selection and classification of electroencephalogram signals: an artificial neural network and genetic algorithm-based approach. Artif. Intell. Med.55(2), 117–126 (2012). https://doi.org/10.1016/j.artmed.2012.02.001. A. Craik, Y. He, J. L. Contreras-Vidal, Deep learning for electroencephalogram (EEG) classification tasks: a review. J. Neural Eng.16(3), 031001 (2019). https://doi.org/10.1088/1741-2552/ab0ab5. R. Hellerstedt, M. Johansson, Competitive semantic memory retrieval: temporal dynamics revealed by event-related potentials. PLoS ONE. 11(2), 0150091 (2016). I. Bramão, M. Johansson, Benefits and costs of context reinstatement in episodic memory: an ERP study. J. Cogn. Neurosci.29(1), 52–64 (2017). A. M. Sayeed, D. L. Jones, Optimal kernels for nonstationary spectral estimation. IEEE Trans Signal Process. 43(2), 478–491 (1995). P. D. Welch, The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms. IEEE Trans. Audio Electroacoustics. AU-15(2), 70–73 (1967). L. Cohen, Time-Frequency Analysis (Prentice-Hall, Upper Saddle River, 1995). S. T. Smith, Covariance, subspace, and intrinsic Cramér-Rao bounds. 
IEEE Trans. Signal Process. 53(5), 1610–1630 (2005). C. M. Bishop, Pattern Recognition and Machine Learning. Information science and statistics (Springer, Springer-Verlag Berlin, Heidelberg, 2006). G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, B. Kingsbury, Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Signal Proc. Mag.29(6), 82–97 (2012). https://doi.org/10.1109/MSP.2012.2205597. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (MIT Press, Cambridge, USA, 2016). H. -G. Stark, Wavelets and Signal Processing - An Application-Based Introduction (Springer, Springer-Verlag Berlin, Heidelberg, 2005). The data in the example was collected by Ines Bramao and Mikael Johansson, Department of Psychology, Lund University. The work has been funded by the eSSENCE Strategic Research Program. Open access funding provided by Lund University. Division of Mathematical Statistics, Centre for Mathematical Sciences, Lund University, Sölvegatan 18, Lund, 221 00, Sweden Rachele Anderson & Maria Sandsten Rachele Anderson Maria Sandsten The idea for the paper was conceived jointly. RA investigated, implemented, and had the main responsibility for writing the manuscript. During the whole process, MS has provided feedback and guidance. Both authors read and approved the final manuscript. Correspondence to Rachele Anderson. The procedures followed to collect the data were in accordance with the Helsinki Declaration of 1975, as revised in 2000 and 2008. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. 
The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. Anderson, R., Sandsten, M. Time-frequency feature extraction for classification of episodic memory. EURASIP J. Adv. Signal Process. 2020, 19 (2020). https://doi.org/10.1186/s13634-020-00681-8 Time-frequency features Non-stationary signals EEG signals Optimal spectral estimation
\begin{definition}[Definition:Cullen Prime] A '''Cullen prime''' is a Cullen number: :$n \times 2^n + 1$ which is also prime. \end{definition}
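The first few Cullen primes can be located with a direct search. The Miller-Rabin bases used below give a deterministic answer only for numbers under roughly $3.3 \times 10^{24}$; for larger Cullen numbers the test is probabilistic (though no composite passing all twelve bases is known), so this is an illustrative sketch rather than a certified search:

```python
def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    """Miller-Rabin primality test; deterministic for n < 3.3e24 with these bases."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in bases:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def cullen_primes(limit):
    """Indices n <= limit for which the Cullen number n * 2^n + 1 is prime."""
    return [n for n in range(1, limit + 1) if is_prime(n * 2 ** n + 1)]

print(cullen_primes(200))  # [1, 141]
```

The gap between the first two Cullen primes ($n = 1$ and $n = 141$) already hints at how sparse they are.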
\begin{document} \title{On solving systems of random linear disequations} \author{ {G\'abor Ivanyos}\thanks{ {Computer and Automation Research Institute of the Hungarian Academy of Sciences, Kende u. 13-17, H-1111 Budapest, Hungary.} {E-mail: \tt [email protected].} {Research partially supported by the DIAMANT mathematics cluster in the Netherlands and the NWO visitor's grant Algebraic Aspects of Quantum Computing. Part of research was conducted during the author's visit at the Technical University of Eindhoven in fall 2006. A sketch containing some of the ideas presented in this paper appeared in the appendix of \cite{FIMSS-stoc}. } } } \maketitle \begin{abstract} An important subcase of the hidden subgroup problem is equivalent to the shift problem over abelian groups. An efficient solution to the latter problem would serve as a building block of quantum hidden subgroup algorithms over solvable groups. The main idea of a promising approach to the shift problem is reduction to solving systems of certain random disequations in finite abelian groups. The random disequations are actually generalizations of linear functions distributed nearly uniformly over those not containing a specific group element in the kernel. In this paper we give an algorithm which finds the solutions of a system of $N$ random linear disequations in an abelian $p$-group $A$ in time polynomial in $N$, where $N=\log^{O(q)}\size{A}$, and $q$ is the exponent of $A$. \end{abstract} \section{Introduction} In \cite{FIMSS-stoc,FIMSS-journal} the following computational problem emerged as an important ingredient of quantum algorithms for the hidden subgroup problem in solvable groups. Below $A$ stands for an abelian group and $c$ is a real number at least $1$. 
\vbox{\begin{quote} {\textsc{Random Linear Disequations}}$(A,c)$ - search version\\ \textit{Oracle input:} Sample from a distribution over characters of the finite abelian group $A$ which is nearly uniform with tolerance $c$ on characters not containing a fixed element $u$ in their kernels. \\ \textit{Output:} The set of elements $u$ with the property above. \end{quote}} A character of $A$ is a homomorphism $\chi$ from $A$ to the multiplicative group of the complex numbers. The kernel $\ker\chi$ of $\chi$ is the set of the group elements on which $\chi$ takes value $1$. The characters of $A$ form a group $A^*$ where the multiplication is defined by taking the product of function values. It is known that $A^*$ is actually isomorphic to $A$. By near uniformity we mean that the distribution deviates from the uniform one within a constant factor expressed by the parameter $c$. The formal definition is the following. We say that a distribution over a finite set $S$ is nearly uniform with a real tolerance parameter $c\geq 1$ over a subset $S'\subseteq S$ if $\mathop{\mathsf{Pr}}(s)=0$ if $s\in S\setminus S'$ and $1/c\size{S'}\leq \mathop{\mathsf{Pr}}(s) \leq c/\size{S'}$ for $s\in S'$. If $u$ is in the expected output then so is $u^t$ for every $t$ relatively prime to the order of $u$ -- these are the elements which generate the same cyclic subgroup as $u$. The output can be represented by any such element. The input is a sequence of random characters drawn independently according to the distribution. For an algorithm working with this kind of input we can interpret an access to an input character as a query. We assume that group elements and characters are represented by strings of $O(\log\size{A})$ bits. Note that it is standard to identify $A^*$ with $A$ using a duality between $A$ and $A^*$ obtained from fixing a basis of $A$ as well as choosing appropriate roots of unity. We may assume that characters are given that way.
The name {\textsc{Random Linear Disequations}}{} is justified by the following. Assume that $A={\mathbb Z}_p^n$ where $p$ is a prime number. Then fixing a $p^{\mbox{th}}$ root of unity gives a one-to-one correspondence between the characters of $A$ and homomorphisms from $A$ to the group ${\mathbb Z}_p$. If we consider $A$ as a vector space over ${\mathbb Z}_p$ (considered as a field) then these homomorphisms are actually the linear functions from $A$ to ${\mathbb Z}_p$. The task is to find the elements of $A$ which fail to satisfy any of the homogeneous linear equations corresponding to the functions. We will show that the search problem {\textsc{Random Linear Disequations}}$(A,c)$ is reducible in time $\mathrm{poly}(\log{\size{A}}+\exp{(A)})$ to the following decision version -- over subgroups $A'$ of $A$ and with slightly bigger tolerance parameter $c'=2c$. \vbox{\begin{quote} {\textsc{Random Linear Disequations}}$(A',c')$ - decision version \\ \textit{Oracle input:} Sample from a distribution over $A'^*$ which is \\ - either nearly uniform on characters not containing a fixed element $u$ in their kernels. \\ - or nearly uniform on the whole $A'^*$. \\ \textit{Task:} Decide which is the case. \end{quote}} The reduction is based on the following. If $A'$ is a subgroup of $A$ and we restrict characters of $A$ to $A'$ then we obtain a nearly uniform distribution over characters of $A'$ not containing $u$ in their kernels. If $u\not\in A'$, this is a nearly uniform distribution over all characters of $A'$. A possible solution of the decision problem could follow the lines below. If the distribution is uniform over all characters then the kernels of the characters from a sufficiently large sample will cover the whole $A'$. Therefore a possible way to distinguish between the two cases is to collect a sufficiently large sample of characters and to check if their kernels cover the whole group $A'$. Unfortunately, this test is coNP-complete already for $A'={\mathbb Z}_3^n$.
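The ${\mathbb Z}_p^n$ special case can be made concrete: identifying a character $\chi_a$ with a vector $a \in {\mathbb Z}_p^n$ via $\chi_a(x) = \omega^{a \cdot x}$, the condition $u \notin \ker\chi_a$ becomes $a \cdot u \not\equiv 0 \pmod p$, and solving the system means finding the elements lying outside every sampled kernel. The brute-force sketch below (exponential in $n$, purely for illustration, with exact uniform sampling, i.e., tolerance $c = 1$) recovers exactly the nonzero multiples of $u$ once the sample is large enough:

```python
import itertools, random

def sample_disequations(p, n, u, N, rng=random):
    """Sample N vectors a uniformly from {a in Z_p^n : a . u != 0 (mod p)},
    i.e., characters chi_a of Z_p^n with u outside their kernel."""
    out = []
    while len(out) < N:
        a = tuple(rng.randrange(p) for _ in range(n))
        if sum(ai * ui for ai, ui in zip(a, u)) % p != 0:
            out.append(a)
    return out

def solve_disequations(p, n, sample):
    """Brute force: the x in Z_p^n with a . x != 0 (mod p) for every sampled a.
    With enough samples only the nonzero multiples t*u (t = 1..p-1) survive."""
    return [x for x in itertools.product(range(p), repeat=n)
            if all(sum(ai * xi for ai, xi in zip(a, x)) % p != 0 for a in sample)]

p, n = 3, 4
u = (1, 0, 2, 1)
sample = sample_disequations(p, n, u, N=200, rng=random.Random(7))
print(solve_disequations(p, n, sample))  # [(1, 0, 2, 1), (2, 0, 1, 2)]
```

Each sampled disequation eliminates any fixed element outside the span of $u$ with probability $1/p$, so after $N = 200$ samples a spurious survivor remains only with probability $(1 - 1/p)^N$ per element.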
Indeed, there is a straightforward reduction from non-colorability of graphs with 3 colors to this problem. In this paper we propose a classical randomized algorithm solving {\textsc{Random Linear Disequations}}{} in $p$-groups. The method is based on replacing the covering condition with a stronger but much more easily testable one, which is still satisfied by a sample of not too many uniformly chosen characters. The running time is polynomial in $\log\size{A}$ if the exponent of $A$ is constant, and apart from the random input the algorithm does not require any further random bits. The structure of this paper is the following. In Section~\ref{bg-sect} we briefly summarize the relationship between {\textsc{Random Linear Disequations}}{} and certain quantum hidden subgroup algorithms. Readers not interested in quantum algorithms may skip this part. In Section~\ref{red-sect} we prove that the search version in general abelian groups is reducible to the decision problem in groups of the form ${\mathbb Z}_m^n$. We describe an algorithm for $p$-groups in Section~\ref{alg-sect}. We conclude with some open questions in Section~\ref{concl-sect}. \section{Background} \label{bg-sect} One of the most important challenges in quantum computing is determining the complexity of the so-called hidden subgroup problem (HSP). This paradigm includes as special cases finding orders of group elements (e.g., in the multiplicative group of the integers modulo a composite number as an important factorization tool), computing discrete logarithms and finding isomorphisms between graphs. Shor's seminal work \cite{Shor} gives solutions to the first two problems, and essentially the same method is applicable to the commutative case of the HSP. For the HSP in non-commutative groups (this includes the third problem mentioned above), there are only a few results. Roughly speaking, all the groups in which hidden subgroups can be found efficiently by present algorithms are very close to abelian ones.
In \cite{FIMSS-stoc,FIMSS-journal} we showed that an efficient solution to the following algorithmic problem can be used as an important tool for finding hidden subgroups in solvable groups. \vbox{\begin{quote} \mbox{\textsc{Hidden Shift}}\\ \textit{Oracle input:} Two injective functions $f_{0},f_{1}$ from the abelian group $A$ to some finite set $S$ such that there is an element $0\neq u\in A$ satisfying $f_1(x)=f_0(x+u)$ for every $x\in A$.\\ \textit{Output:} $u$. \end{quote}} Here the oracles for $f_i$ are given by unitary operations $U_i$ which, on input $\ket{x}\ket{0}$, return $\ket{x}\ket{f_i(x)}$. We note that \mbox{\textsc{Hidden Shift}}{} on $A$ is equivalent to the most interesting subcase of the hidden subgroup problem in the semidirect product $A\rtimes {\mathbb Z}_2$, where the non-identity element of ${\mathbb Z}_2$ acts on $A$ by flipping signs and the hidden subgroup is a conjugate of ${\mathbb Z}_2$. We refer the reader interested in this connection to \cite{Rotman} for the definition of semidirect products. The semidirect products of the form above include the dihedral groups $D_n$ of order $2n$: these are the semidirect products of the cyclic groups ${\mathbb Z}_n$ by ${\mathbb Z}_2$. In~\cite{eh00} a two-step procedure is proposed for solving the dihedral hidden subgroup problem. The procedure consists of a polynomial time (in $\log n$) quantum part and an exponential classical post-processing phase without queries. The current best dihedral hidden subgroup algorithm \cite{kup05} has both query and computational complexity exponential in $\sqrt{\log n}$. In \cite{dhi06} variants of the hidden shift problem with not necessarily injective functions are considered. Some special cases -- related to multiplicative number theoretic characters -- are shown to be solvable in polynomial time while the most general case has exponential quantum query complexity. 
This is not the case for our definition of the hidden shift problem as it is equivalent to a hidden subgroup problem which has polynomial query complexity by \cite{ehk04}. In \cite{FIMSS-stoc,FIMSS-journal} the following approach is proposed for solving \mbox{\textsc{Hidden Shift}}{} in certain special cases. It is based on the following procedure, which is actually a version of the usual Fourier sampling in the group $A\times {\mathbb Z}_2$ (rather than in $A\rtimes {\mathbb Z}_2$). See \cite{FMSS} for a description of quantum Fourier sampling in abelian groups. \textsc{Half-Fourier sampling} \begin{enumerate} \setlength{\itemsep}{0mm} \item Create state $$\frac{1}{\sqrt{2\size{A}}} \sum_{x\in A,i\in \{0,1\}}\ket{x}\ket{i}\ket{0}_{S}.$$ \item By querying $f_i$, create state $$\frac{1}{\sqrt{2\size{A}}} \sum_{x\in A,i\in \{0,1\}}\ket{x}\ket{i}\ket{f_i(x)}.$$ \item Measure the third register. If the measured value is $f_0(x)$, the state of the first two registers is $$\frac{1}{\sqrt{2}}\left(\ket{x}\ket{0}+\ket{x+u}\ket{1}\right).$$ \item By computing the quantum Fourier transform of ${A\times {\mathbb Z}_2}$, obtain state $$\frac{1}{2\sqrt{\size{A}}}\sum_{\chi\in A^*} \left((\chi(x)+\chi(x+u))\ket{\chi}\ket{0} +(\chi(x)-\chi(x+u))\ket{\chi}\ket{1}\right).$$ \item Measure and output the first register if the second register contains bit $1$. Otherwise abort. \end{enumerate} The probability of obtaining character $\chi$ as the result of the procedure is \begin{equation} \label{prob-sampling-eq} \frac{1}{\size{A}^2} \sum_{x\in A}\frac{|\chi(x)-\chi(x+u)|^2}{4}= \frac{|1-\chi(u)|^2}{4\size{A}}. 
\end{equation} Note that the probability that the procedure does not abort is \begin{equation*} \sum_{\chi\in A^*} \frac{|1-\chi(u)|^2}{4\size{A}}= \frac{1}{4\size{A}}\sum_{\chi\in A^*} (2-\chi(u)-\overline{\chi(u)})=\frac{1}{2}, \end{equation*} where the last equality follows from the orthogonality relations (for the columns of the character table of $A$) which give $\sum_{\chi\in A^*}\chi(u)=0$ as $u\neq 0$. Obviously, the probability given by (\ref{prob-sampling-eq}) is nonzero if and only if $u$ is not contained in the kernel of the character $\chi$. The strategy for finding $u$ is to determine first the subgroup generated by $u$ from the characters obtained by the procedure above. This reduces \mbox{\textsc{Hidden Shift}}{} to an instance where the abelian group is cyclic. This special instance is in turn equivalent to the dihedral hidden subgroup problem, which we can solve by an exhaustive search or even with Kuperberg's more efficient approach. (Note, however, that the complexity of our present method for finding the subgroup generated by $u$ dominates the complexity of the whole procedure in both cases.) In fact, we only see the subgroup of $A^*$ generated by the characters $\chi$ observed. Therefore we may equalize the probabilities of characters that generate equal subgroups of $A^*$ as follows. If character $\chi$ occurs as a result of the procedure then we draw uniformly a number $0<j<m$ coprime to the exponent $m$ of $A$ and replace $\chi$ with $\chi^j$. We show below that we obtain a distribution which is nearly uniform on the characters $\chi$ such that $\chi(u)\neq 1$. \begin{lemma} \label{sieve-lemma} Let $\omega$ be a primitive $m_0^{\text{th}}$ root of unity, let $m$ be a multiple of $m_0$ and let $m_1$ be the product of the prime divisors of $m$. Then $$\sum_{0<j<m, (j,m)=1}\omega^j =\left\{\begin{array}{ll} \mu(m_0)\frac{m}{m_1}\phi(\frac{m_1}{m_0}), & \mbox{if $m_0|m_1$} \\ 0 & \mbox{otherwise,} \end{array}\right. 
$$ where $\phi$ is Euler's totient function and $\mu$ is the M\"obius function. \end{lemma} \begin{proof} For $k|m$ we define $f(k)=\sum_{1\leq j\leq k,(j,k)=1}\omega^{\frac{m}{k}j}$. Then for every $k|m$ we have $\sum_{j=1}^k\omega^{\frac{m}{k}j}=\sum_{d|k}f(d)$. (This follows from the fact that every positive integer $j\leq k$ can be uniquely written in the form $j=\frac{k}{d}\times j'$ where $d|k$, $1\leq j'\leq d$ and $(j',d)=1$.) Let $F(k)=\sum_{d|k}f(d)$ for $k|m$. Then, by the M\"obius inversion formula, $f(m)=\sum_{d|m}\mu(\frac{m}{d})F(d)$. We know that $F(d)=d$ if $\omega^\frac{m}{d}=1$ and $F(d)=0$ otherwise. Hence the product $\mu(\frac{m}{d})F(d)$ is nonzero if and only if $m_0|\frac{m}{d}|m_1$. Therefore $f(m)=\sum_{\frac{m}{m_1}|d|\frac{m}{m_0}}\mu(\frac{m}{d})d = \frac{m}{m_1} \sum_{d'|\frac{m_1}{m_0}}\mu(\frac{m_1}{d'})d' = \mu(m_0)\frac{m}{m_1} \sum_{d|\frac{m_1}{m_0}}\mu(\frac{m_1/m_0}{d})d$, if $m_0|m_1$ and $f(m)=0$ otherwise. We conclude by observing that if $\ell=p_1\cdots p_r$ where the $p_i$s are pairwise distinct primes then $\sum_{d|\ell}\mu(\frac{\ell}{d})d= \sum_{I\subseteq \{1,\ldots,r\}}(-1)^{r-|I|}\prod_{i\in I}p_i =\prod_{i=1}^{r}(p_i-1)=\phi(\ell)$. \end{proof} \begin{lemma} \label{almost-uni-lemma} Let $1\neq \omega$ be an $m^{\text{th}}$ root of unity. Then $$\frac{1}{2}\leq \frac{1}{2\phi(m)}\sum_{0<j\leq m,(m,j)=1}|1-\omega^j|^2 \leq 2.$$ \end{lemma} \begin{proof} Let $m_0$ be the order of $\omega$ and let $m_1$ be the product of the prime divisors of $m$. Observe that $|1-\omega^j|^2=2-\omega^j-\omega^{-j}$. Therefore $\frac{1}{2\phi(m)}\sum_{0<j\leq m,(m,j)=1}|1-\omega^j|^2= 1-\frac{1}{\phi(m)}\sum_{0<j\leq m,(m,j)=1}\omega^j$. By Lemma~\ref{sieve-lemma}, the sum on the right hand side is zero unless $m_0|m_1$. If $m_0|m_1$ then that sum has absolute value $\frac{1}{\phi(m)}\frac{m}{m_1}\phi(\frac{m_1}{m_0})$. 
The assertion for $m_0>2$ follows from $\phi(m)=\frac{m}{m_1}\phi(m_1)= \frac{m}{m_1}\phi(m_0)\phi(\frac{m_1}{m_0}) \geq 2\frac{m}{m_1}\phi(\frac{m_1}{m_0})$. If $m_0=2$ then $\omega=-1$ and the sum is $2$. \end{proof} From Lemma~\ref{almost-uni-lemma} we immediately obtain the following. \begin{proposition} \label{almost-uni-prop} Let $f_0,f_1: A\rightarrow S$ be an instance of \mbox{\textsc{Hidden Shift}}{} in a finite abelian group $A$ with solution $u$. Then, if we follow \textsc{Half-Fourier sampling} by raising the resulting character to the $j^{\text{th}}$ power, where $j$ is a random integer coprime to the exponent of $A$, we obtain an instance of {\textsc{Random Linear Disequations}}$(A,2)$. \end{proposition} \begin{proof} Let $m$ stand for the exponent of $A$. Then by (\ref{prob-sampling-eq}), the probability of $\chi$ in the resulting distribution is $$\frac{1}{2\phi(m)\size{A}} \sum_{(j,m)=1}|1-\chi(u)^j|^2.$$ By Lemma~\ref{almost-uni-lemma}, this probability is between $\frac{1}{2\size{A}}$ and $\frac{2}{\size{A}}$. \end{proof} \section{Reductions} \label{red-sect} In this section we show that the search version of {\textsc{Random Linear Disequations}}{} is reducible to its decision version in abelian groups of the form ${\mathbb Z}_m^n$. For a finite abelian group $A$ we denote by $A^*$ its character group. Assume that $H$ is a subgroup of $A$. Then taking restrictions of characters of $A$ to $H$ gives a homomorphism from $A^*$ onto $H^*$. The kernel of this map is the set of characters which contain $H$ in their kernels. This set can be identified with the character group $(A/H)^*$. It follows that every character of $H$ has exactly $\size{(A/H)^*}$ extensions to $A$. Consequently, if a distribution is nearly uniform on characters of $A$ then restriction to $H$ results in a nearly uniform distribution over characters of $H$ with the same tolerance parameter. 
The same holds in the reverse direction: taking uniformly random extensions of characters of $H$ to $A$ transforms a nearly uniform distribution over $H^*$ to a nearly uniform distribution over $A^*$ with the same parameter. And a similar statement holds for distributions nearly uniform on the characters of $H$ which do not contain a specific $u\in H$ in their kernels. For restricting characters of $A$ not containing the element $u\in A$ in their kernel we have the following. \begin{lemma} \label{ext1-lemma} Let $H$ be a subgroup of a finite abelian group $A$, let $\chi$ be a character of $H$ and let $u\in A$. Then the number of characters $\psi$ of $A$ extending $\chi$ such that $\psi(u)\neq 1$ is $$ \left\{\begin{array}{ll} \size{A:H}(k-1)/k & \mbox{if $k_0=k$} \\ \size{A:H} & \mbox{if $k_0<k$}, \end{array}\right. $$ where $k$ is the smallest positive integer such that $k\cdot u\in H$ and $\chi(k\cdot u)=1$, and $k_0$ is the smallest positive integer such that $k_0\cdot u\in H$. \end{lemma} \begin{proof} If $k_0<k$ then $\chi(k_0u)\neq 1$, therefore $\psi(u)\neq 1$ for every $\psi$ extending $\chi$ to $A$. Assume that $k_0=k$. Let $A'$ be the subgroup of $A$ generated by $H$ and $u$ and let $K=\{x\in H\mid \chi(x)=1\}$. Then every character of $A$ extending $\chi$ takes value 1 on $K$, therefore it is sufficient to consider the characters of $A'/K$ extending the characters of $H/K$. Equivalently, we may assume that $K=1$, and $k$ is the order of $u$. Then $A'$ is the direct product of the cyclic group generated by $u$ and $H$. In this case there exists exactly one character of $A'$ extending $\chi$ which takes value $1$ on $u$. Thus there are $\frac{k-1}{k}\size{A'/H}$ characters of $A'$ with the desired property extending $\chi$ and each of them has $\size{A/A'}$ extensions to $A$. \end{proof} Assume that we have an instance of the search version of {\textsc{Random Linear Disequations}}$(A,c)$ with solution $u\in A$. 
Then, by the lemma above, restricting characters of $A$ to $H$ gives an instance of the search version of {\textsc{Random Linear Disequations}}$(H,2c)$. This gives rise to the following. \begin{proposition} Let $A$ be an abelian group and let $p$ be the largest prime factor of $|A|$. Then, for every number $c\geq 1$, the search version of {\textsc{Random Linear Disequations}}$(A,c)$ is reducible to $O(p\cdot \mathrm{polylog}\size{A})$ instances of the decision version of {\textsc{Random Linear Disequations}}$(H,2c)$ over subgroups $H$ of $A$ in time $\mathrm{poly}(p\cdot \log\size{A})$. \end{proposition} \begin{proof} The first step of the reduction is a call to the decision version of {\textsc{Random Linear Disequations}}$(A,c)$. If it returns that the distribution is nearly uniform over the whole $A^*$ then we are done. Otherwise there is an element $u\in A$ such that the probability of drawing $\chi\in A^*$ is zero if and only if $\chi(u)=1$. We perform an iterative search for the subgroup generated by $u$ using {\textsc{Random Linear Disequations}}{} over certain subgroups $U$ of $A$. Initially set $U=A$. Assume first that $U$ is not cyclic. Then we can find a prime $q$ such that the $q$-Sylow subgroup $Q$ of $U$ (the subgroup consisting of the elements of $U$ of $q$-power order) is not cyclic. But then the factor group $Q/qQ$ is not cyclic either and we can find two subgroups $M_1$ and $M_2$ of $Q$ of index $q$ in $Q$ such that the index of the intersection $M=M_1\cap M_2$ in $Q$ is $q^2$. This implies $Q/M\cong {\mathbb Z}_q^2$. Let $Q'$ be the complement of $Q$ in $U$. (Recall that $Q'$ consists of the elements of $U$ of order prime to $q$.) Let $N=M+Q'$. Then $M=N\cap Q$ and $U/N\cong Q/(N\cap Q)=Q/M\cong {\mathbb Z}_q^2$. The group ${\mathbb Z}_q^2$ has $q+1$ subgroups of order $q$: these are the lines through the origin in the finite plane ${\mathbb Z}_q^2$. As a consequence, there are exactly $q+1$ subgroups $U_1,\ldots,U_{q+1}$ of index $q$ in $U$ containing $N$. 
Furthermore, we can find these subgroups in time polynomial in $\log\size{U}$ and $q$. Note that $U=U_1\cup\ldots\cup U_{q+1}$. Therefore, by an exhaustive search, using the decision version of {\textsc{Random Linear Disequations}}$(U_i)$ for $i=1,\ldots,q+1$, we find an index $i$ such that $u\in U_i$. Then we proceed with $U_i$ in place of $U$. In at most $\log\size{A}$ rounds we arrive at a cyclic subgroup $U$ containing the desired element $u$. If $U$ is cyclic then the maximal subgroups of $U$ are $U_1,\ldots,U_l$ where the prime factors of $\size{U}$ are $p_1,\ldots,p_l$ and $U_i=p_iU$. Again using the decision version of {\textsc{Random Linear Disequations}}$(U_i)$ for $i=1,\ldots,l$, we either find a proper subgroup $U_i$ containing the solution $u$ or find that the solution cannot be contained in any proper subgroup of $U$. In the latter case the required subgroup is $U$. \end{proof} Finally, for the decision problem we have the following. \begin{proposition} \label{distrext-prop} Let $A={\mathbb Z}_{m_1}\oplus\ldots\oplus {\mathbb Z}_{m_n}$ be a finite abelian group of exponent $m$. (So $m$ is the least common multiple of $m_1,\ldots,m_n$.) Then, for every real number $c\geq 1$, {\textsc{Random Linear Disequations}}$(A,c)$ is reducible to {\textsc{Random Linear Disequations}}$({\mathbb Z}_m^n,c)$ in time $\mathrm{poly}(\log\size{A})$. \end{proposition} \begin{proof} We can embed $A$ into ${{A'}}={\mathbb Z}_m^n$ as $\frac{m}{m_1}{\mathbb Z}_m\oplus\ldots\oplus\frac{m}{m_n}{\mathbb Z}_m$. We replace a character of $A$ with a random extension to $A'$. As every character of $A$ has $\size{A'/A}$ extensions, this transforms an instance of {\textsc{Random Linear Disequations}}$(A,c)$ to {\textsc{Random Linear Disequations}}$(A',c)$. 
\end{proof} \section{Algorithms for $p$-groups} \label{alg-sect} In this section we describe an algorithm which solves the decision version of {\textsc{Random Linear Disequations}}{} in polynomial time over groups of the form ${\mathbb Z}_{p^{k}}^n$, for every fixed prime power $p^k$. For a better understanding of the main ideas it will be convenient to start with a brief description of an algorithm which works in the case $k=1$. This case is -- implicitly -- also solved in \cite{FIMSS-journal} and in Section 3 of \cite{FIMSS-stoc}. Here we present a method similar to the above-mentioned solutions. The principal difference is that here we use polynomials rather than tensor powers. This -- actually slight -- modification of the approach makes it possible to generalize the algorithm to the case $k>1$. For the next few paragraphs we assume that $k=1$, i.e., we are working on an instance of {\textsc{Random Linear Disequations}}{} over the group $A={\mathbb Z}_p^n$. We choose a basis of $A$, and fix a primitive $p^{\mbox{th}}$ root of unity $\omega$. Then characters of $A$ are of the form $\chi_x$, where $x\in A$ and for $y\in A$ the value $\chi_x(y)$ is $\omega^{x\cdot y}$, where $x\cdot y=\sum_{i=1}^n x_iy_i$. (Here $x_i$ and $y_i$ are the coordinates of $x$ and $y$, respectively, in terms of the chosen basis. Note that, as $\omega^p=1$, it is meaningful to consider $x\cdot y$ as an element of ${\mathbb Z}_p$.) Using this description of characters, we may -- and will -- assume that the oracle returns the index $x$ rather than the character $\chi_x$ itself. We also consider $A$ as an $n$-dimensional vector space over the finite field ${\mathbb Z}_p$ equipped with the scalar product $x\cdot y$ above. The algorithm will distinguish between a nearly uniform distribution over the whole group $A$ and an arbitrary distribution where the probability of any vector orthogonal to a fixed vector $0\neq u$ is zero. 
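For concreteness, the two kinds of oracle input in the case $k=1$ can be simulated classically as follows. This is an illustrative sketch only: the function names are ours, and an exactly uniform sampler stands in for a nearly uniform one.

```python
import cmath
import random

def chi(x, y, p):
    """Value chi_x(y) = omega^(x . y) for a fixed primitive p-th root of unity."""
    omega = cmath.exp(2j * cmath.pi / p)
    return omega ** (sum(a * b for a, b in zip(x, y)) % p)

def sample_index(p, n, u=None, rng=random):
    """Draw a character index x in Z_p^n: uniformly if u is None, otherwise
    uniformly among the x with x . u != 0 (mod p), i.e. among the characters
    chi_x whose kernel does not contain u."""
    while True:
        x = [rng.randrange(p) for _ in range(n)]
        if u is None or sum(a * b for a, b in zip(x, u)) % p != 0:
            return x

rng = random.Random(0)
p, n, u = 3, 3, [1, 2, 0]
xs = [sample_index(p, n, u, rng) for _ in range(300)]
# In the "disequation" case, u lies in no sampled kernel: chi_x(u) != 1 always.
assert all(abs(chi(x, u, p) - 1) > 1e-9 for x in xs)
```

The distinguishing task of the algorithm is exactly to tell such a sample apart from one produced with `u=None`.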
We claim that in the case of a distribution of the latter type there exists a polynomial $Q\in {\mathbb Z}_p[x_1,\ldots,x_n]$ of degree $p-1$ such that for every $x$ which occurs with nonzero probability we have $Q(x)=0$. Indeed, for any fixed $u$ with the property above, $(\sum u_ix_i)^{p-1}-1$ is such a polynomial by Fermat's little theorem. On the other hand, if the distribution is nearly uniform over the whole group then, for sufficiently large sample size $N$, with high probability there is no nonzero polynomial $Q\in {\mathbb Z}_p[x_1,\ldots,x_n]$ of degree at most $p-1$ such that $Q(a^{(i)})=Q(a_1^{(i)},\ldots,a_n^{(i)})=0$ for every vector $a^{(i)}$ from the sample $a^{(1)},\ldots,a^{(N)}$. This can be seen as follows. Let us consider the vector space $W$ of polynomials of degree at most $p-1$ in $n$ variables over the field ${\mathbb Z}_p$. Substituting a vector $a=(a_1,\ldots,a_n)$ into a polynomial $Q$ is obviously a linear function on $W$. Therefore for any $N_1\leq N$, the polynomials vanishing at $a^{(1)},\ldots,a^{(N_1)}$ form a linear subspace $W_{N_1}$ of $W$. Furthermore, by the Schwartz--Zippel lemma \cite{Schwartz,Zippel}, the probability that a uniformly drawn vector $a$ from ${\mathbb Z}_p^n$ is a zero of a particular nonzero polynomial of degree $p-1$ (or less) is at most $(p-1)/p$. This implies that with probability at least $\frac{1}{cp}$, the subspace $W_{N_1+1}$ is strictly smaller than $W_{N_1}$ unless $W_{N_1}$ is zero. Consequently, if the sample size $N$ is proportional to $cp\cdot \dim W$ then, with high probability, $W_N$ will be zero. Also, we can compute $W_N$ by solving a system of $N$ linear equations over ${\mathbb Z}_p$ in $\dim W=\binom{n+p-1}{n}=n^{O(p)}$ variables. Note that the key ingredient of the argument above -- the Schwartz--Zippel bound on the probability of hitting a nonzero of a polynomial -- is also known from coding theory. 
Namely, we can encode such a polynomial $Q(x)=Q(x_1,\ldots,x_n)$ with the vector consisting of all the values $Q(a)=Q(a_1,\ldots,a_n)$ taken at all the vectors $a=(a_1,\ldots,a_n)$ in ${\mathbb Z}_p^n$. This is a linear encoding of $W$ and the image of $W$ under such an encoding is a well-known generalized Reed--Muller code. The relative distance of this code is $(p-1)/p$. We turn to the general case: below we present an algorithm solving {\textsc{Random Linear Disequations}}{} in the group $A={\mathbb Z}_{p^k}^n$ where $k$ is a positive integer. As in the case $k=1$, the characters of the group $A={\mathbb Z}_{p^k}^n$ can be indexed by elements of $A$ when we fix a basis of $A$ and a primitive ${p^k}^{\mbox{th}}$ root of unity $\omega$: $\chi_x(y)=\omega^{x\cdot y}$, where $x\cdot y$ is the sum of the products of the coordinates of $x$ and $y$ in terms of the fixed basis. Again, we can consider $x\cdot y$ as an element of ${\mathbb Z}_{p^k}$. In view of this, it is sufficient to present a method that distinguishes between a nearly uniform distribution over ${\mathbb Z}_{p^k}^n$, and an arbitrary one where vectors which are orthogonal to a fixed vector $u\neq 0$ have zero probability. The method is based on the idea outlined above for the case $k=1$, combined with an encoding of elements of ${\mathbb Z}_{p^k}$ by $k$-tuples of elements of ${\mathbb Z}_p$. The encoding is the usual base $p$ expansion, that is, the bijection $\delta:\sum_{j=0}^{k-1} a_j p^j\mapsto (a_0,\ldots,a_{k-1}) $. We can extend this map to a bijection between ${\mathbb Z}_{p^k}^n$ and ${\mathbb Z}_p^{kn}$ in a natural way. Obviously the image under $\delta$ of a nearly uniform distribution over ${\mathbb Z}_{p^k}^n$ is nearly uniform over ${\mathbb Z}_p^{kn}$. 
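The digit encoding $\delta$ and its coordinate-wise extension are straightforward to implement; the short sketch below (with our own function names) just records the convention used in the rest of this section:

```python
def delta_scalar(a, p, k):
    """Base-p digit vector of a in Z_{p^k}: sum a_j p^j  ->  (a_0, ..., a_{k-1})."""
    digits = []
    for _ in range(k):
        digits.append(a % p)
        a //= p
    return digits

def delta(vec, p, k):
    """Coordinate-wise extension of delta, a bijection Z_{p^k}^n -> Z_p^{kn}."""
    out = []
    for a in vec:
        out.extend(delta_scalar(a, p, k))
    return out

# Example with p = 3, k = 2: 7 = 1 + 2*3, so delta(7) = (1, 2).
assert delta([7, 2], 3, 2) == [1, 2, 2, 0]
```

Since $\delta$ is a bijection, pushing a nearly uniform distribution on ${\mathbb Z}_{p^k}^n$ through it indeed gives a nearly uniform distribution on ${\mathbb Z}_p^{kn}$.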
In the next few lemmas we are going to show that for every $0\neq u\in {\mathbb Z}_{p^k}^n$ there is a polynomial $Q$ of ``low'' degree in $kn$ variables such that for every vector $a\in {\mathbb Z}_{p^k}^n$ not orthogonal to $u$, the codeword $\delta(a)$ is a zero of $Q$. We begin with a polynomial expressing the {\em carry term} of the addition of two base $p$ digits. \begin{lemma} There is a polynomial $C(x,y) \in {\mathbb Z}_p[x,y]$ of degree at most $2p-2$ such that for every pair of integers $a,b \in \{0,\ldots,p-1\}$, $C(a,b)=0$ if $a+b<p$ and $C(a,b)=1$ otherwise. \end{lemma} \begin{proof} For $i\in\{0,\ldots,p-1\}$, let $L_i(z) \in {\mathbb Z}_p[z]$ denote the Lagrange polynomial $\prod_{0 \leq j < p: j \neq i}(z-j)/(i-j)$. We have $L_i(i) = 1$ and $L_i(j) = 0$ for $j \neq i$. Define $C(x,y) = \sum_{0 \leq i,j < p: i+j \geq p}L_i(x)L_j(y)$. \end{proof} Using the carry polynomial $C(x,y)$ we can also express the base $p$ digits of sums by polynomials. \begin{lemma} \label{add-lem} For every integer $T\geq 1$, there exist polynomials $Q_i$ from the polynomial ring ${\mathbb Z}_p[y_{1,0},\ldots,y_{1,k-1},\ldots,y_{T,0},\ldots,y_{T,k-1}]$, ($i=0,\ldots,k{-}1$) with $\deg Q_i\leq (2p-2)^i$ such that $$\delta\left({\sum_{t=1}^T a_t \;\mod{p^k}}\right) = \left(Q_0(\delta(a_1),\ldots,\delta(a_T)),\ldots, Q_{k-1}(\delta(a_1),\ldots,\delta(a_T))\right)$$ for every $a_1,\ldots,a_T \in {\mathbb Z}_{p^k}$. \end{lemma} \begin{proof} The proof is accomplished by induction on $k$. For $k=1$ the statement is obvious: we can take $Q_0=\sum_{t=1}^T y_{t,0}$. Now let $k > 1$. Again set $Q_0=\sum_{t=1}^T y_{t,0}$ and for $t=2,\ldots,T$ set $C_t=C\left((\sum_{j=1}^{t-1} y_{j,0}), y_{t,0}\right)$. 
Then for every $a_1,\ldots,a_T \in {\mathbb Z}_{p^k}$, the digits $s_0,\ldots,s_{k-1}$ of the sum $s=\sum_{t=1}^T a_t \mod{p^k}$ satisfy \begin{eqnarray*} s_0&=&Q_0({a}_{1,0},\ldots,{a}_{T,0}) \mod{p},\\ \sum_{j=1}^{k-1}s_jp^{j-1} &=& \sum_{t=1}^T \lint{a_t/p} + \sum_{t=2}^T c_t \mod{p^{k-1}}, \end{eqnarray*} where $c_t=C_t({a}_{1,0},\ldots,{a}_{t,0})$. In other words, the $0^{\text{th}}$ digit of the sum $s$ is a linear polynomial in the $a_{t,0}$, and, for $1\leq j\leq k-1$, the $j^{\text{th}}$ digit is the $(j{-}1)^{\text{th}}$ digit of the right-hand side of the second equation. There we have a sum of $2T-1$ terms and each digit of each term is a polynomial of degree at most $2p{-}2$ in the $a_{t,j}$. Therefore we can conclude using the inductive hypothesis applied to that (longer) sum. \end{proof} Recall that we extend $\delta$ to ${\mathbb Z}_{p^k}^n$ in the natural way. To be specific, for $a=(a_1,\ldots,a_n)\in{\mathbb Z}_{p^k}^n$ we define $\delta(a)\in {\mathbb Z}_p^{kn}$ as the vector $(a_{1,0},\ldots,a_{n,k{-}1})\in{\mathbb Z}_p^{kn}$ where $a_{i,j}$ is the $j^{\text{th}}$ coordinate of $\delta(a_i) \in {\mathbb Z}_p^k$. We can express the digits of the scalar product of a vector from ${\mathbb Z}_{p^k}^n$ with a fixed one as follows. \begin{lemma} \label{dotprod-lem} For every $u\in{\mathbb Z}_{p^k}^n$, there exist polynomials $Q_i\in{\mathbb Z}_p[x_{1,0},\ldots,x_{n,k{-}1}]$ of total degree at most $(2p-2)^i$, for $i=0,\ldots,k-1$, such that $\delta({a\cdot u})=(Q_0(\delta({a})),\ldots,Q_{k-1}(\delta({a})))$ for every $a\in{\mathbb Z}_{p^k}^n$. \end{lemma} \begin{proof} The statement follows from Lemma~\ref{add-lem} by repeating the coordinate $x_i$ $u_i$ times, and taking the sum of all the terms obtained this way modulo $p^k$. \end{proof} In order to simplify notation, for the rest of this section we set $x_{jn+i}=x_{i,j}$ ($j=0,\ldots,k-1,\,i=1,\ldots,n$). 
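The carry polynomial $C(x,y)$ used above is easy to check numerically; the sketch below (function names are ours) evaluates the Lagrange interpolation formula point by point over ${\mathbb Z}_p$:

```python
def lagrange(p, i, z):
    """Evaluate the Lagrange polynomial L_i at z over Z_p (p prime)."""
    num, den = 1, 1
    for j in range(p):
        if j != i:
            num = num * (z - j) % p
            den = den * (i - j) % p
    return num * pow(den, p - 2, p) % p  # den^(p-2) = den^(-1) mod p

def carry(p, a, b):
    """C(a,b) = sum of L_i(a) L_j(b) over i+j >= p; a polynomial of degree <= 2p-2."""
    return sum(lagrange(p, i, a) * lagrange(p, j, b)
               for i in range(p) for j in range(p) if i + j >= p) % p

# C(a,b) is 1 exactly when adding the digits a and b produces a carry:
assert all(carry(5, a, b) == (1 if a + b >= 5 else 0)
           for a in range(5) for b in range(5))
```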
For every positive integer $D$, let ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$ be the linear subspace of polynomials of ${\mathbb Z}_p[x_1,\ldots,x_{nk}]$ whose total degree is at most $D$ and whose partial degrees are at most $p{-}1$ in each variable. Together with Fermat's little theorem, the previous lemma implies a polynomial characterization over ${\mathbb Z}_p$ of vectors in ${\mathbb Z}_{p^k}^n$ that are not orthogonal to a fixed vector $u\in{\mathbb Z}_{p^k}^n$. \begin{lemma} \label{nonfull-lem} Let $D=\frac{(p-1)((2p-2)^k-1)}{2p-3}$. For every $u\in{\mathbb Z}_{p^k}^n$, there exists a polynomial $Q_u\in{\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$ such that for every $a\in{\mathbb Z}_{p^k}^n$, $a \cdot u \neq 0 \mod{p^k}$ if and only if $L_{\delta({a})}\cdot Q_u= 0 $. \end{lemma} \begin{proof} Let $Q=\prod_{j=0}^{k-1} (Q_j^{p-1}-1)$, where the polynomials $Q_j$ come from Lemma~\ref{dotprod-lem}. This polynomial has the required total degree. To ensure that the partial degrees are at most $p{-}1$, we replace $x_i^{p}$ terms with $x_i$ until every partial degree is at most $p-1$. Let $Q_u$ be the polynomial obtained this way. Then $Q_u$ and $Q$ encode the same function over ${\mathbb Z}_p^{nk}$. Therefore, since $L_{\delta({a})}\cdot Q_u=Q_u(\delta({a}))$, the polynomial $Q_u$ satisfies the required conditions. \end{proof} It remains to show that if $N$ is large then, with high probability, for a sample $a_1,\ldots,a_N$ taken according to a nearly uniform distribution over ${\mathbb Z}_p^{nk}$, there is no nonzero polynomial in ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$ vanishing at all the points $a_1,\ldots,a_N$, where $D$ is as in Lemma~\ref{nonfull-lem}. Furthermore, we also need an efficient method for demonstrating this. To this end, for every $a\in{\mathbb Z}_p^{nk}$, we denote by $\ell_{a}$ the linear function over polynomials in ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$ that satisfies $\ell_{a}(Q)=Q({a})$. 
Deciding whether the zero polynomial is the only polynomial in ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$ such that $\ell_{a_i}(Q)=0$ for $i=1,\ldots,N$ amounts to determining the rank of the $N\times \Delta$ matrix whose entries are $\ell_{a_i}(M)$, where $M$ runs over the monomials in ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$. Here $\Delta$ stands for the dimension of ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$. Note that $\Delta\leq \binom{kn+D-1}{kn}$. The image of the space ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$ under the linear map $L: Q\mapsto (\ell_a(Q))_{a\in {\mathbb Z}_p^{nk}}$ is known as a generalized Reed--Muller code with minimal weight at least $(p-s)p^{nk-r-1}\geq p^{nk-\cint{D/(p-1)}}$, where $r,s$ are integers such that $0\leq s<p-1$ and $\mathop{\mathsf{Min}}\{D,(p-1)nk\}=r(p-1)+s$, cf. \cite{ak98}. For $N_1\leq N$, let $W_{N_1}$ stand for the subspace of polynomials in ${\mathbb Z}_p^D[x_1,\ldots,x_{nk}]$ vanishing at all the points $a_1,\ldots,a_{N_1}$. The minimal weight bound above gives that for $N_1<N$, $$\mathop{\mathsf{Pr}}(W_{N_1+1}<W_{N_1}|W_{N_1}\neq 0)\geq \frac{1}{c} \cdot p^{-\cint{D/(p-1)}}.$$ Here $c$ is the parameter of near uniformity. The formula above implies that if $$N=O(cp^{\cint{D/(p-1)}}\dim {\mathbb Z}_p^D[x_1,\ldots,x_{nk}])= c(pnk)^{O((2p)^k)},$$ then with probability at least $2/3$, $W_N$ will be zero -- provided that we have a nearly uniform distribution with parameter $c$. (In the second bound we have used that $D=\frac{(p-1)((2p-2)^k-1)}{2p-3}=O((2p)^k)$.) Together with the remark on rank computation this gives the following. \begin{theorem} \label{main-thm} {\textsc{Random Linear Disequations}}$({\mathbb Z}_{p^k}^n,c)$ can be solved in time $c(pnk)^{O((2p)^k)}$ with (one-sided) error $1/3$. In particular, for every fixed prime power $p^k$, and for every fixed constant $c$, {\textsc{Random Linear Disequations}}$({\mathbb Z}_{p^k}^n,c)$ can be solved in time polynomial in $n$. 
\end{theorem} \qed \section{Concluding remarks} \label{concl-sect} We have shown that for any fixed prime power $p^k$, the problem {\textsc{Random Linear Disequations}}{} over the group ${\mathbb Z}_{p^k}^n$ can be solved in time which is polynomial in the rank $n$. Actually, if we let the exponent $p^k$ grow as well then our method runs in time polynomial in the rank $n$ but exponential in the exponent $p^k$. Note that a brute force algorithm which takes a sample of size $O(knp^k\log p)$ (the kernels of that many random characters cover the whole group with high probability) and performs an exhaustive search over all the elements of ${\mathbb Z}_{p^k}^n$ runs in time $(p^{kn})^{O(1)}$, which is polynomial in the exponent $p^k$ and exponential in $n$. It would be interesting to know if there exists a method which solves {\textsc{Random Linear Disequations}}{} in time polynomial in both $n$ and $p^k$. Also, the method of this paper relies heavily on the fact that the exponent of the group is a prime power. Existence of an algorithm for {\textsc{Random Linear Disequations}}{} in ${\mathbb Z}_m^n$ of complexity polynomial in $n$ for fixed $m$ having more than one prime divisor appears to be open, even in the smallest case $m=6$. \end{document}
Rybicki Press algorithm The Rybicki–Press algorithm is a fast algorithm for inverting a matrix whose entries are given by $A(i,j)=\exp(-a\vert t_{i}-t_{j}\vert )$, where $a\in \mathbb {R} $[1] and where the $t_{i}$ are sorted in increasing order.[2] The key observation behind the Rybicki–Press algorithm is that the matrix inverse of such a matrix is always a tridiagonal matrix (a matrix with nonzero entries only on the main diagonal and the two adjoining ones), and tridiagonal systems of equations can be solved efficiently (to be more precise, in linear time).[1] It is a computational optimization of a general set of statistical methods developed to determine whether two noisy, irregularly sampled data sets are, in fact, dimensionally shifted representations of the same underlying function.[3][4] The most common use of the algorithm is in the detection of periodicity in astronomical observations, such as for detecting quasars.[4] The method has been extended to the Generalized Rybicki–Press algorithm for inverting matrices with entries of the form $A(i,j)=\sum _{k=1}^{p}a_{k}\exp(-\beta _{k}\vert t_{i}-t_{j}\vert )$.[2] The key observation in the Generalized Rybicki–Press (GRP) algorithm is that the matrix $A$ is a semi-separable matrix with rank $p$ (that is, a matrix whose upper half, not including the main diagonal, is that of some matrix with matrix rank $p$ and whose lower half is also that of some possibly different rank $p$ matrix[2]) and so can be embedded into a larger band matrix, whose sparsity structure can be leveraged to reduce the computational complexity. 
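The tridiagonality claim is easy to check numerically. The sketch below builds a small matrix of this form and inverts it by plain Gauss–Jordan elimination (so it does not use the fast algorithm itself, only illustrates the structure it exploits); all entries off the three central diagonals come out as zero up to round-off:

```python
import math

def ou_matrix(ts, a):
    """A[i][j] = exp(-a * |t_i - t_j|) for sorted sample times ts."""
    return [[math.exp(-a * abs(ti - tj)) for tj in ts] for ti in ts]

def invert(M):
    """Gauss-Jordan inverse with partial pivoting; O(n^3), fine for a demo."""
    n = len(M)
    A = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        d = A[col][col]
        A[col] = [v / d for v in A[col]]
        for r in range(n):
            if r != col:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [row[n:] for row in A]

ts = [0.0, 0.7, 1.1, 2.5, 3.0, 4.2]  # irregular but sorted sample times
Ainv = invert(ou_matrix(ts, a=1.3))
# The inverse is tridiagonal: entries with |i - j| > 1 vanish.
assert all(abs(Ainv[i][j]) < 1e-9
           for i in range(len(ts)) for j in range(len(ts)) if abs(i - j) > 1)
```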
As the matrix $A\in \mathbb {R} ^{n\times n}$ has a semi-separable rank of $p$, the computational complexity of solving the linear system $Ax=b$ or of calculating the determinant of the matrix $A$ scales as ${\mathcal {O}}\left(p^{2}n\right)$, thereby making it attractive for large matrices.[2] The fact that the matrix $A$ is a semi-separable matrix also forms the basis for the celerite[5] library, which is a library for fast and scalable Gaussian process regression in one dimension[6] with implementations in C++, Python, and Julia. The celerite method[6] also provides an algorithm for generating samples from a high-dimensional distribution. The method has found attractive applications in a wide range of fields, especially in astronomical data analysis.[7][8] See also • Invertible matrix • Matrix decomposition • Multidimensional signal processing • System of linear equations References 1. Rybicki, George B.; Press, William H. (1995), "Class of fast methods for processing irregularly sampled or otherwise inhomogeneous one-dimensional data", Physical Review Letters, 74 (7): 1060–1063, arXiv:comp-gas/9405004, Bibcode:1995PhRvL..74.1060R, doi:10.1103/PhysRevLett.74.1060, PMID 10058924, S2CID 17436268 2. Ambikasaran, Sivaram (2015-12-01). "Generalized Rybicki Press algorithm". Numerical Linear Algebra with Applications. 22 (6): 1102–1114. arXiv:1409.7852. doi:10.1002/nla.2003. ISSN 1099-1506. S2CID 1627477. 3. Rybicki, George B.; Press, William H. (October 1992). "Interpolation, realization, and reconstruction of noisy, irregularly sampled data". The Astrophysical Journal. 398: 169. Bibcode:1992ApJ...398..169R. doi:10.1086/171845. 4. MacLeod, C. L.; Brooks, K.; Ivezic, Z.; Kochanek, C. S.; Gibson, R.; Meisner, A.; Kozlowski, S.; Sesar, B.; Becker, A. C. (2011-02-10). "Quasar Selection Based on Photometric Variability". The Astrophysical Journal. 728 (1): 26. arXiv:1009.2081. Bibcode:2011ApJ...728...26M. doi:10.1088/0004-637X/728/1/26. ISSN 0004-637X. S2CID 28219978. 5. 
"celerite — celerite 0.3.0 documentation". celerite.readthedocs.io. Retrieved 2018-04-05. 6. Foreman-Mackey, Daniel; Agol, Eric; Ambikasaran, Sivaram; Angus, Ruth (2017). "Fast and Scalable Gaussian Process Modeling with Applications to Astronomical Time Series". The Astronomical Journal. 154 (6): 220. arXiv:1703.09710. Bibcode:2017AJ....154..220F. doi:10.3847/1538-3881/aa9332. ISSN 1538-3881. S2CID 88521913. 7. Foreman-Mackey, Daniel (2018). "Scalable Backpropagation for Gaussian Processes using Celerite". Research Notes of the AAS. 2 (1): 31. arXiv:1801.10156. Bibcode:2018RNAAS...2...31F. doi:10.3847/2515-5172/aaaf6c. ISSN 2515-5172. S2CID 102481482. 8. Parviainen, Hannu (2018). "Bayesian Methods for Exoplanet Science". Handbook of Exoplanets. Springer, Cham. pp. 1–24. arXiv:1711.03329. doi:10.1007/978-3-319-30648-3_149-1. ISBN 9783319306483. External links • Implementation of the Generalized Rybicki Press algorithm • celerite library on GitHub
Holomorphic h-principle for compact manifolds

The Oka principle for Stein manifolds says (roughly) that the only obstructions for "things" are topological obstructions (for instance, every smooth complex vector bundle admits a holomorphic structure, etc.). Is there a similar principle (at least in some cases) for compact complex manifolds? Or at least some version of an h-principle for compact manifolds? (asked by Vamsi)

Comment (Jason Starr): Hi Vamsi, as Johannes states, of course the answer is "no" in general. However, for fiber bundles of very special type, and up to replacing holomorphic sections by meromorphic sections, there are some such results in the compact case. The basic example is Tsen's theorem: a $\mathbb{P}^1$-bundle over a Riemann surface always has a holomorphic section.

Answer (Johannes Ebert): I don't think you get an $h$-principle for compact complex manifolds. Example: given a complex line bundle $L \to M$, it admits a holomorphic structure iff the image of its Chern class in $H^2 (M;\mathcal{O})$ is zero. Similarly, the group of holomorphic line bundles which are topologically trivial is the cokernel of the homomorphism $H^1 (M; \mathbb{Z}) \to H^1 (M;\mathcal{O})$. Complex tori show that Oka's principle fails for compact complex manifolds. The Gromov–Phillips h-principle for closed manifolds is false as well; immersions are the only special case applying to closed manifolds that I am aware of. All other versions (e.g. submersions, symplectic structures, positively or negatively curved metrics) fail, and each of them fails in a fairly spectacular manner. There are some exceptions to the rule, but in general I would say that one needs noncompactness to push away all possible obstructions to infinity.
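For reference, both line-bundle statements in the answer can be read off from the long exact cohomology sequence of the exponential sheaf sequence $0 \to \mathbb{Z} \to \mathcal{O} \to \mathcal{O}^* \to 0$:

$$H^1(M;\mathbb{Z}) \to H^1(M;\mathcal{O}) \to H^1(M;\mathcal{O}^*) \xrightarrow{\ c_1\ } H^2(M;\mathbb{Z}) \to H^2(M;\mathcal{O}).$$

Here $H^1(M;\mathcal{O}^*)$ is the group of holomorphic line bundles, so a topological line bundle admits a holomorphic structure iff its Chern class lies in the image of $c_1$, which by exactness means iff it maps to zero in $H^2(M;\mathcal{O})$; and the topologically trivial holomorphic line bundles form $\ker c_1$, which is the cokernel of $H^1(M;\mathbb{Z}) \to H^1(M;\mathcal{O})$.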
\begin{document} \title{Time reversal of Volterra processes driven stochastic differential equations} \author{L. Decreusefond} \address{Institut TELECOM - TELECOM ParisTech - CNRS LTCI\\ Paris, France} \email{[email protected]} \date{} \keywords{fractional Brownian motion, Malliavin calculus, stochastic differential equations, time reversal.} \subjclass[2010]{60G18, 60H10, 60H07, 60H05} \thanks{The author would like to thank the anonymous referees for their thorough reviews; their comments and suggestions significantly contributed to improving the quality of this contribution.} \begin{abstract} We consider stochastic differential equations driven by some Volterra processes. Under time reversal, these equations are transformed into past dependent stochastic differential equations driven by a standard Brownian motion. We are then in a position to derive existence and uniqueness of solutions of the Volterra driven SDE considered at the beginning. \end{abstract} \maketitle{} \markboth{Time reversal of fBm driven SDEs}{L. Decreusefond} \section{Introduction} \label{sec_introduction} Fractional Brownian motion (fBm for short) of Hurst index $H\in (0,\, 1)$ is the Gaussian process which admits the following representation: For any $t\ge 0$, \begin{equation*} B^H(t)=\int_0^t K_H(t,\, s)\text{ d} B(s) \end{equation*} where $B$ is a one dimensional Brownian motion and $K_H$ is a triangular kernel, i.e. $K_H(t,s)=0$ for $s>t$, the definition of which is given in \eqref{defdekh}. Fractional Brownian motion is probably the first process which is not a semi-martingale and for which it is still interesting to develop a stochastic calculus. That means we want to define a stochastic integral and solve stochastic differential equations driven by such a process. From the very beginning of this program, two approaches do exist. One approach is based on the Hölder continuity or the finite $p$-variation of the fBm sample-paths. The other way to proceed relies on the Gaussianity of fBm. 
The former is mainly deterministic and was initiated by Zähle \cite{zahle98}, Feyel, de la Pradelle \cite{feyel96} and Russo, Vallois \cite{russo95,russo96}. Then came the notion of rough paths introduced by Lyons \cite{lyons98}, whose application to fBm relies on the work of Coutin, Qian \cite{MR1883719}. These works have been extended in the subsequent works \cite{MR2319719,DN:flots-06,MR2144230,MR2257130,MR2144234,MR2228694,MR2348055,MR2359060,MR2138713,MR2268434,MR2236631}. A new way of thinking came with the independent but related works of Feyel, de la Pradelle \cite{EJP2006-35} and Gubinelli \cite{MR2091358}. The integral with respect to fBm was shown to exist as the unique process satisfying some characterization (analytic in the case of \cite{EJP2006-35}, algebraic in \cite{MR2091358}). As a byproduct, this showed that almost all the existing integrals throughout the literature were all the same as they all satisfy these two conditions. Behind each approach but the last two is a construction of an integral defined for a regularization of fBm; then the whole work is to show that, under some convenient hypothesis, the approximate integrals converge to a quantity which is called the stochastic integral with respect to fBm. The main tool to prove the convergence is either integration by parts in the sense of fractional deterministic calculus, or enrichment of the fBm by some iterated integrals proved to exist independently or by analytic continuation \cite{Unterberger:2009rt,Unterberger:2009yq}. In the probabilistic approach \cite{MR2000m:60059,alos00,decreusefond02,MR1956051,decreusefond03.1,decreusefond96_2,MR2397797,MR1893308,MR2493996}, the idea is also to define an approximate integral and then prove its convergence. It turns out that the key tool is here the integration by parts in the sense of Malliavin calculus. 
In dimension greater than one, with the deterministic approach, one knows how to define the stochastic integral and prove existence and uniqueness of fBm driven SDEs for fBm with Hurst index greater than $1/4$. Within the probabilistic framework, one knows how to define a stochastic integral for any value of $H$ but one cannot prove existence and uniqueness of SDEs whatever the value of $H$. The primary motivation of this work is to circumvent this problem. In \cite{decreusefond03.1,decreusefond96_2}, we defined stochastic integrals with respect to fBm as a ``damped-Stratonovitch'' integral with respect to the underlying standard Brownian motion. This integral is defined as the limit of Riemann-Stratonovitch sums, the convergence of which is proved after an integration by parts in the sense of Malliavin calculus. Unfortunately, this manipulation generates non-adaptiveness: Formally the result can be expressed as \begin{equation*} \int_0^t u(s)\circ \text{ d} B^H(s)=\delta({\mathcal K}^*_t u)+\operatorname{trace}({\mathcal K}^*_t \nabla u), \end{equation*} where ${\mathcal K}$ is defined by \begin{equation*} {\mathcal K} f(t)=\frac{d}{dt}\int_0^t K_H(t,\,s)f(s) \text{ d} s \end{equation*} and ${\mathcal K}^*_t$ is the adjoint of ${\mathcal K}$ in ${\mathcal L}^2([0,\, t],\, {\mathbf R}}\def\N{{\mathbf N})$. In particular, there exists $k$ such that \begin{equation*} {\mathcal K}^*_tf(s)=\int_s^t k(t,u) f(u)\text{ d} u \end{equation*} for any $f\in {\mathcal L}^2([0,\, t],\, {\mathbf R}}\def\N{{\mathbf N})$ so that even if $u$ is adapted (with respect to the Brownian filtration), the process $(s\mapsto {\mathcal K}^*_tu(s))$ is anticipative. However, the stochastic integral process $(t\mapsto \int_0^t u(s)\circ \text{ d} B^H(s))$ remains adapted, hence the anticipative aspect is, in some sense, artificial. The motivation of this work is to show that, up to time reversal, we can work with adapted processes and Itô integrals. 
The time-reversal properties of fBm were already studied in \cite{MR2346508} in a different context: It was shown there that the time-reversal of the solution of an fBm-driven SDE of the form \begin{equation*} dY(t)=u(Y(t))\text{ d} t + \text{ d} B^H(t) \end{equation*} is still a process of the same form. With a slight adaptation of our method to fBm-driven SDEs with drift, one should recover the main theorem of \cite{MR2346508}. In what follows, there is no restriction about the dimension but we need to assume that any component of $B^H$ is an fBm of Hurst index greater than $1/2.$ Consider that we want to solve the equation \begin{equation} \label{eq:1} X_t = x + \int_0^t \sigma(X_s)\circ \text{ d} B^H(s), \, 0\le t \le T \end{equation} where $\sigma$ is a deterministic function whose properties will be fixed below. It turns out that it is essential to investigate the more general equations: \begin{equation} \label{eq:A} \tag{A} X_{r,\, t}=x+\int_r^t \sigma(X_{r,\, s})\circ \text{ d} B^H(s), \ 0\le r\le t\le T. \end{equation} The strategy is then the following: We will first consider the reciprocal problem: \begin{equation} \label{eq:B} \tag{B} Y_{r,\, t} =x-\int_r^t \sigma(Y_{s,\, t})\circ \text{ d} B^H(s), \ 0\le r\le t\le T. \end{equation} The first critical point is that when we consider $\{Z_{r,\, t}:=Y_{t-r,\, t}, \, r\in [0,\, t]\},$ this process solves an adapted, past dependent, stochastic differential equation with respect to a standard Brownian motion. Moreover, because $K_H$ is lower-triangular and sufficiently regular, the trace term vanishes in the equation defining $Z$. We have then reduced the problem to an SDE with coefficients dependent on the past, a problem which can be handled by the usual contraction methods. 
We do not claim that the results presented are new (for instance see the brilliant monograph \cite{MR2604669} for detailed results obtained via rough paths theory) but it seems interesting to have purely probabilistic methods which show that fBm driven SDEs do have strong solutions which are homeomorphisms. Moreover, the approach given here shows the irreducible difference between the cases $H<1/2$ and $H>1/2$: The trace term only vanishes in the latter situation, so that such an SDE is merely a usual SDE with past-dependent coefficients. This representation may be fruitful, for instance, to analyze the support and prove the absolute continuity of solutions of \eqref{eq:1}. This paper is organized as follows: After some preliminaries on fractional Sobolev spaces, often called Besov-Liouville spaces, we address, in Section \ref{sec_stoch-integr}, the problem of Malliavin calculus and time reversal. This part is interesting in its own right since stochastic calculus of variations is a framework oblivious to time. Constructing such a notion of time is achieved using the notion of resolution of the identity as introduced in \cite{ustunel95_1,MR1071539}. We then introduce the second key ingredient, which is the notion of strict causality or quasi-nilpotence, see \cite{zakai92} for a related application. In Section~\ref{sec_time-reversal}, we show that solving Equation \eqref{eq:B} reduces to solving a past dependent stochastic differential equation with respect to a standard Brownian motion, see Equation \eqref{eq:C} below. Then, we prove existence, uniqueness and some properties of this equation. Technical lemmas are postponed to Section \ref{sec:proofs}. \section{Besov-Liouville Spaces} \label{sec_preliminaries} Let $T>0$ be a fixed real number. For a measurable function $f\, :\, [0,\, T]\to {\mathbf R}}\def\N{{\mathbf N}^n$, we define $\tau_{\scriptscriptstyle{T}} f$ by \begin{equation*} \tau_{\scriptscriptstyle{T}} f(s)=f(T-s) \text{ for any } s\in [0,\, T]. 
\end{equation*} For $t\in [0,\, T]$, $e_tf$ will represent the restriction of $f$ to $[0,\, t]$, i.e., $e_tf=f{\mathbf 1}_{[0,\, t]}.$ For any linear map $A,$ we denote by $A^*_{T}$ its adjoint in ${\mathcal L}^2([0,T];\, {\mathbf R}}\def\N{{\mathbf N}^n).$ For $\eta\in (0,1],$ the space of $\eta$-Hölder continuous functions on $[0,\, T]$ is equipped with the norm \begin{equation*} \| f\|_{\operatorname{Hol}(\eta)}=\sup_{0<s<t<T}\frac{|f(t)-f(s)|}{|t-s|^\eta}+\| f\|_\infty. \end{equation*} Its topological dual is denoted by $\operatorname{Hol}(\eta)^*.$ For $f\in {\mathcal L}^1([0,T];\, {\mathbf R}}\def\N{{\mathbf N}^n;\text{ d} t),$ (denoted by ${\mathcal L}^1$ for short) the left and right fractional integrals of $f$ are defined by~: \begin{align*} (I_{0^+}^{\gamma}f)(x) & = \frac{1}{\Gamma(\gamma)}\int_0^xf(t)(x-t)^{\gamma-1}\text{ d} t\ ,\ x\ge 0,\\ (I_{T^-}^{\gamma}f)(x) & = \frac{1}{\Gamma(\gamma)}\int_x^Tf(t)(t-x)^{\gamma-1}\text{ d} t\ ,\ x\le T, \end{align*} where $\gamma>0$ and $I^0_{0^+}=I^0_{T^-}=\operatorname{Id}.$ For any $\gamma\ge 0$, $p,q\ge 1,$ any $f\in {\mathcal L}^p$ and $g\in {\mathcal L}^q$ where $p^{-1}+q^{-1}\le 1+\gamma$, we have~: \begin{equation} \label{int_parties_frac} \int_0^T f(s)(I_{0^+}^\gamma g)(s)\text{ d} s = \int_0^T (I_{T^-}^\gamma f)(s)g(s)\text{ d} s. \end{equation} The Besov-Liouville space $I^\gamma_{0^+}({\mathcal L}^p):= {\mathcal I}_{\gamma,p}^+$ is usually equipped with the norm~: \begin{equation} \label{normedansIap} \| I^{\gamma}_{0^+}f \| _{ {\mathcal I}_{\gamma,p}^+}=\| f\|_{{\mathcal L}^p}. \end{equation} Analogously, the Besov-Liouville space $I^\gamma_{T^-}({\mathcal L}^p):= {\mathcal I}_{\gamma,p}^-$ is usually equipped with the norm~: \begin{equation*} \| I^{\gamma}_{T^-}f \| _{ {\mathcal I}_{\gamma,p}^-}=\| f\|_{{\mathcal L}^p}. \end{equation*} We then have the following continuity results (see \cite{feyel96,samko93})~: \begin{prop} \label{prop:proprietes_int_RL} \begin{enumerate}[i.] 
\item \label{inclusionLpLq} If $0<\gamma <1,$ $1< p <1/\gamma,$ then $I^\gamma_{0^+}$ is a bounded operator from ${\mathcal L}^p$ into ${\mathcal L}^q$ with $q=p(1-\gamma p)^{-1}.$ \item For any $0< \gamma <1$ and any $p\ge 1,$ ${\mathcal I}_{\gamma,p}^+$ is continuously embedded in $\operatorname{Hol}(\gamma- 1/p)$ provided that $\gamma-1/p>0.$ \item For any $0< \gamma< \beta <1,$ $\operatorname{Hol} (\beta)$ is compactly embedded in ${\mathcal I}_{\gamma,\infty}.$ \item For $\gamma p<1,$ the spaces ${\mathcal I}_{\gamma,p}^+$ and ${\mathcal I}_{\gamma,p}^{-}$ are canonically isomorphic. We will thus use the notation ${\mathcal I}_{\gamma,p}$ to denote any of these spaces. \end{enumerate} \end{prop} \section{Malliavin calculus and time reversal} \label{sec_stoch-integr} Our reference probability space is $\Omega={\mathcal C}_0([0,T],\, {\mathbf R}}\def\N{{\mathbf N}^n),$ the space of ${\mathbf R}}\def\N{{\mathbf N}^n$-valued, continuous functions, null at time $0$. The Cameron-Martin space is denoted by ${\mathbf H}$ and is defined as \begin{math} {\mathbf H}=I^1_{0^+}({\mathcal L}^2([0,T])). \end{math} In what follows, the space ${\mathcal L}^2([0,\, T])$ is identified with its topological dual. We denote by $\kappa$ the canonical embedding from ${\mathbf H}$ into $\Omega$. The probability measure ${\mathbf P}$ on $\Omega$ is such that the canonical map $W\, :\, \omega \mapsto (\omega(t), \, t\in [0,\, T])$ defines a standard $n$-dimensional Brownian motion. 
A mapping $\phi$ from $\Omega$ into some separable Hilbert space ${\mathfrak H}$ is called cylindrical if it is of the form $\phi(w)=\sum_{i=1}^df_i(\<v_{i,1},w\rangle,\cdots,\<v_{i,n},w\rangle) x_i$ where for each $i,$ $f_i\in {\mathcal C}_0^\infty ({\mathbf R}^n, {\mathbf R})$ and $(v_{i,j},\, j=1,\, \cdots,\, n)$ is a sequence of $\Omega^*.$ For such a function we define $\nabla^{\text{\tiny W}} \phi$ as $$ \nabla^{\text{\tiny W}} \phi(w)=\sum_{i=1}^{d}\sum_{j=1}^{n} \partial_j f_i(\<v_{i,1},w\rangle,\cdots,\<v_{i,n},w\rangle){\tilde{v}}_{i,j}\otimes x_i, $$ where $\tilde{v}$ is the image of $v\in \Omega^*$ by the map $(I^1_{0^+}\circ \kappa)^*.$ From the quasi-invariance of the Wiener measure \cite{ustunel_book}, it follows that $\nabla^{\text{\tiny W}} $ is a closable operator on $L^p(\Omega;{\mathfrak H})$, $p\geq 1$, and we will denote its closure with the same notation. The powers of $\nabla^{\text{\tiny W}} $ are defined by iterating this procedure. For $p>1$, $k\in \N$, we denote by ${\mathbb D}_{p,k}({\mathfrak H})$ the completion of ${\mathfrak H}$-valued cylindrical functions under the following norm $$ \|\phi\|_{p,k}=\sum_{i=0}^k \|(\nabla^{\text{\tiny W}}) ^i\phi\|_{L^p(\Omega;\ {\mathfrak H}\otimes {\mathcal L}^p([0,\,T])^{\otimes i})}\,. 
$$ We denote by ${\mathbb L}_{p,1}$ the space ${\mathbb D}_{p,1}({\mathcal L}^p([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)).$ The divergence, denoted $\delta^{\text{\tiny W}}$, is the adjoint of $\nabla^{\text{\tiny W}} $: $v$ belongs to $\operatorname{Dom}_p \delta^{\text{\tiny W}}$ whenever for any cylindrical $\phi,$ \begin{equation*} |\esp{\int_0^T v_s \nabla^{\text{\tiny W}} _s\phi\text{ d} s}|\le \, c \lVert \phi\rVert_{L^p} \end{equation*} and for such a process $v,$ $$\esp{\int_0^T v_s \nabla^{\text{\tiny W}} _s\phi\text{ d} s}=\esp{ \phi\, \delta^{\text{\tiny W}} v}.$$ We introduced the temporary notation $W$ for standard Brownian motion to clarify the forthcoming distinction between a standard Brownian motion and its time reversal. Actually, the time reversal of a standard Brownian motion is also a standard Brownian motion and thus both of them ``live'' in the same Wiener space. We now make precise how their respective Malliavin gradients and divergences are linked. Consider $B=(B(t),\, t\in [0,\, T])$ an $n$-dimensional standard Brownian motion and $\check{B}^T=(B(T)-B(T-t),\, t\in [0,\, T])$ its time reversal. 
Consider the following map \begin{align*} \Theta_T \, : \, \Omega & \longrightarrow \Omega\\ \omega & \longmapsto \check{\omega}=\omega(T)-\tau_{\scriptscriptstyle{T}}\omega ,\end{align*} and the commutative diagram \begin{equation*} \begin{CD} {\mathcal L}^2 @>\tau_{\scriptscriptstyle{T}} >> {\mathcal L}^2\\ @V{I^1_{0^+}}VV @VV{I^1_{0^+}}V\\ \Omega \supset {\mathbf H}\qquad @>>\Theta_T>\qquad {\mathbf H}\subset \Omega \end{CD} \end{equation*} Note that $\Theta_T^{-1}=\Theta_T$ since $\omega(0)=0.$ For a function $f\in {\mathcal C}^\infty_b({\mathbf R}}\def\N{{\mathbf N}^{nk})$, we define \begin{align*} \nabla_r f(\omega(t_1),\cdots,\, \omega(t_k))&=\sum_{j=1}^k\partial_j f (\omega(t_1),\cdots,\, \omega(t_k)) {\mathbf 1}_{[0,\, t_j]}(r)\text{ and }\\ \check{\nabla}_r f(\check{\omega}(t_1),\cdots,\, \check{\omega}(t_k))&=\sum_{j=1}^k\partial_j f (\check{\omega}(t_1),\cdots,\, \check{\omega}(t_k)) {\mathbf 1}_{[0,\, t_j]}(r). \end{align*} The operator $\nabla=\nabla^B$ (respectively $\check{\nabla}=\nabla^{\check{B}}$) is the Malliavin gradient associated with a standard Brownian motion (respectively its time reversal). Since, \begin{equation*} f(\check{\omega}(t_1),\cdots,\, \check{\omega}(t_k))=f(\omega(T)-\omega(T-t_1),\cdots,\, \omega(T)-\omega(T-t_k)), \end{equation*} we can consider $f(\check{\omega}(t_1),\cdots, \,\check{\omega}(t_k))$ as a cylindrical function with respect to the standard Brownian motion. As such its gradient is given by \begin{equation*} \nabla_r f(\check{\omega}(t_1),\cdots,\, \check{\omega}(t_k))=\sum_{j=1}^k\partial_j f (\check{\omega}(t_1),\cdots, \check{\omega}(t_k)){\mathbf 1}_{[T-t_j,\, T]}(r). \end{equation*} We thus have, for any cylindrical function $F$, \begin{equation} \label{eq:4} \nabla F\circ \Theta_T(\omega)=\tau_{\scriptscriptstyle{T}} \check{\nabla} F(\check{\omega}). 
\end{equation} Since $\Theta^*_T{\mathbf P}={\mathbf P}$ and $\tau_{\scriptscriptstyle{T}}$ is continuous from ${\mathcal L}^p$ into itself for any $p$, it is then easily shown that the spaces ${\mathbb D}_{p,\, k}$ and $\check{{\mathbb D}}_{p, \, k}$ (with obvious notations) coincide for any $p,\, k$ and that \eqref{eq:4} holds for any element of one of these spaces. Hence we have proved the following theorem: \begin{theorem} \label{thm:cdelta} For any $p\ge 1$ and any integer $k$, the spaces ${\mathbb D}_{p,\, k}$ and $\check{{\mathbb D}}_{p, \, k}$ coincide. For any $F\in {\mathbb D}_{p,\, k}$ for some $p,\, k$, \begin{equation*} \nabla (F\circ \Theta_T)=\tau_{\scriptscriptstyle{T}} \check{\nabla} (F\circ \Theta_T), {\mathbf P} \text{ a.s..} \end{equation*} \end{theorem} By duality, an analog result follows for divergences. \begin{theorem} \label{thm:taut_delta} A process $u$ belongs to the domain of $\delta$ if and only if $\tau_{\scriptscriptstyle{T}} u$ belongs to the domain of $\check{\delta}$ and then, the following equality holds: \begin{equation} \label{eq:7} \check{\delta}(u(\check{\omega}))(\check{\omega})=\delta(\tau_{\scriptscriptstyle{T}} u(\check{\omega}))(\omega)=\delta(\tau_{\scriptscriptstyle{T}} u\circ \Theta_T)(\omega). \end{equation} \end{theorem} \begin{proof} For $u\in {\mathcal L}^2$, for cylindrical $F$, we have on the one hand: \begin{equation*} \esp{F(\check{\omega})\check{\delta} u(\check{\omega})}=\esp{( \check{\nabla} F(\check{\omega}),\, u)_{{\mathcal L}^2}}, \end{equation*} and on the other hand, \begin{align*} \esp{( \check{\nabla} F(\check{\omega}),\, u)_{{\mathcal L}^2}}&=\esp{(\tau_{\scriptscriptstyle{T}} \nabla F\circ \Theta_T(\omega),\, u)_{{\mathcal L}^2}}\\ &=\esp{(\nabla F\circ \Theta_T(\omega),\, \tau_{\scriptscriptstyle{T}} u)_{{\mathcal L}^2}}\\ &=\esp{F\circ \Theta_T(\omega)\delta(\tau_{\scriptscriptstyle{T}} u)(\omega)}\\ &=\esp{F(\check{\omega})\delta(\tau_{\scriptscriptstyle{T}} u)(\omega)}. 
\end{align*} Since this is valid for any cylindrical $F$, (\ref{eq:7}) holds for $u\in {\mathcal L}^2$. Now, for $u$ in the domain of divergence (see \cite{nualart.book,ustunel_book}), \begin{equation*} \delta u = \sum_{i} \Bigl( (u,\, h_i)_{{\mathcal L}^2}\delta h_i-(\nabla u,h_i\otimes h_i)_{{\mathcal L}^2\otimes{\mathcal L}^2}\Bigr), \end{equation*} where $(h_i,\, i\in \N)$ is an orthonormal basis of ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$. Thus, we have \begin{align*} \check{\delta} (u(\check{\omega}))(\check{\omega}) &= \sum_{i} \Bigl( (u(\check{\omega}),\, h_i)_{{\mathcal L}^2}\check{\delta} h_i(\check{\omega})-(\check{\nabla} u(\check{\omega}),h_i\otimes h_i)_{{\mathcal L}^2\otimes{\mathcal L}^2}\Bigr)\\ &=\sum_{i} \Bigl( (u(\check{\omega}),\, h_i)_{{\mathcal L}^2}\delta(\tau_{\scriptscriptstyle{T}} h_i)(\omega)-(\nabla u(\check{\omega}),\tau_{\scriptscriptstyle{T}} h_i\otimes h_i)_{{\mathcal L}^2\otimes{\mathcal L}^2}\Bigr)\\ &=\sum_{i} \Bigl( (\tau_{\scriptscriptstyle{T}} u(\check{\omega}),\, \tau_{\scriptscriptstyle{T}} h_i)_{{\mathcal L}^2}\delta(\tau_{\scriptscriptstyle{T}} h_i)(\omega)-(\nabla \tau_{\scriptscriptstyle{T}} u(\check{\omega}),\tau_{\scriptscriptstyle{T}} h_i\otimes \tau_{\scriptscriptstyle{T}} h_i)_{{\mathcal L}^2\otimes{\mathcal L}^2}\Bigr), \end{align*} where we have taken into account that $\tau_{\scriptscriptstyle{T}}$ is an involution. Since $(h_i,\, i\in \N)$ is an orthonormal basis of ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$, identity (\ref{eq:7}) is satisfied for any $u$ in the domain of $\delta$. \end{proof} \subsection{Causality and quasi-nilpotence} \label{sec:caus-quasi-nilp} In anticipative calculus, the notion of trace of an operator plays a crucial role; we refer to \cite{0635.47002} for more details on trace. \begin{defn} Let $V$ be a bounded map from ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$ into itself. 
The map $V$ is said to be trace-class whenever for one complete orthonormal basis (CONB) $(h_n,\, n\ge 1)$ of ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$, \begin{equation*} \sum_{n\ge 1} |(Vh_n,\, h_n)_{{\mathcal L}^2}|\text{ is finite.} \end{equation*} Then, the trace of $V$ is defined by \begin{equation*} \operatorname{trace}(V)= \sum_{n\ge 1} (Vh_n,\, h_n)_{{\mathcal L}^2}. \end{equation*} \end{defn} It is easily shown that the notion of trace does not depend on the choice of the CONB. \begin{defn} A family $E$ of projections $(E_\lambda$, $\lambda \in [0,1])$ in ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$ is called a resolution of the identity if it satisfies the conditions \begin{enumerate} \item $E_0=0$ and $E_1=\operatorname{Id}$. \item $E_\lambda E_\mu=E_{\lambda\wedge \mu}$. \item $\lim_{\mu\downarrow \lambda}E_\mu =E_\lambda$ for any $\lambda\in [0,\, 1)$ and $\lim_{\mu\uparrow 1}E_\mu=\operatorname{Id}.$ \end{enumerate} \end{defn} For instance, the family $E=(e_{\lambda T}, \, \lambda\in [0,\, 1])$ is a resolution of the identity in ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n).$ \begin{defn} A partition $\pi$ of $[0,\, T]$ is a sequence $\{0=t_0<t_1<\ldots <t_n=T\}$. Its mesh is denoted by $|\pi|$ and defined by $|\pi|=\sup_{i}|t_{i+1}-t_i|.$ \end{defn} Causality plays a crucial role in what follows. The next definition is just the formalization in terms of operators of the intuitive notion of causality. \begin{defn} A continuous map $V$ from ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$ into itself is said to be $E$-causal if and only if the following condition holds: \begin{equation*} E_\lambda VE_\lambda = E_\lambda V \text{ for any } \lambda \in [0,\, 1]. \end{equation*} \end{defn} For instance, an operator $V$ in integral form $ Vf(t)=\int_0^T V(t,s)f(s)\text{ d} s$ is causal if and only if $V(t,s)=0$ for $s\ge t$, i.e., computing $Vf(t)$ needs only the knowledge of $f$ up to time $t$ and not after. 
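For instance, with the notations of Section~\ref{sec_preliminaries}, the left fractional integral $I^\gamma_{0^+}$ is causal for the resolution $(e_{\lambda T},\, \lambda\in [0,\, 1])$: it is of the above integral form with kernel
\begin{equation*}
V(t,\,s)=\frac{1}{\Gamma(\gamma)}\,(t-s)^{\gamma-1}{\mathbf 1}_{[0,\, t)}(s),
\end{equation*}
which vanishes for $s\ge t$, so that $(I^\gamma_{0^+}f)(t)$ depends only on $e_tf$. On the contrary, the right fractional integral $I^\gamma_{T^-}$, which integrates $f$ over $[t,\, T]$, is not causal for this resolution.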
Unfortunately, this notion of causality is insufficient for our purpose and we are led to introduce the notion of strict causality as in \cite{MR663906}. \begin{defn} Let $V$ be a causal operator. It is a strictly causal operator whenever for any $\varepsilon>0$, there exists a partition $\pi$ of $[0,T]$ such that for any $\pi^\prime=\{0=t_0<t_1<\ldots<t_n=T\}\subset \pi$, \begin{equation*} \| (E_{t_{i+1}}-E_{t_i})V (E_{t_{i+1}}-E_{t_i})\|_{{\mathcal L}^2} < \varepsilon, \text{ for } i=0,\cdots,\, n-1. \end{equation*} \end{defn} Note carefully that the identity map is causal but not strictly causal. Indeed, if $V=\operatorname{Id}$, for any $s<t$, \begin{equation*} \| (E_{t}-E_{s})V (E_{t}-E_{s})\|_{{\mathcal L}^2}= \|E_{t}-E_{s}\|_{{\mathcal L}^2}=1 \end{equation*} since $E_{t}-E_{s}$ is a projection. However, if $V$ is hyper-contractive, we have the following result: \begin{lemma} \label{lem:strict-causality} Assume the resolution of the identity to be either $E=(e_{\lambda T}, \, \lambda\in[0,\,1])$ or $E=(\operatorname{Id}-e_{(1-\lambda) T}, \, \lambda\in[0,\,1])$. If $V$ is an $E$-causal map continuous from ${\mathcal L}^2$ into ${\mathcal L}^p$ for some $p>2$ then $V$ is strictly $E$-causal. \end{lemma} \begin{proof}Let $\pi$ be any partition of $[0,\, T].$ Assume $E=(e_{\lambda T}, \, \lambda\in[0,\,1])$; the very same proof works for the other mentioned resolution of the identity. According to the Hölder formula, we have: For any $ 0\le s<t\le T$, \begin{align*} \| (E_t-E_s) V (E_t-E_{s}) f\|_{{\mathcal L}^2}^2&=\int_s^{t} |V(f {\mathbf 1}_{(s,\, t]})(u)|^2\text{ d} u\\ &\le (t-s)^{1-2/p} \| V(f {\mathbf 1}_{(s,\, t]})\|^2_{{\mathcal L}^{p}}\\ &\le c\, (t-s)^{1-2/p} \|f\|^2_{{\mathcal L}^2}. 
\end{align*} Then, for any $\varepsilon>0$, there exists $\eta>0$ such that $|\pi|<\eta $ implies $\| (E_{t_{i+1}}-E_{t_i})V (E_{t_{i+1}}-E_{t_i})\|_{{\mathcal L}^2}\le \varepsilon$ for any $\{0=t_0<t_1<\ldots<t_n=T\}\subset \pi$ and any $i=0,\cdots,\, n-1.$ \end{proof} The importance of strict causality lies in the next theorem, which we borrow from \cite{MR663906}. \begin{theorem} \label{thm:nilpotent} The set of strictly causal operators coincides with the set of quasi-nilpotent operators, i.e., trace-class operators such that $\operatorname{trace}(V^n)=0$ for any integer $n\ge 1$. \end{theorem} Moreover, we have the following stability theorem. \begin{theorem} \label{thm:stability} The set of strictly causal operators is a two-sided ideal in the set of causal operators. \end{theorem} \begin{defn} Let $E$ be a resolution of the identity in ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$. Consider the filtration ${\mathcal F}^E$ defined as \begin{equation*} {\mathcal F}^E_t=\sigma\{\delta^{\text{\tiny W}} (E_\lambda h), \lambda\le t,\, h\in {\mathcal L}^2\}. \end{equation*} An ${\mathcal L}^2$-valued random variable $u$ is said to be ${\mathcal F}^E$-adapted if for any $h\in {\mathcal L}^2$, the real valued process $<E_\lambda u,\, h>$ is ${\mathcal F}^E$-adapted. We denote by ${\mathbb D}_{p,k}^E({\mathfrak H})$ the set of ${\mathcal F}^E$-adapted random variables belonging to ${\mathbb D}_{p,k}( {\mathfrak H}).$ \end{defn} If $E=(e_{\lambda T}, \, \lambda \in [0,1])$, the notion of ${\mathcal F}^E$-adapted processes coincides with the usual one for the Brownian filtration and it is well known that a process $u$ is adapted if and only if $\nabla^{\text{\tiny W}} _r u(s)=0$ for $r>s.$ This result can be generalized to any resolution of the identity. \begin{theorem}[Proposition 3.1 of \protect\cite{ustunel95_1}] Let $u$ belong to ${\mathbb L}_{p,1}$. Then $u$ is ${\mathcal F}^E$-adapted if and only if $\nabla^{\text{\tiny W}} u$ is $E$-causal. 
\end{theorem} We then have the following key theorem: \begin{theorem} \label{thm:trace_zero}Assume the resolution of the identity to be either $E=(e_{\lambda T}, \, \lambda\in[0,\,1])$ or $E=(\operatorname{Id}-e_{(1-\lambda) T}, \, \lambda\in[0,\,1])$ and that $V$ is an $E$-strictly causal continuous operator from ${\mathcal L}^2$ into ${\mathcal L}^p$ for some $p>2$. Let $u$ be an element of ${\mathbb D}_{2,1}^E({\mathcal L}^2).$ Then, $V\nabla^{\text{\tiny W}} u $ is of trace class and we have $\operatorname{trace}(V\nabla^{\text{\tiny W}} u)=0$. \end{theorem} \begin{proof} Since $u$ is adapted, $\nabla^{\text{\tiny W}} u$ is $E$-causal. According to Theorem \ref{thm:stability}, $V\nabla^{\text{\tiny W}} u$ is strictly causal and the result follows by Theorem \ref{thm:nilpotent}. \end{proof} In what follows, ${E^0}$ is the resolution of the identity in the Hilbert space ${\mathcal L}^2$ defined by $e_{\lambda\, T} f=f{\mathbf 1}_{[0,\, \lambda T]}$ and ${\check{E}^0}$ is the resolution of the identity defined by \begin{math} \check{e}_{\lambda T}f = f{\mathbf 1}_{[(1-\lambda)T,T]}. \end{math} The filtrations ${\mathcal F}^{E^0}$ and ${\mathcal F}^{\check{E}^0}$ are defined accordingly. The next lemma is immediate when $V$ is given in the form $Vf(t)=\int_0^t V(t,\, s) f(s)\text{ d} s$. Unfortunately such a representation as an integral operator is not always available. We give here an algebraic proof to emphasize the importance of causality. \begin{lemma} \label{lem:causalite} Let $V$ be a map from ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$ into itself such that $V$ is ${E^0}$-causal. Let $V^*_T$ be the adjoint of $V$ in ${\mathcal L}^2([0,\, T];\, {\mathbf R}}\def\N{{\mathbf N}^n)$. Then, the map $\tau_{\scriptscriptstyle{T}} V^*_T\tau_{\scriptscriptstyle{T}}$ is ${\check{E}^0}$-causal. 
\end{lemma} \begin{proof} This is a purely algebraic lemma once we have noticed that \begin{equation} \label{eq:2} \tau_{\scriptscriptstyle{T}} e_r=(\operatorname{Id}-e_{T-r})\tau_{\scriptscriptstyle{T}} \text{ for any } 0\le r \le T. \end{equation} Indeed, it suffices to write \begin{multline}\label{eq:8} \tau_{\scriptscriptstyle{T}} e_r f(s)=f(T-s){\mathbf 1}_{[0,\, r]}(T-s)\\ =f(T-s){\mathbf 1}_{[T-r,\, T]}(s)=(\operatorname{Id}-e_{T-r})\tau_{\scriptscriptstyle{T}} f(s), \text{ for any } 0\le s \le T. \end{multline} We have to show that \begin{equation*} e_r \tau_{\scriptscriptstyle{T}} V^* \tau_{\scriptscriptstyle{T}} e_r=e_r \tau_{\scriptscriptstyle{T}} V^* \tau_{\scriptscriptstyle{T}} \text{ or equivalently } e_r \tau_{\scriptscriptstyle{T}} V \tau_{\scriptscriptstyle{T}} e_r =\tau_{\scriptscriptstyle{T}} V \tau_{\scriptscriptstyle{T}} e_r, \end{equation*} since $e_r^*=e_r$ and $\tau_{\scriptscriptstyle{T}}^*=\tau_{\scriptscriptstyle{T}}.$ Now, \eqref{eq:8} yields \begin{equation*} e_r \tau_{\scriptscriptstyle{T}} V \tau_{\scriptscriptstyle{T}} e_r = \tau_{\scriptscriptstyle{T}} V \tau_{\scriptscriptstyle{T}} e_r- e_{T-r} V \tau_{\scriptscriptstyle{T}} e_r. \end{equation*} Use \eqref{eq:8} again to obtain \begin{equation*} e_{T-r} V \tau_{\scriptscriptstyle{T}} e_r=e_{T-r} V (\operatorname{Id} -e_{T-r})\tau_{\scriptscriptstyle{T}}=(e_{T-r} V -e_{T-r} V e_{T-r})\tau_{\scriptscriptstyle{T}}=0, \end{equation*} since $V$ is ${E^0}$-causal. \end{proof} \subsection{Stratonovitch integrals} \label{sec:stoch-integr-with} In what follows, $\eta$ belongs to $(0,1]$ and $V$ is a linear operator. For any $p\ge 2$, we set: \begin{hyp}[$p,\, \eta$] \label{hypA} The linear map $V$ is continuous from ${\mathcal L}^p([0,\, T]; {\mathbf R}^n)$ into the Banach space $\operatorname{Hol}(\eta)$. \end{hyp} \begin{defn} Assume that Hypothesis \ref{hypA}($p,\, \eta$) holds.
The Volterra process associated to $V$, denoted by $W^V$, is defined by \begin{equation*} W^V(t)=\delta^{\text{\tiny W}} \bigl(V({\mathbf 1}_{[0,\, t]})\bigr), \text{ for all } t \in [0, \, T]. \end{equation*} \end{defn} For any subdivision $\pi$ of $[0,\, T]$, i.e., $\pi =\{0=t_0<t_1< \ldots<t_n=T\}$, of mesh $|\pi|$, we consider the Stratonovitch sums: \begin{multline} R^\pi(t,u)=\delta^{\text{\tiny W}}\Bigl(\sum_{t_i\in\pi}\frac{1}{\theta_i} \int_{t_i\wedge t}^{t_{i+1}\wedge t} \kern-3pt Vu(r) \text{ d} r\, {\mathbf 1}_{[t_i,\, t_{i+1})}\Bigr)\\ + \sum_{t_i\in\pi}\frac{1}{\theta_i} \iint\limits_{[t_i\wedge t,t_{i+1}\wedge t]^2} V(\nabla^{\text{\tiny W}} _ru)(s) \text{ d} s\text{ d} r. \label{eq:def_de_RpiT} \end{multline} \begin{defn} \label{def:strat-integr} We say that $u$ is $V$-Stratonovitch integrable on $[0,t]$ whenever the family $R^\pi(t,u),$ defined in (\ref{eq:def_de_RpiT}), converges in probability as $|\pi|$ goes to $0.$ In this case, the limit will be denoted by $\int_0^t u(s) \circ \text{ d} W^V(s).$ \end{defn} \begin{example} \label{fbm_levy} The first example is the so-called L\'evy fractional Brownian motion of Hurst index $H>1/2$, defined as \begin{displaymath} \frac{1}{\Gamma(H+1/2)} \int_0^t (t-s)^{H-1/2}\text{ d} B_s=\delta(I_{T^-}^{H-1/2}({\mathbf 1}_{[0,\, t]})). \end{displaymath} This amounts to saying that $V=I_{T^-}^{H-1/2}.$ Thus, Hypothesis \ref{hypA}$(p,H-1/2-1/p)$ holds provided $p(H-1/2)>1$. \end{example} \begin{example} \label{fbm} The other classical example is the fractional Brownian motion with stationary increments of Hurst index $H>1/2,$ which can be written as \begin{equation*} \int_0^t K_H(t,s)\text{ d} B(s), \end{equation*} where \begin{equation} \label{defdekh} K_H(t,r)=\frac{(t-r)^{H- \frac{1}{2}}}{\Gamma(H+\frac{1}{2})} F(\frac{1}{2}-H,H-\frac{1}{2}, H+\frac{1}{2},1- \frac{t}{r})1_{[0,t)}(r).
\end{equation} The Gauss hypergeometric function $F(\alpha,\beta,\gamma,z)$ (see \cite{nikiforov88}) is the analytic continuation on ${\mathbb C}\times {\mathbb C}\times {\mathbb C} \backslash \{-1,-2,\ldots \}\times \{z\in {\mathbb C},\ |\operatorname{Arg} (1-z)| < \pi\}$ of the power series \begin{displaymath} \sum_{k=0}^{+ \infty} \frac{(\alpha)_k(\beta)_k}{(\gamma)_k k!}z^k, \end{displaymath} and \begin{displaymath} (a)_0=1 \text{ and } (a)_k = \frac{\Gamma(a+k)}{\Gamma(a)}=a(a+ 1)\dots (a+k-1). \end{displaymath} We know from \cite{samko93} that $K_H$ is an isomorphism from ${\mathcal L}^p ([0,1])$ onto ${\mathcal I}_{H+1/2,p}^+$ and \begin{equation*} K_Hf = I_{0^+}^1x^{H-1/2}I_{0^+}^{H-1/2}x^{1/2-H}f . \end{equation*} Consider ${\mathcal K}_H=I_{0^+}^{-1}\circ K_H.$ Then it is clear that \begin{equation*} \int_0^t K_H(t,\, s)\text{ d} B(s)=\int_0^t ({\mathcal K}_H)_T^*({\mathbf 1}_{[0,t]})(s)\text{ d} B(s), \end{equation*} so that we fall within the framework of Definition \ref{def:strat-integr} provided that we take $V=({\mathcal K}_H)_T^*$. Hypothesis \ref{hypA}$(p,H-1/2-1/p)$ is satisfied provided that $p(H-1/2)>1.$ \end{example} The next theorem then follows from \cite{decreusefond03.1}. \begin{theorem} \label{thm:strato_integrable} Assume that Hypothesis \ref{hypA}($p,\, \eta$) holds. Assume that $u$ belongs to ${\mathbb L}_{p,1}.$ Then $u$ is $V$-Stratonovitch integrable, there exists a process, which we denote by $D^{\text{\tiny W}} u$, such that $D^{\text{\tiny W}} u$ belongs to $L^p({\mathbf P}\otimes d s)$ and \begin{equation} \label{eq:6} \int_0^T u(s) \circ \text{ d} W^V(s)=\delta^{\text{\tiny W}} (Vu)+ \int_0^T D^{\text{\tiny W}} u(s)\text{ d} s. \end{equation} The so-called ``trace term'' satisfies the following estimate: \begin{equation} \esp{ \int_0^T |D^{\text{\tiny W}} u(r)|^p \text{ d} r}\le \, c\, T^{p\eta}\| u\|_{{\mathbb L}_{p,\, 1}}^p,\label{eq:121} \end{equation} for some universal constant $c$.
Moreover, for any $r\le T$, $e_ru$ is $V$-Stratonovitch integrable and \begin{equation*} \int_0^r u(s) \circ \text{ d} W^V(s)= \int_0^T (e_ru)(s) \circ \text{ d} W^V(s)=\delta^{\text{\tiny W}} (Ve_ru)+ \int_0^r D^{\text{\tiny W}} u(s)\text{ d} s \end{equation*} and we have the maximal inequality: \begin{equation} \label{eq:3} \esp{ \|\int_0^. u(s)\circ \text{ d} W^V(s)\|_{\operatorname{Hol}(\eta)}^p}\le \, c\, \|u\|_{{\mathbb L}_{p,1}}^p, \end{equation} where $c$ does not depend on $u$. \end{theorem} The main result of this section is the following theorem, which states that the time reversal of a Stratonovitch integral is an adapted integral with respect to the time-reversed Brownian motion. Due to its length, its proof is postponed to Section \ref{sec:substitution-formula}. \begin{theorem} \label{thm:int_strato_inverse} Assume that Hypothesis \ref{hypA}($p,\, \eta$) holds. Let $u$ belong to ${\mathbb L}_{p,1}$ and let $\check{V}_T=\tau_{\scriptscriptstyle{T}} V\tau_{\scriptscriptstyle{T}}$. Assume furthermore that $V$ is ${\check{E}^0}$-causal and that $\check{u}=u\circ \Theta_T^{-1}$ is ${\mathcal F}^{\check{E}^0}$-adapted. Then, \begin{equation} \label{eq:9} \int_{T-t}^{T-r} \tau_{\scriptscriptstyle{T}} u(s)\circ \text{ d} W^V(s) =\int_r^t \check{V}_T({\mathbf 1}_{[r,\, t]} \check{u})(s)\text{ d} \check{B}^T(s), \ 0\le r\le t\le T, \end{equation} where the last integral is an Itô integral with respect to the time-reversed Brownian motion $\check{B}^T(s)=B(T)-B(T-s)=\Theta_T(B)(s).$ \end{theorem} \begin{rem} Note that, at a formal level, we could give an easy proof of this theorem. For instance, consider the L\'evy fBm: a simple computation shows that $\check{V}_T=I^{H-1/2}_{0^+}$ for any $T.$ Thus, we are led to compute $\operatorname{trace}(I^{H-1/2}_{0^+}\nabla u)$.
If we had sufficient regularity, we could write \begin{equation*} \operatorname{trace}(I^{H-1/2}_{0^+}\nabla u)=\int_0^T \int_0^s (s-r)^{H-3/2}\nabla_s u(r)\text{ d} r\text{ d} s=0 , \end{equation*} since $\nabla_s u(r)=0$ for $s>r$ when $u$ is adapted. Obviously, there are many flaws in this line of proof: the operator $I^{H-1/2}_{0^+}\nabla u$ is not regular enough for such an expression of the trace to hold. Moreover, there is absolutely no reason for $\check{V}_T\nabla u$ to be a kernel operator, so we cannot hope for such a formula. This is why we need to work with operators and not with kernels. \end{rem} \section{Volterra driven SDEs} \label{sec_time-reversal} Let ${\mathfrak G}$ be the group of homeomorphisms of ${\mathbf R}^n$. We equip it with the distance $d$ defined by \begin{equation*} d(\varphi,\, \phi)=\rho(\varphi,\, \phi)+\rho(\varphi^{-1},\, \phi^{-1}), \end{equation*} where \begin{equation*} \rho(\varphi,\, \phi)=\sum_{N=1}^\infty 2^{-N} \frac{\sup_{|x|\le N}|\varphi(x)-\phi(x)|}{1+\sup_{|x|\le N}|\varphi(x)-\phi(x)|}\cdotp \end{equation*} Then, ${\mathfrak G}$ is a complete topological group. Consider the equations \begin{equation} \label{eq:Ap} \tag{A} X_{r,\, t}=x+\int_r^t \sigma(X_{r,\, s})\circ \text{ d} W^V(s), \ 0\le r\le t\le T. \end{equation} \begin{equation} \label{eq:B1} \tag{B} Y_{r,\, t} =x-\int_r^t \sigma(Y_{s,\, t})\circ \text{ d} W^V(s), \ 0\le r\le t\le T. \end{equation} As a solution of \eqref{eq:Ap} is to be constructed by ``inverting'' a solution of \eqref{eq:B1}, we need to add to the definition of a solution of \eqref{eq:Ap} or \eqref{eq:B1} the requirement of being a flow of homeomorphisms. This is the meaning of the following definition.
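\begin{rem} Let us note in passing why the inverses appear in the definition of $d$; this is a standard observation, recorded here for the reader's convenience. The metric $\rho$ alone metrizes uniform convergence on the compact sets of ${\mathbf R}^n$, under which a limit of homeomorphisms need not be invertible. If $(\varphi_k)_k$ is a $d$-Cauchy sequence, then $(\varphi_k)_k$ and $(\varphi_k^{-1})_k$ converge locally uniformly to continuous maps $\varphi$ and $\psi$, and passing to the limit in $\varphi_k\circ \varphi_k^{-1}=\varphi_k^{-1}\circ \varphi_k=\operatorname{Id}$ yields $\psi=\varphi^{-1}$, so that $\varphi$ belongs to ${\mathfrak G}$: this is the completeness asserted above. \end{rem}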
\begin{defn} By a solution of \eqref{eq:Ap}, we mean a measurable map \begin{align*} \Omega \times [0,T] \times [0,T] & \longrightarrow {\mathfrak G}\\ (\omega,\, r,\, t) & \longmapsto (x\mapsto X_{r,t}(\omega,x)) \end{align*} such that the following properties are satisfied~: \begin{enumerate} \item For any $0\le r \le t \le T$, for any $x\in {\mathbf R}^n$, $X_{r,\, t}(\omega,\, x)$ is $\sigma\{W^V(s), r\le s\le t\}$-measurable, \item For any $0\le r \le T$, for any $x\in {\mathbf R}^n$, the processes $(\omega,\, t)\mapsto X_{r,t}(\omega,\, x)$ and $(\omega,\, t)\mapsto X_{r,t}^{-1}(\omega,\, x)$ belong to ${\mathbb L}_{p,1}$ for some $p\ge 2$. \item For any $0\le r\le s\le t,$ for any $x\in {\mathbf R}^n$, the following identity is satisfied: \begin{equation*} X_{r,t}(\omega,\, x)=X_{s,t}(\omega,\, X_{r,s}(\omega,\, x)). \end{equation*} \item Equation \eqref{eq:Ap} is satisfied for any $0\le r\le t\le T$ ${\mathbf P}$-a.s. \end{enumerate} \end{defn} \begin{defn} By a solution of \eqref{eq:B1}, we mean a measurable map \begin{align*} \Omega \times [0,T] \times [0,T] & \longrightarrow {\mathfrak G}\\ (\omega,\, r,\, t) & \longmapsto (x\mapsto Y_{r,t}(\omega,x)) \end{align*} such that the following properties are satisfied~: \begin{enumerate} \item For any $0\le r \le t \le T$, for any $x\in {\mathbf R}^n$, $Y_{r,\, t}(\omega,\, x)$ is $\sigma\{W^V(s), r\le s\le t\}$-measurable, \item For any $0\le r \le T$, for any $x\in {\mathbf R}^n$, the processes $(\omega,\, r)\mapsto Y_{r,t}(\omega,\, x)$ and $(\omega,\, r)\mapsto Y_{r,t}^{-1}(\omega,\, x)$ belong to ${\mathbb L}_{p,1}$ for some $p\ge 2$. \item Equation \eqref{eq:B1} is satisfied for any $0\le r\le t\le T$ ${\mathbf P}$-a.s.
\item For any $0\le r\le s\le t,$ for any $x\in {\mathbf R}^n$, the following identity is satisfied: \begin{equation*} Y_{r,t}(\omega,\, x)=Y_{r,s}(\omega,\, Y_{s,t}(\omega,\, x)). \end{equation*} \end{enumerate} \end{defn} Finally, consider the equation, for any $0\le r\le t\le T$, \begin{equation} \label{eq:C} \tag{C} Z_{r,\, t}=x-\int_r^t \check{V}_T( \sigma\circ Z_{.,t}\ {\mathbf 1}_{[r,t]})(s)\text{ d} \check{B}^T(s) \end{equation} where $B$ is a standard $n$-dimensional Brownian motion. \begin{defn} By a solution of \eqref{eq:C}, we mean a measurable map \begin{align*} \Omega \times [0,T] \times [0,T] & \longrightarrow {\mathfrak G}\\ (\omega,\, r,\, t) & \longmapsto (x\mapsto Z_{r,t}(\omega,x)) \end{align*} such that the following properties are satisfied~: \begin{enumerate} \item For any $0\le r \le t\le T$, for any $x\in {\mathbf R}^n$, $Z_{r,\, t}(\omega,\, x)$ is $\sigma\{\check{B}^T (s),\, r\le s\le t \}$-measurable, \item For any $0\le r \le t \le T$, for any $x\in {\mathbf R}^n$, the processes $(\omega,\, r)\mapsto Z_{r,t}(\omega,\, x)$ and $(\omega,\, r)\mapsto Z_{r,t}^{-1}(\omega,\, x)$ belong to ${\mathbb L}_{p,1}$ for some $p\ge 2$. \item Equation \eqref{eq:C} is satisfied for any $0\le r\le t\le T$ ${\mathbf P}$-a.s. \end{enumerate} \end{defn} \begin{theorem} \label{thm:solution_C} Assume that $\check{V}_T$ is an ${E^0}$-causal map, continuous from ${\mathcal L}^p$ into ${\mathcal I}_{\alpha,p}$ for some $\alpha >0$ and $p\ge 4$ such that $\alpha p>1.$ Assume that $\sigma$ is Lipschitz continuous and sub-linear (see Eqn. \eqref{eq:22} for the definition). Then, there exists a unique solution to equation (\ref{eq:C}). Let $Z$ denote this solution. For any $(r,\, r^\prime)$, \begin{equation*} \esp{|Z_{r,T}-Z_{r^\prime, T}|^p}\le c |r-r^\prime|^{p\eta}.
\end{equation*} Moreover, \begin{equation*} (\omega,\, r)\mapsto Z_{r,s}(\omega, Z_{s,t}(\omega,\, x))\in {\mathbb L}_{p,1}, \text{ for any } r\le s\le t\le T. \end{equation*} \end{theorem} Since this proof needs several lemmas, we defer it to Section \ref{sec:equation-refeq:c}. \begin{theorem} \label{thm:correspondance_solution_B_C} Assume that $\check{V}_T$ is an ${E^0}$-causal map, continuous from ${\mathcal L}^p$ into ${\mathcal I}_{\alpha,p}$ for some $\alpha >0$ and $p\ge 2$ such that $\alpha p>1.$ For fixed $T$, there exists a bijection between the set of solutions of Equation (\ref{eq:B1}) on $[0,T]$ and the set of solutions of Equation (\ref{eq:C}). \end{theorem} \begin{proof} Set \begin{equation*} Z_{r,T}(\check{\omega},\, x)=Y_{T-r,T}(\Theta_T^{-1}(\check{\omega}),\, x) \end{equation*} or equivalently \begin{equation}\label{eq:20} Y_{r,T}(\omega,\, x)=Z_{T-r,T}(\Theta_T(\omega),\, x). \end{equation} According to Theorem \ref{thm:int_strato_inverse}, $Y$ satisfies (\ref{eq:B1}) if and only if $Z$ satisfies (\ref{eq:C}). The regularity properties are immediate since ${\mathcal L}^p$ is stable by $\tau_{\scriptscriptstyle{T}}.$ \end{proof} The first part of the next result is then immediate. \begin{corollary} \label{thm:solution_B} Assume that $\check{V}_T$ is an ${E^0}$-causal map, continuous from ${\mathcal L}^p$ into ${\mathcal I}_{\alpha,p}$ for some $\alpha >0$ and $p\ge 2$ such that $\alpha p>1.$ Then, Equation (\ref{eq:B1}) has one and only one solution and, for any $0\le r\le s\le t,$ for any $x\in {\mathbf R}^n$, the following identity is satisfied: \begin{equation}\label{eq:14} Y_{r,t}(\omega,\, x)=Y_{r,s}(\omega,\, Y_{s,t}(\omega,\, x)). \end{equation} \end{corollary} \begin{proof} According to Theorems \ref{thm:correspondance_solution_B_C} and \ref{thm:solution_C}, \eqref{eq:B1} has at most one solution since \eqref{eq:C} has a unique solution.
As to the existence, points (1) to (3) are immediately deduced from the corresponding properties of $Z$ and Equation \eqref{eq:20}. According to Theorem \ref{thm:solution_C}, $(\omega,\, r)\mapsto Y_{r,s}(\omega,\, Y_{s,t}(\omega,\, x))$ belongs to ${\mathbb L}_{p,1}$, hence we can apply the substitution formula and we get: \begin{multline}\label{eq:13} Y_{r,s}(\omega,\, Y_{s,t}(\omega,\, x))=Y_{s,t}(\omega,\, x)-\left.\int_r^s \sigma(Y_{\tau,s}(\omega,\, x))\circ \text{ d} W^V(\tau)\right|_{ x=Y_{s,t}(\omega,\, x)}\\ =x-\int_s^t \sigma(Y_{\tau,t}(\omega,\, x))\circ \text{ d} W^V(\tau)\\ -\int_r^s \sigma(Y_{\tau,s}(\omega,\, Y_{s,t}(\omega,\, x)))\circ \text{ d} W^V(\tau). \end{multline} Set \begin{equation*} R_{\tau,t}= \begin{cases} Y_{\tau,t}(\omega,\, x)& \text{ for } s\le \tau \le t\\ Y_{\tau,s}(\omega,\, Y_{s,t}(\omega,\, x)) & \text{ for } r\le \tau \le s. \end{cases} \end{equation*} Then, in view of (\ref{eq:13}), $R$ appears to be the unique solution of (\ref{eq:B1}) and thus $R_{\tau,t}(\omega,\, x)=Y_{\tau,t}(\omega,\, x)$ for any $r\le \tau\le t$; taking $\tau=r$ yields \eqref{eq:14}. Point (4) is thus proved. \end{proof} \begin{corollary} For $x$ fixed, the random field $(Y_{r,t}(x),\, 0\le r\le t\le T) $ admits a continuous version. Moreover, \begin{equation*} \esp{|Y_{r,s}(x)-Y_{r^\prime,s^\prime}(x)|^p}\le c (1+|x|^p) ( |s^\prime-s|^{p\eta}+|r-r^\prime|^{p\eta}). \end{equation*} We still denote by $Y$ this continuous version. \end{corollary} \begin{proof} Without loss of generality, assume that $s\le s^\prime$ and remark that $Y_{s,\, s^\prime}(x)$ thus belongs to $\sigma\{\check{B}^T_u, \, u\ge s\}$.
\begin{multline*} \esp{|Y_{r,s}(x)-Y_{r^\prime,s^\prime}(x)|^p}\\ \begin{aligned} &\le c\left( \esp{|Y_{r,s}(x)-Y_{r^\prime,s}(x)|^p}+\esp{|Y_{r^\prime,s}(x)-Y_{r^\prime,s^\prime}(x)|^p}\right)\\ &=c\left( \esp{|Y_{r,s}(x)-Y_{r^\prime,s}(x)|^p}+\esp{|Y_{r^\prime,s}(x)-Y_{r^\prime,s}(Y_{s,s^\prime}(x))|^p}\right)\\ &=c\left( \esp{|Z_{s-r,s}(x)-Z_{s-r^\prime,s}(x)|^p}+\esp{|Z_{s-r^\prime,s}(x)-Z_{s-r^\prime,s}(Y_{s,s^\prime}(x))|^p}\right). \end{aligned} \end{multline*} According to Theorem \ref{thm:continuite_en_temps_Z}, \begin{equation}\label{eq:15} \esp{|Z_{s-r,s}(x)-Z_{s-r^\prime,s}(x)|^p}\le c|r-r^\prime|^{p\eta}(1+|x|^p). \end{equation} In view of Theorem \ref{thm:int_strato_inverse}, the stochastic integral which appears in Equation (\ref{eq:C}) is also a Stratonovitch integral, hence we can apply the substitution formula and write \begin{equation*} Z_{s-r^\prime,s}(Y_{s,s^\prime}(x))=\left. Z_{s-r^\prime,s}(y)\right|_{y=Y_{s,s^\prime}(x)}. \end{equation*} Thus, we can apply Theorem \ref{thm:continuite_en_temps_Z} and we obtain \begin{equation*} \esp{|Z_{s-r^\prime,s}(x)-Z_{s-r^\prime,s}(Y_{s,s^\prime}(x))|^p}\le c\esp{|x-Y_{s,s^\prime}(x)|^p}. \end{equation*} The right-hand side of this equation is in turn equal to \begin{math} \esp{|Z_{0,s^\prime}-Z_{s^\prime-s,s^\prime}(x)|^p}, \end{math} thus we get \begin{equation}\label{eq:16} \esp{|Z_{s-r^\prime,s}(x)-Z_{s-r^\prime,s}(Y_{s,s^\prime}(x))|^p}\le c (1+|x|^p) |s^\prime-s|^{p\eta}. \end{equation} Combining (\ref{eq:15}) and (\ref{eq:16}) gives \begin{equation*} \esp{|Y_{r,s}(x)-Y_{r^\prime,s^\prime}(x)|^p}\le c (1+|x|^p) ( |s^\prime-s|^{p\eta}+|r-r^\prime|^{p\eta}), \end{equation*} hence the result. \end{proof} Thus, we have the main result of this paper.
\begin{theorem} \label{thm:solution_A} Assume that $\check{V}_T$ is an ${E^0}$-causal map, continuous from ${\mathcal L}^p$ into ${\mathcal I}_{\alpha,p}$ for some $\alpha >0$ and $p\ge 4$ such that $\alpha p>1.$ Then, Equation (\ref{eq:Ap}) has one and only one solution. \end{theorem} \begin{proof} Under the hypothesis, we know that Equation (\ref{eq:B1}) has a unique solution which satisfies (\ref{eq:14}). By definition of a solution of (\ref{eq:B1}), the process $Y^{-1}:(\omega,\, s)\mapsto Y_{s,t}^{-1}(\omega,\, x)$ belongs to ${\mathbb L}_{p,1}$, hence we can apply the substitution formula. Following the lines of proof of the previous theorem, we see that $Y^{-1}$ is a solution of (\ref{eq:Ap}). In the reverse direction, two distinct solutions of (\ref{eq:Ap}) would give rise to two solutions of (\ref{eq:B1}) by the same principles. Since this is impossible in view of Corollary \ref{thm:solution_B}, Equation (\ref{eq:Ap}) has at most one solution. \end{proof} \section{Technical proofs} \label{sec:proofs} \subsection{Substitution formula} \label{sec:substitution-formula} The proof of Theorem \ref{thm:int_strato_inverse} relies on several lemmas, including one known in anticipative calculus as the substitution formula, cf. \cite{nualart.book}. \begin{theorem} \label{thm:existence_trace} Assume that Hypothesis \ref{hypA}($p,\, \eta$) holds. Let $u$ belong to ${\mathbb L}_{p,1}.$ If $V\nabla^{\text{\tiny W}} u$ is of trace class, then \begin{equation*} \int_0^T D^{\text{\tiny W}} u(s)\text{ d} s=\operatorname{trace}(V\nabla^{\text{\tiny W}} u). \end{equation*} Moreover, \begin{equation*} \esp{\left|\operatorname{trace}(V\nabla^{\text{\tiny W}} u)\right|^p}\le \, c\, \|u\|_{{\mathbb L}_{p,1}}^p. \end{equation*} \end{theorem} \begin{proof} For each $k$, let $(\phi_{k,\, m},\, m=1,\cdots,\, 2^k)$ be the functions \begin{math} \phi_{k,\, m}=2^{k/2}{\mathbf 1}_{[(m-1)2^{-k},\, m2^{-k})}.
\end{math} Let $P_k$ be the projection onto the span of the $\phi_{k,\, m}$. Since $V\nabla^{\text{\tiny W}} u$ is of trace class, we have (see \cite{MR2154153}) $$\operatorname{trace}(V\nabla^{\text{\tiny W}} u)=\lim_{k\to +\infty} \operatorname{trace}(P_k \, V\nabla^{\text{\tiny W}} u \, P_k).$$ Now, \begin{align*} \operatorname{trace}(P_k \, V\nabla^{\text{\tiny W}} u \, P_k)&=\sum_{m=1}^{2^k} (V\nabla^{\text{\tiny W}} u,\ \phi_{k,m}\otimes \phi_{k,m})_{{\mathcal L}^2\otimes {\mathcal L}^2}\\ &=\sum_{m=1}^{2^k} 2^k \int_{(m-1)2^{-k}\wedge t}^{ m2^{-k}\wedge t } \int_{(m-1)2^{-k}\wedge t }^{ m2^{-k}\wedge t} V(\nabla^{\text{\tiny W}} _ru)(s) \text{ d} s\text{ d} r. \end{align*} According to the proof of Theorem \ref{thm:strato_integrable}, the first part of the theorem follows. The second part is then a rewriting of (\ref{eq:121}). \end{proof} For $p\ge 1$, let $\Gamma_p$ be the set of random fields: \begin{align*} u\, :\, {\mathbf R}^m &\longrightarrow {\mathbb L}_{p,1}\\ x&\longmapsto ((\omega,\, s)\mapsto u(\omega, \, s,\, x)) \end{align*} equipped with the semi-norms \begin{equation*} p_K(u)=\sup_{x\in K} \| u(x)\|_{{\mathbb L}_{p,1}} \end{equation*} for any compact $K$ of ${\mathbf R}^m$. \begin{corollary}[Substitution formula] \label{cor:substitution} Assume that Hypothesis \ref{hypA}($p,\, \eta$) holds. Let $\{u(x),\, x\in {\mathbf R}^m\}$ belong to $\Gamma_p$. Let $F$ be a random variable such that $((\omega, \, s)\mapsto u(\omega, \, s,\, F))$ belongs to ${\mathbb L}_{p,1}$. Then, \begin{equation}\label{eq:12} \int_0^T u(s,F)\circ \text{ d} W^V(s) =\left.\int_0^T u(s,x)\circ \text{ d} W^V(s)\right|_{x=F}. \end{equation} \end{corollary} \begin{proof} Simple random fields of the form \begin{equation*} u(\omega,\, s,\, x)=\sum_{l=1}^K H_l(x)u_l(\omega,\, s) \end{equation*} with $H_l$ smooth and $u_l$ in ${\mathbb L}_{p,1}$ are dense in $\Gamma_p$.
In view of (\ref{eq:3}), it is sufficient to prove the result for such random fields. By linearity, we can reduce the proof to random fields of the form $H(x)u(\omega,\, s).$ Now, for any partition $\pi$, \begin{multline*} \delta^{\text{\tiny W}}\Bigl(\sum_{t_i\in\pi}\frac{1}{\theta_i} \int_{t_i\wedge t}^{t_{i+1}\wedge t} \kern-3pt H( F)V(u(\omega, .))(r) \text{ d} r\, {\mathbf 1}_{[t_i,\, t_{i+1})}\Bigr)\\ =H( F) \delta^{\text{\tiny W}}\Bigl(\sum_{t_i\in\pi}\frac{1}{\theta_i} \int_{t_i\wedge t}^{t_{i+1}\wedge t} \kern-3pt V(u(\omega, .))(r) \text{ d} r\, {\mathbf 1}_{[t_i,\, t_{i+1})}\Bigr)\\ -\sum_{t_i\in\pi}\frac{1}{\theta_i}\int_{t_i\wedge t}^{t_{i+1}\wedge t}\int_{t_i\wedge t}^{t_{i+1}\wedge t} H^\prime(F)\nabla^{\text{\tiny W}}_s F \, Vu(r)\text{ d} s\text{ d} r. \end{multline*} On the other hand, \begin{align*} \nabla^{\text{\tiny W}}_s (H(F)u(\omega,\, r))&=H^\prime(F)\nabla^{\text{\tiny W}}_s F \, u(r), \end{align*} hence \begin{multline*} \sum_{t_i\in\pi}\frac{1}{\theta_i} \iint\limits_{[t_i\wedge t,t_{i+1}\wedge t]^2} V(\nabla^{\text{\tiny W}} _rH(F)u)(s) \text{ d} s\text{ d} r\\ =\sum_{t_i\in\pi}\frac{1}{\theta_i} \iint\limits_{[t_i\wedge t,t_{i+1}\wedge t]^2} H^\prime(F)\nabla^{\text{\tiny W}}_s F \, Vu(r) \text{ d} s\text{ d} r. \end{multline*} According to Theorem \ref{thm:strato_integrable}, Eqn. (\ref{eq:12}) is satisfied for simple random fields. \end{proof} \begin{defn} \label{def:integrale} For any $0\le r\le t\le T$, for $u$ in ${\mathbb L}_{p,\, 1}$, we define \begin{math} \int_r^t u(s)\circ \text{ d} W^V(s) \end{math} as \begin{align*} \int_r^t u(s)\circ \text{ d} W^V(s)&=\int_0^t u(s)\circ \text{ d} W^V(s)-\int_0^r u(s)\circ \text{ d} W^V(s)\\ &= \int_0^T (e_tu)(s)\circ \text{ d} W^V(s)-\int_0^T (e_ru)(s)\circ \text{ d} W^V(s)\\ &=\delta^{\text{\tiny W}}(V(e_t-e_r)u)+\int_r^t D^{\text{\tiny W}} u(s)\text{ d} s. \end{align*} \end{defn} By the very definition of trace-class operators, the next lemma is straightforward.
\begin{lemma} \label{lem:taut_trace} Let $A$ and $B$ be two continuous maps from ${\mathcal L}^2([0,\, T];\, {\mathbf R}^n)$ into itself. Then, the map $\tau_{\scriptscriptstyle{T}} A\otimes B$ (resp. $A\tau_{\scriptscriptstyle{T}} \otimes B$) is of trace class if and only if the map $A\otimes \tau_{\scriptscriptstyle{T}} B$ (resp. $A\otimes B \tau_{\scriptscriptstyle{T}}$) is of trace class. Moreover, in such a situation, \begin{equation*} \operatorname{trace}(\tau_{\scriptscriptstyle{T}} A\otimes B)=\operatorname{trace}( A\otimes \tau_{\scriptscriptstyle{T}} B) \text{, resp. } \operatorname{trace}( A\tau_{\scriptscriptstyle{T}} \otimes B)=\operatorname{trace}( A\otimes B\tau_{\scriptscriptstyle{T}}). \end{equation*} \end{lemma} The next corollary follows by a classical density argument. \begin{corollary} \label{lem:taut} Let $u\in {\mathbb L}_{2,1}$ be such that $\nabla^{\text{\tiny W}}\otimes \tau_{\scriptscriptstyle{T}} V u$ and $\nabla^{\text{\tiny W}}\otimes V \tau_{\scriptscriptstyle{T}} u$ are of trace class. Then, $\tau_{\scriptscriptstyle{T}} \nabla^{\text{\tiny W}}\otimes V u$ and $\nabla^{\text{\tiny W}} \tau_{\scriptscriptstyle{T}} \otimes V u $ are of trace class. Moreover, we have: \begin{multline*} \operatorname{trace}(\nabla^{\text{\tiny W}}\otimes \tau_{\scriptscriptstyle{T}} V u)=\operatorname{trace}(\tau_{\scriptscriptstyle{T}} \nabla^{\text{\tiny W}}\otimes V u)\\ \text{ and } \operatorname{trace}(\nabla^{\text{\tiny W}}\otimes (V \tau_{\scriptscriptstyle{T}}) u)=\operatorname{trace}( \nabla^{\text{\tiny W}} \tau_{\scriptscriptstyle{T}} \otimes V u). \end{multline*} \end{corollary} \begin{proof}[Proof of Theorem~\ref{thm:int_strato_inverse}] We first study the divergence term.
In view of Theorem \ref{thm:taut_delta}, we have \begin{align*} \delta^B (V(e_{T-r}-e_{T-t}) \tau_{\scriptscriptstyle{T}} \check{u} \circ \Theta_T)&=\delta^B (V\tau_{\scriptscriptstyle{T}} (e_t-e_r) \check{u} \circ \Theta_T)\\ &= \delta^B (\tau_{\scriptscriptstyle{T}} \check{V}_T (e_t-e_r) \check{u} \circ \Theta_T)\\ &=\check{\delta}(\check{V}_T (e_t-e_r) \check{u})(\check{\omega})\\ &=\int_r^t \check{V}_T ({\mathbf 1}_{[r,\, t]}\check{u})(s)\text{ d} \check{B}^T(s). \end{align*} According to Lemma \ref{lem:causalite}, $(\check{V}_T)^*$ is ${\check{E}^0}$-causal and, according to Lemma \ref{lem:strict-causality}, it is strictly ${\check{E}^0}$-causal. Thus, Theorem \ref{thm:trace_zero} implies that $\check{\nabla} V (e_t-e_r) \check{u}$ is of trace class and quasi-nilpotent. Hence, Lemma \ref{lem:taut} implies that \begin{equation*} \tau_{\scriptscriptstyle{T}} \check{V}_T \tau_{\scriptscriptstyle{T}} \otimes \tau_{\scriptscriptstyle{T}} \check{\nabla}\tau_{\scriptscriptstyle{T}} (e_t-e_r) \check{u} \end{equation*} is trace-class and quasi-nilpotent. Now, according to Theorem \ref{thm:cdelta}, we have \begin{equation*} \tau_{\scriptscriptstyle{T}} \check{V}_T \tau_{\scriptscriptstyle{T}} \otimes \tau_{\scriptscriptstyle{T}} \check{\nabla}\tau_{\scriptscriptstyle{T}} (e_t-e_r) \check{u}=V(\nabla \tau_{\scriptscriptstyle{T}} (e_{T-r}-e_{T-t})\check{u} \circ \Theta_T). \end{equation*} According to Theorem \ref{thm:strato_integrable}, we have proved (\ref{eq:9}). \end{proof} \subsection{The forward equation} \label{sec:equation-refeq:c} \begin{lemma} \label{lem:borne_VVsigma} Assume that Hypothesis \ref{hypA}$(p,\, \eta)$ holds and that $\sigma$ is Lipschitz continuous.
Then, for any $0\le a\le b\le T$, the map \begin{align*} \check{V}_T\circ \sigma \, :\, C([0,T], \, {\mathbf R}^n)&\longrightarrow C([0,T], \, {\mathbf R}^n)\\ \phi &\longmapsto \check{V}_T(\sigma\circ \phi\ {\mathbf 1}_{[a,b]}) \end{align*} is Lipschitz continuous and Gâteaux differentiable. Its differential is given by: \begin{equation}\label{eq:18} d\check{V}_T\circ\sigma(\phi)[\psi]=\check{V}_T(\sigma^\prime\circ \phi \ \psi). \end{equation} Assume furthermore that $\sigma$ is sub-linear, i.e., \begin{equation}\label{eq:22} |\sigma(x)|\le c ( 1+|x|), \text{ for any }x\in {\mathbf R}^n. \end{equation} Then, for any $\psi\in C([0,T],\, {\mathbf R}^n)$, for any $t\in [0,T]$, \begin{align*} |\check{V}_T(\sigma\circ \psi)(t)| &\le c\, T^{\eta} \Bigl(1+\int_0^t |\psi(s)|^p \text{ d} s\Bigr)^{1/p}\\ &\le c\, T^{\eta+1/p} (1+\| \psi\|_\infty). \end{align*} \end{lemma} \begin{proof} Let $\psi$ and $\phi$ be two continuous functions. Since $ C([0,T], \, {\mathbf R}^n)$ is continuously embedded in ${\mathcal L}^p$, $\check{V}_T(\sigma\circ\psi-\sigma\circ \phi)$ belongs to $\operatorname{Hol}(\eta)$. Moreover, \begin{align*} \sup_{t\le T} | \check{V}_T(\sigma\circ \psi\ {\mathbf 1}_{[a,b]})(t)-\check{V}_T(\sigma\circ \phi\ {\mathbf 1}_{[a,b]})(t) | & \le c\,\|\check{V}_T((\sigma\circ\psi-\sigma\circ \phi) \ {\mathbf 1}_{[a,b]})\|_{\operatorname{Hol}(\eta)}\\ &\le c \,\|(\sigma\circ\psi-\sigma\circ \phi)\ {\mathbf 1}_{[a,b]}\|_{{\mathcal L}^p}\\ &\le c \,\|\phi-\psi\|_{{\mathcal L}^p([a,\, b])}\\ &\le c \,\sup_{t\le T} | \psi(t)-\phi(t) |, \end{align*} since $\sigma$ is Lipschitz continuous. Let now $\psi$ and $\phi$ be two continuous functions on $[0,T]$. Since $\sigma$ is Lipschitz continuous, we have \begin{equation*} \sigma(\psi(t)+\varepsilon \phi(t))=\sigma(\psi(t))+\varepsilon\int_0^1 \sigma^\prime(\psi(t)+u\varepsilon \phi(t))\phi(t)\text{ d} u.
\end{equation*} Moreover, since $\sigma$ is Lipschitz continuous, $\sigma^\prime$ is bounded and \begin{equation*} \int_0^T \left| \int_0^1 \sigma^\prime(\psi(t)+u\varepsilon \phi(t))\phi(t)\text{ d} u\right|^p \text{ d} t \le c \, T\, \|\phi\|_\infty^p. \end{equation*} This means that $(t\mapsto \int_0^1 \sigma^\prime(\psi(t)+u\varepsilon \phi(t))\phi(t)\text{ d} u)$ belongs to ${\mathcal L}^p.$ Hence, according to Hypothesis \ref{hypA}, \begin{equation*} \| \check{V}_T(\int_0^1 \sigma^\prime(\psi(.)+u\varepsilon \phi(.))\phi(.)\text{ d} u)\|_\infty\le c\, T^{1/p}\, \|\phi\|_\infty. \end{equation*} Thus, \begin{equation*} \lim_{\varepsilon\to 0} \varepsilon^{-1}(\check{V}_T(\sigma\circ (\psi+\varepsilon\phi))-\check{V}_T(\sigma\circ \psi)) \text{ exists,} \end{equation*} and $\check{V}_T\circ \sigma$ is Gâteaux differentiable with differential given by \eqref{eq:18}. Since $\sigma\circ\psi$ belongs to $C([0,T],\, {\mathbf R}^n)$, according to Hypothesis \ref{hypA}, we have: \begin{align*} |\check{V}_T(\sigma\circ \psi)(t)|&\le c\left(\int_0^t s^{\eta p}|\sigma(\psi(s))|^p \text{ d} s\right)^{1/p}\\ &\le cT^\eta \left(\int_0^t (1+|\psi(s)|^p) \text{ d} s\right)^{1/p}\\ &\le c T^{\eta+1/p} (1+\|\psi\|_\infty^p )^{1/p}\\ &\le c T^{\eta+1/p} (1+\|\psi\|_\infty). \end{align*} The proof is thus complete. \end{proof} Following \cite{MR658690}, we then have the following nontrivial result. \begin{theorem} \label{thm:continuite_en_temps_Z} Assume that Hypothesis \ref{hypA}$(p,\, \eta)$ holds and that $\sigma$ is Lipschitz continuous. Then, there exists one and only one measurable map from $\Omega \times [0,T] \times [0,T]$ into ${\mathfrak G}$ which satisfies the first two points of the definition of a solution of \eqref{eq:C}. Moreover, \begin{equation*} \esp{|Z_{r,\, t}(x)-Z_{r^\prime,\, t} (x^\prime)|^p}\le c (1+|x|^p\vee |x^\prime|^p)\, \left(|r-r^\prime|^{p\eta}+|x-x^\prime|^p\right) \end{equation*} and for any $x\in {\mathbf R}^n$, for any $0\le r\le t \le T,$ we have \begin{equation*} \esp{|Z_{r,t}(x)|^p}\le c (1+|x|^p)e^{c T^{\eta p +1}}.
\end{equation*} Note that even if $x$ and $x^\prime$ are replaced by $\sigma\{\check{B}^T(u),\, t\le u\}$-measurable random variables, the last estimate still holds. \end{theorem} \begin{proof} Existence, uniqueness and homeomorphy of a solution of \eqref{eq:C} follow from \cite{MR658690}. The regularity with respect to $r$ and $x$ is obtained, as usual, by the BDG inequality and Gronwall's lemma. For $x$ or $x^\prime$ random, use the independence of $\sigma\{\check{B}^T(u),\, t\le u\}$ and $\sigma\{\check{B}^T(u),\, r\wedge r^\prime\le u \le t \}$. \end{proof} \begin{theorem} Assume that Hypothesis \ref{hypA}$(p,\, \eta)$ holds and that $\sigma$ is Lipschitz continuous and sub-linear. Then, for any $x\in {\mathbf R}^n$, for any $0\le r\le s\le t\le T$, $(\omega,\, r)\mapsto Z_{r,s}(\omega, \, Z_{s,t}(x))$ and $(\omega,\, r)\mapsto Z_{r,t}^{-1}(\omega, \, x)$ belong to ${\mathbb L}_{p,1}.$ \end{theorem} \begin{proof} According to \cite[Theorem 3.1]{hirsch88}, the differentiability of $\omega\mapsto Z_{r,t}(\omega, \, x)$ is ensured. Furthermore, \begin{equation*} \nabla_u Z_{r,t}=- \check{V}_T(\sigma\circ Z_{.,t}{\mathbf 1}_{[r,\, t]})(u)-\int_r^t \check{V}_T (\sigma^\prime(Z_{.,t} ).\nabla_u Z_{.,t}{\mathbf 1}_{[r,\, t]})(s) \text{ d} \check{B}(s), \end{equation*} where $\sigma^\prime$ is the differential of $\sigma.$ For $M>0$, let \begin{equation*} \xi_M=\inf\{\tau,\, |\nabla_u Z_{\tau,\, t}|^p\ge M\} \text{ and } Z^M_{\tau,\, t}=Z_{\tau\vee \xi_M,\, t}.
\end{equation*} Since $\check{V}_T$ is continuous from ${\mathcal L}^p$ into itself and $\sigma$ is Lipschitz, according to BDG inequality, for $r\le u $, \begin{multline*} \esp{ |\nabla_u Z^M_{r,t}|^p }\\ \begin{aligned} & \le c \esp{ |\check{V}_T(\sigma\circ Z^M_{.,t}{\mathbf 1}_{[r,\, t]})(u)|^p } +c \, \esp{ \int_r^t |\check{V}_T(\sigma^\prime( Z^M_{.,t} )\, \nabla_u Z^M_{., t}{\mathbf 1}_{[r,\, t]})(s)|^p \text{ d} s}\\ &\le c \left(1+\esp{\int_r^t u^{p\eta} \int_r^u |Z_{\tau,t}|^p \text{ d} \tau\text{ d} u} + \esp{\int_r^t s^{p\eta} \int_r^s |\nabla_u Z^M_{\tau ,t}|^p\text{ d}\tau\, \text{ d} s} \right)\\ &\le c \left(1+\esp{\int_r^t |Z_{\tau,t}|^p (t^{p\eta+1}-\tau^{p\eta+1})\text{ d} \tau} + \esp{\int_r^t |\nabla_u Z^M_{\tau ,t}|^p (t^{p\eta+1}-\tau^{p\eta+1})\text{ d}\tau}\right)\\ &\le ct^{p\eta+1} \left(1+\esp{\int_r^t |Z_{\tau,t}|^p \text{ d} \tau} + \esp{\int_r^t |\nabla_u Z^M_{\tau ,t}|^p \text{ d}\tau}\right). \end{aligned} \end{multline*} Then, Gronwall Lemma entails that \begin{equation*} \esp{|\nabla_u Z^M_{r,t}|^p }\le c\, \left(1+\esp{\int_r^t |Z_{\tau,t}|^p \text{ d} \tau}\right), \end{equation*} hence by Fatou lemma, \begin{equation*} \esp{|\nabla_u Z_{r,t}|^p }\le c\, \left(1+\esp{\int_r^t |Z_{\tau,t}|^p \text{ d} \tau}\right). \end{equation*} The integrability of $\esp{|\nabla_u Z_{r,t}|^p }$ with respect to $u$ follows. Now, since $0\le r\le s\le t\le T$, $Z_{s,t}(x)$ is independent of $Z_{r,s}(x),$ thus the previous computations still hold and $(\omega,\, r)\mapsto Z_{r,s}(\omega, \, Z_{s,t}(x))$ belongs to ${\mathbb L}_{p,1}$.
According to \cite{MR810975}, to prove that $Z_{r,t}^{-1}(x)$ belongs to ${\mathbb D}_{p,1}$, we need to prove \begin{enumerate} \item for every $h\in {\mathcal L}^2$, there exists an absolutely continuous version of the process $(\epsilon\mapsto Z_{r,t}^{-1}(\omega+\epsilon h,\, x))$, \item there exists $D Z_{r,t}^{-1}$, an ${\mathcal L}^2$-valued random variable such that for every $h\in {\mathcal L}^2$, \begin{equation*} \frac{1}{\epsilon}( Z_{r,t}^{-1}(\omega+\epsilon h,\, x)-Z_{r,t}^{-1}(\omega,\, x))\xrightarrow{\epsilon\to 0} \int_0^T D Z_{r,t}^{-1}(s) h(s)\text{ d} s, \end{equation*} where the convergence holds in probability, \item $D Z_{r,t}^{-1}$ belongs to ${\mathcal L}^2(\Omega,\, {\mathcal L}^2)$. \end{enumerate} We first show that \begin{equation}\label{eq:19} \esp{ \left| \frac{\partial Z_{r,t}}{\partial x}(\omega,\, Z_{r,t}^{-1}(x))\right|^{-p} }\text{ is finite.} \end{equation} Recall that \begin{equation*} \frac{\partial Z_{r,t}}{\partial x}(\omega,\, x)=\operatorname{Id} + \int_r^t \check{V}_T(\sigma^\prime(Z_{.,t}(x)) \frac{\partial Z_{.,t}(\omega,\, x)}{\partial x})(s)\text{ d} \check{B}(s). \end{equation*} Let $\Theta_v=\sup_{v\le u\le t} |\partial_x Z_{u,\, t}(x)|$. The same kind of computations as above entails that (for the sake of brevity, we do not detail the localisation procedure as it is similar to the previous one): \begin{multline*} \esp{\Theta_v^{2q}} \le c+c\, \esp{\int_v^t \Theta_s^{2(q-1)} \left(\int_v^s |\partial_x Z_{\tau,\, t}(x)|^{p}\text{ d}\tau\right)^{2/p}\text{ d} s}\\ +c\, \esp{\left(\int_v^t \Theta_s^{q-2} \left(\int_v^s |\partial_x Z_{\tau,\, t}(x)|^{p}\text{ d}\tau\right)^{2/p}\text{ d} s\right)^2}. \end{multline*} Hence, \begin{equation*} \esp{\Theta_v^{2q}} \le c\left(1+\int_v^t \esp{\Theta_s^{2q}}\text{ d} s\right), \end{equation*} and \eqref{eq:19} follows by Fatou and Gronwall lemmas.
Since $Z_{r,t}(\omega, \, Z_{r,t}^{-1}(\omega, \, x))=x,$ the implicit function theorem implies that $Z_{r,\, t}^{-1}(x)$ satisfies the first two properties and that \begin{equation*} {\nabla}Z_{r,t}(\omega,\, Z_{r,t}^{-1}(x))+\frac{\partial Z_{r,t}}{\partial x}(\omega,\, Z_{r,t}^{-1}(x))\,{\nabla}Z^{-1}_{r,t}(\omega,\, x)=0. \end{equation*} It follows by Hölder inequality and Equation \eqref{eq:19} that \begin{equation*} \| DZ_{r,t}^{-1}(x)\|_{p,1}\le c \| Z_{r,t}(x)\|_{2p,1}\|(\partial_x Z_{r,\, t}(x))^{-1}\|_{2p}, \end{equation*} hence $Z_{r,\, t}^{-1}$ belongs to ${\mathbb L}_{p,\, 1}.$ \end{proof} \appendix \end{document}
Iron deficiency is uncommon among lactating women in urban Nepal, despite a high risk of inadequate dietary iron intake

Published online by Cambridge University Press: 08 April 2014

Sigrun Henjum*, Mari Manger, Eli Skeie, Manjeswori Ulak, Andrew L. Thorne-Lyman, Ram Chandyo, Prakash S. Shrestha, Lindsey Locks, Rune J. Ulvik, Wafaie W. Fawzi and Tor A. Strand

Sigrun Henjum*: Oslo and Akershus University College of Applied Sciences, PO Box 4, St Olavs Plass, N-0130 Oslo, Norway
Mari Manger: Centre for International Health, University of Bergen, PO Box 7800, 5020 Bergen, Norway
Eli Skeie
Manjeswori Ulak: Department of Child Health, Institute of Medicine, Tribhuvan University, Kathmandu, Nepal
Andrew L. Thorne-Lyman: Department of Nutrition, Harvard School of Public Health, 665 Huntington Avenue, Boston, MA 02115, USA; Center on Globalization and Sustainable Development, the Earth Institute, Columbia University, New York, NY, USA
Ram Chandyo: Centre for International Health, University of Bergen, PO Box 7800, 5020 Bergen, Norway; Department of Child Health, Institute of Medicine, Tribhuvan University, Kathmandu, Nepal
Prakash S. Shrestha
Lindsey Locks: Department of Nutrition, Harvard School of Public Health, 665 Huntington Avenue, Boston, MA 02115, USA
Rune J. Ulvik: Department of Clinical Science, University of Bergen, 5020 Bergen, Norway; Laboratory of Clinical Biochemistry, Haukeland University Hospital, 5021 Bergen, Norway
Wafaie W. Fawzi: Department of Nutrition; Department of Epidemiology; and Department of Global Health and Population, Harvard School of Public Health, 665 Huntington Avenue, Boston, MA 02115, USA
Tor A. Strand: Centre for International Health, University of Bergen, PO Box 7800, 5020 Bergen, Norway; Division of Medical Services, Innlandet Hospital Trust, 2629 Lillehammer, Norway

*Corresponding author: S. Henjum, email [email protected]

Abstract

The main objective of the present study was to examine the association between dietary Fe intake and dietary predictors of Fe status and Hb concentration among lactating women in Bhaktapur, Nepal. We included 500 randomly selected lactating women in a cross-sectional survey. Dietary information was obtained through three interactive 24 h recall interviews including personal recipes. Concentrations of Hb and plasma ferritin and soluble transferrin receptors were measured. The daily median Fe intake from food was 17·5 mg, and 70 % of the women were found to be at risk of inadequate dietary Fe intake. Approximately 90 % of the women had taken Fe supplements in pregnancy. The prevalence of anaemia was 20 % (Hb levels < 123 g/l) and that of Fe deficiency was 5 % (plasma ferritin levels < 15 μg/l). In multiple regression analyses, there was a weak positive association between dietary Fe intake and body Fe (β 0·03, 95 % CI 0·014, 0·045). Among the women with children aged < 6 months, but not those with older infants, intake of Fe supplements in pregnancy for at least 6 months was positively associated with body Fe (P for interaction < 0·01).
Due to a relatively high dietary intake of non-haem Fe combined with low bioavailability, a high proportion of the women in the present study were at risk of inadequate intake of Fe. The low prevalence of anaemia and Fe deficiency may be explained by the majority of the women consuming Fe supplements in pregnancy.

Keywords: Iron deficiency; Lactating women; Iron intakes; Plasma ferritin; Soluble transferrin receptors

British Journal of Nutrition, Volume 112, Issue 1, 14 July 2014, pp. 132-141. DOI: https://doi.org/10.1017/S0007114514000592 Copyright © The Authors 2014

Fe deficiency is the most prevalent micronutrient deficiency globally. It is an important cause of anaemia( 1 ) that affects 42 % of pregnant women and nearly one-third of all non-pregnant women of reproductive age( Reference McLean, Cogswell and Egli 2 ). Low dietary Fe content, often combined with low bioavailability from plant-based diets, is an important cause of Fe deficiency, especially in pregnant women with increased Fe requirements( Reference Zimmermann and Hurrell 3 ). The degree of Fe absorption depends on the form of Fe consumed and on the presence of dietary enhancers or inhibitors of its absorption. Highly bioavailable haem Fe is likely to be consumed infrequently and in small amounts by people in resource-poor settings. The majority of dietary Fe consumed in such countries is in the form of non-haem Fe, which is often poorly absorbed due to the presence of dietary inhibitors such as phytate and tannins. The present study was carried out in Bhaktapur municipality in semi-urban communities in the Kathmandu valley. A previous survey carried out in this population reported an anaemia prevalence of 12 % and that 54 % consumed inadequate amounts of Fe( Reference Chandyo, Strand and Ulvik 4 ). The 2011 Demographic and Health Survey showed that 39 % of lactating women in Nepal (15–49 years old) were anaemic( 5 ).
Another study conducted among pregnant women in the south-eastern plains of Nepal reported that dietary intake of haem Fe was significantly associated with a lower risk for Fe deficiency without anaemia( Reference Makhoul, Taren and Duncan 6 ). However, few studies carried out in developing-country settings have related the measures of dietary intake to the biochemical measures of Fe status( Reference Backstrand, Allen and Black 7 ). Furthermore, a systematic review on dietary micronutrient intakes of women in resource-poor settings has concluded that there is a need for more documentation of the risk of inadequate micronutrient intakes among women living in low-income settings( Reference Torheim, Ferguson and Penrose 8 ). Therefore, the main objective of the present study was to examine the association between dietary Fe intake and dietary predictors of Fe status and Hb concentration in a representative sample of lactating women in Bhaktapur, Nepal. Study area and population A cross-sectional survey was carried out among 500 randomly selected healthy lactating women (17–44 years old) from Bhaktapur municipality, Nepal. Bhaktapur is an urban area located 15 km east of the capital Kathmandu and was chosen because of the socio-economic diversity of this population, which gave us a unique opportunity to explore dietary variation. It has a total population of approximately 75 000, predominantly of the Newari ethnic group, and mostly farmers, semi-skilled or unskilled labourers and daily wage earners. Study design and participants From a public health perspective, we wanted to detect deficiencies of micronutrients such as Zn, Fe and vitamin B12 with a prevalence of >25 %. It was calculated that 450 mothers would be adequate to detect this prevalence with an absolute precision of 4 %, i.e. with a 95 % CI ranging from 21 to 29 %. Assuming incomplete sampling from approximately 10 % of these women, we calculated a final desired sample size of 500 women.
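The sample-size reasoning above (25 % prevalence, ±4 % absolute precision, ~10 % incomplete sampling) can be reproduced with the standard normal-approximation formula. This is a sketch only; the function name and the explicit 1·96 z-value are our assumptions, not taken from the paper.

```python
def sample_size_for_proportion(p, precision, z=1.96, attrition=0.0):
    """Sample size needed to estimate a prevalence p with a given absolute
    precision (half-width of the 95 % CI), inflated for expected attrition."""
    n = z**2 * p * (1 - p) / precision**2
    return n / (1 - attrition)

# Prevalence of 25 % estimated to +/- 4 percentage points:
n_core = sample_size_for_proportion(0.25, 0.04)                    # ~450 women
n_final = sample_size_for_proportion(0.25, 0.04, attrition=0.10)   # ~500 women
print(round(n_core), round(n_final))
```

Rounding the two results reproduces the 450 and 500 quoted in the text.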
In the first stage of sampling, we used a population-proportional-to-size method to select sixty-six of 160 geographical areas ('toles'). In the second stage, we obtained the census lists of all women living in the sixty-six toles and selected the subjects randomly from these lists. We approached 582 women in order to enrol 500 women in the study (Fig. 1). A total of 500 lactating women (both exclusively and partially breast-feeding) were enrolled in the study and completed the first 24 h dietary recall. Due to dropouts, the sample sizes for the second and third 24 h recalls were 487 and 477, respectively. A total of eleven women were excluded due to errors in the interactive 24 h recalls, and thus the final sample size consisted of 466 lactating women who had completed three 24 h recalls. Fig. 1 Flow chart of the recruitment of the study subjects. Women came to the hospital to receive physical examinations, dietary interviews and blood draws. The first woman was enrolled in January 2008 and the last in February 2009. The inclusion criteria were that they were lactating, had no self-reported on-going infections and were able to provide household information. Women with anaemia (Hb levels < 123 g/l) were offered free treatment with Fe supplements according to the national guidelines. All women gave written informed consent before the start of the study, which was approved by the ethical review board of the Institute of Medicine, Tribhuvan University. Women were classified as literate if they could read and write, and illiterate if they could do neither or only one of the two. Schooling was defined as primary school (1st–3rd grade), secondary school (4th–10th grade), school-leaving certificate, intermediate school or Bachelor's degree. Dietary assessment Nepali-speaking, trained fieldworkers performed the interactive 24 h recalls, and each woman participated in three interactive 24 h recalls( Reference Gibson and Ferguson 9 ).
Every fieldworker in the present survey received training by a dietitian for a period of 2 months. The fieldworkers were trained in interview techniques, how to use the electronic scales, how to estimate the volume of different foods, how to collect recipes, how to use the food codes, how to handle difficult situations and how to calibrate the weights before every interview. They were not only trained as a group, but also practised 24 h recalls on each other and on at least five women from the community before conducting the recalls for the present study. To ensure that the days represented normal intake and to minimise interviewer biases, the recalls were obtained to represent three different weekdays with each recall period separated by 2–11 d, and conducted by three different fieldworkers. Saturdays (weekends) were excluded. The 24 h recalls were collected during 1 year in order to cover the seasonal variation in food supply across the participants in the study area. The same procedure was used to collect the 24 h recalls for each of the 3 d. First, the participants were asked to name all the food and drinks consumed during the preceding day, including anything consumed outside the home and the time of consumption. Second, they were asked to describe the ingredients and cooking methods for each recipe. Third, the amounts of foods and dishes were estimated using an electronic scale (Philips) with a precision of 1 g and a maximum capacity of 5 kg. The scales were calibrated daily. Cooked rice was used for estimating the volume of rice, vegetable stew ('tarkari') and pickles, and water was used to estimate the volume of lentils ('dal'). Fresh vegetables were used to measure the size and quantity of vegetables used in the recipes. The amounts of meat, fish, bread and fruits were estimated by food models and pictures made exclusively for the present study according to Gibson & Ferguson( Reference Gibson and Ferguson 9 ). 
Clay models were used for estimating the portions of meat and fish, wooden models were used for bread, whereas pictures were used for estimating the amounts of fruits consumed. Finally, the participants were asked to recall snacks consumed between meals during the last 24 h from a list of snack foods, made especially for the present study. The personal recipes of the participants were collected. Standard recipes were made for tea, spices (masala), lentils, bread, vegetable stew (tarkari) and pickles, and were used when the participants had bought ready-made food or when the food had been eaten at someone else's place. The standard recipes were developed from a collection of recipes in a pilot study, and the average of the ingredients from at least twelve recipes for each dish was calculated. Information on the consumption of fortified foods was not collected. Most of the fortified foods available in the study area were designed for infants and preschool children, and thus were not commonly consumed by adults. Intake of iron supplements in pregnancy Information on the consumption of Fe supplements in pregnancy was collected through questionnaires. The women were encouraged at the hospital to take Fe supplements during pregnancy, and bought Fe supplements from the hospital or from local drug stores. In the questionnaire, the women were asked for how many months they consumed Fe supplements in pregnancy and in which trimester they started to have the supplements. Information on the frequency of consumption, the dosage or the brand name of Fe supplements was not collected, and thus the intake of supplements was not included in the analysis of dietary Fe intake. The common practice for the administration of Fe supplements was 60 mg of elemental Fe as iron sulphate, taken from the second trimester of pregnancy. Nutrient analysis Because there is no standard food composition table (FCT) available for Nepal, an FCT was compiled for the present study ('Composite Bhaktapur FCT').
In this FCT, nutrient values and foods were derived from WorldFood 2( 10 ), the 'Nutritive Value of Indian Foods'( Reference Gopalan, Rama Sastri and Balasubramanian 11 ) and, where necessary, from the Thai FCT( 12 ) or the US FCT( 13 ). The three 24 h dietary recalls and the 'Composite Bhaktapur FCT' were entered into nutrient analysis software designed exclusively for the present study. The source of phytate data was the Indian FCT( Reference Gopalan, Rama Sastri and Balasubramanian 11 ). Intake of haem Fe was calculated based on the assumption that haem Fe makes up 40 % of Fe in meat, poultry or fish. The usual intake distributions were calculated by the multiple source method, which is characterised by a two-part shrinkage technique applied to the residuals of two regression models: one for the positive daily intake data and another for the event of consumption( Reference Haubrock, Nothlings and Volatier 14 , Reference Harttig, Haubrock and Knuppel 15 ). Estimation of adequacy of dietary iron intake Estimation of dietary Fe availability in this population was made according to the method described by Murphy et al. ( Reference Murphy, Beaton and Calloway 16 ), using quantitative data on the intake of haem Fe, non-haem Fe, coffee and tea, as well as on the amount of ascorbic acid and protein from meat, fish and poultry per 4184 kJ (1000 kcal) of energy consumed. The average bioavailability of dietary Fe intake in this population was calculated to be 4·5 %. The bioavailability of dietary Fe was also calculated according to the algorithm developed by Bhargava et al. ( Reference Bhargava, Bouis and Scrimshaw 17 ). This algorithm, which was developed for Bangladeshi women, a population comparable to that of the present study, includes estimation of Fe stores and of the intakes of fish and meat, ascorbic acid, phytate, non-haem Fe and haem Fe. This algorithm gave an estimated average bioavailability of 1·8 %.
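The 40 % haem-iron assumption, combined with a fixed overall bioavailability such as the 5 % figure used for evaluation in this study, can be sketched as two small helpers. This is a crude fixed-fraction sketch, not the Murphy et al. or Bhargava et al. algorithms; the function names and the illustrative 0·9 mg meat/fish/poultry iron figure are ours, not the paper's.

```python
def haem_iron_split(fe_from_meat_fish_poultry_mg, fe_total_mg):
    """Split total iron intake into haem and non-haem fractions, assuming
    40 % of the iron in meat, poultry and fish is haem iron (per the text)."""
    haem = 0.4 * fe_from_meat_fish_poultry_mg
    non_haem = fe_total_mg - haem
    return haem, non_haem

def absorbed_iron_mg(fe_total_mg, bioavailability=0.05):
    """Crude estimate of absorbed iron at a fixed overall bioavailability."""
    return fe_total_mg * bioavailability

# Median intake in this study was 17.5 mg/d, ~98 % of it non-haem;
# 0.9 mg from meat/fish/poultry is an illustrative placeholder.
haem, non_haem = haem_iron_split(fe_from_meat_fish_poultry_mg=0.9, fe_total_mg=17.5)
print(haem, non_haem, absorbed_iron_mg(17.5))
```

With these placeholder inputs, haem iron comes out at roughly 2 % of total intake, consistent with the near-exclusively non-haem diet described in the results.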
The WHO recommends the use of the calculation of 5 or 10 % Fe absorption in developing countries depending on the diet( 18 ). We therefore used the 5 % bioavailability assumption for the evaluation of Fe intake in the present study. The 2004 Estimated Average Requirements (EAR) and Recommended Nutrient Intake (RNI) of the FAO/WHO were used to evaluate the nutritional adequacy of the women's diets, depending on the time post-partum. The RNI of Fe for lactating women (0–3 months) and women aged 19–50 years is 30 and 58·8 mg, respectively, when the bioavailability is 5 %. The risk of inadequate dietary Fe intake for women who had been lactating for < 3 months was classified using the following definitions: very high risk (average dietary Fe intake below the EAR); moderate risk (average dietary Fe intake between the EAR and the RNI); low risk (average dietary Fe intake higher than the RNI)( Reference Torheim, Ferguson and Penrose 8 ). A full probability approach was used( Reference Allen, de Benois, Dary and Hurrell 19 ) that estimated the risk of inadequate dietary Fe intake as the sum, over the ranges of intake, of the probability of inadequacy for a given range multiplied by the percentage of women with intakes in that range. Anthropometric measurements Weight was measured using a UNICEF weighing scale (Seca; Salter). Height was measured with a locally made board in the clinic and calibrated weekly. Maternal BMI was calculated as weight/(height)2 (kg/m2). BMI < 18·5 kg/m2 was considered as underweight, 18·5 kg/m2 ≤ BMI < 25 kg/m2 as normal weight and BMI ≥ 25 kg/m2 as overweight( 20 ). During the first hospital visit, the first of the three 24 h recalls was performed, and a venous blood sample was collected from a cubital vein into a micronutrient-free, heparinised polypropylene tube (Sarstedt). Hb concentration was measured immediately by HemoCue (Vedbæk)( 21 ), which was regularly calibrated as recommended by the manufacturer.
After centrifuging the sample at 760 g for 10 min at room temperature, plasma was transferred to micronutrient-free polypropylene vials (Eppendorf, Hinz, Germany) and stored at −70°C before transport on dry ice to Norway, and then further stored at −80°C until analysis was performed at the Laboratory of Clinical Biochemistry, Haukeland University Hospital, Bergen. The plasma concentration of biochemical components was determined on a Modular Analytics System by Roche Diagnostics (Roche Diagnostics GmbH), with the analytical CV being 5 % for each test. Plasma ferritin concentration was analysed by an electrochemiluminescence immunoassay, while soluble transferrin receptor (TfR) concentration was analysed by immunoturbidimetry. Cut-off limits for the analytical tests After adjusting for the altitude of the study area (1400 m), anaemia in this population was defined as Hb levels < 123 g/l( Reference Goodman 22 ). Mild anaemia was defined as Hb levels between 103 and 122 g/l, moderate anaemia as Hb levels between 73 and 102 g/l and severe anaemia as Hb levels < 73 g/l( Reference Goodman 22 ). Fe deficiency, expressed as depleted Fe stores, was defined as plasma ferritin levels < 15 μg/l( 1 ). Increased need of Fe in the erythropoietic bone marrow and peripheral tissues was defined as TfR levels >4·4 mg/l( Reference Kolbe-Busch, Lotz and Hafner 23 ). As increased concentration of C-reactive protein is a sensitive marker of inflammation, women with C-reactive protein levels >5 mg/l were excluded from the analyses in which plasma ferritin was involved. Calculation of body iron stores Body Fe assessed as surplus (positive value) or deficit (negative value) of Fe in the tissues was calculated by using the formula described by Cook et al. ( Reference Cook, Flowers and Skikne 24 ).
The formula was derived from a close linear relationship that was found between the logarithm of the ratio of the concentration of soluble serum TfR and plasma ferritin (R:F ratio, μg/μg) and body Fe expressed as mg Fe/kg body weight, corrected for the absorption of dietary Fe:

mg Fe/kg = − (log(R:F ratio) − 2·8229)/0·1207

Since analysis of TfR refers to the in-house ELISA developed by Flowers et al. ( Reference Flowers, Skikne and Covell 25 ), the results obtained for TfR by the Roche method were converted by using the regression equation presented by Pfeiffer et al. ( Reference Pfeiffer, Cook and Mei 26 ):

TfR-Flowers = 1·5 × TfR-Roche + 0·35

Data processing and statistical analysis Data were analysed using SPSS version 17 (SPSS, Inc.), STATA version 12 (Stata Corporation) and R version 2.16 (r-project.org). Continuous data that were not normally distributed are presented as medians and 25th (P25) and 75th (P75) percentiles. Two-tailed tests with a significance level of 5 % were used for all analyses. We measured the association of relevant independent variables with dependent variables (Hb and body Fe) using multiple linear regression models. The variables that were known to influence Hb concentration and body Fe as well as selected socio-economic variables were included in initial crude models. Candidate variables included dietary Fe intake, vitamin C intake, vitamin A intake (β-carotene), phytate intake, use of Fe supplements in pregnancy (for at least 6 months), mother's age, mother's BMI, mother's literacy, parity and infant's age at the time of the baseline visit. All covariates showing a linear association (P< 0·10) in the crude regression models were included in a preliminary multiple regression model.
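The two body-iron formulas described in the Methods can be transcribed directly as follows. This is a sketch, not the authors' analysis code; the function names are ours, and the sketch assumes Roche TfR is reported in mg/l and plasma ferritin in μg/l, with the log taken base 10.

```python
import math

def tfr_flowers_from_roche(tfr_roche_mg_l):
    """Convert Roche sTfR values to the Flowers ELISA scale
    (Pfeiffer et al.): TfR-Flowers = 1.5 x TfR-Roche + 0.35."""
    return 1.5 * tfr_roche_mg_l + 0.35

def body_iron_mg_per_kg(tfr_mg_l, ferritin_ug_l):
    """Body iron (mg Fe/kg) from the Cook et al. formula, using the ratio
    of sTfR (converted from mg/l to ug/l) to plasma ferritin (ug/l)."""
    ratio = (tfr_mg_l * 1000.0) / ferritin_ug_l   # R:F ratio, ug/ug
    return -(math.log10(ratio) - 2.8229) / 0.1207

# Sanity check against the Results: a geometric-mean R:F ratio of 95.6
# corresponds to about 7.0 mg Fe/kg.
print(round(-(math.log10(95.6) - 2.8229) / 0.1207, 1))  # 7.0
```

The sanity check reproduces the mean body Fe of 7·0 mg Fe/kg reported for the geometric-mean R:F ratio of 95·6, which supports the base-10 reading of the published formula.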
The variables that were still significantly associated in this model (P< 0·10) were retained in the final model( Reference Hosmer and Lemeshow 27 ). Analysis of the residuals was performed in order to examine the fit of the model. In the final model, the following interactions between the independent variables were assessed and included if the interaction term was significant (P< 0·10): (Dietary Fe intake × vitamin C (dichotomous variable, intake P25 (42 mg/d) and P75 (72 mg/d)) and time since birth (dichotomous variable, cut-off 6 months) × prenatal Fe supplement for at least 6 months). We explored and depicted the linearity of the associations between the independent and dependent variables in generalised additive models( Reference Wood 28 ). We adjusted for clustering of outcomes due to the sampling design using the SVY group of commands for complex survey data in STATA. The mean age of the 500 enrolled women was 25·8 years, and the majority were literate, had a healthy BMI and less than three children. The intake of Fe supplements in pregnancy was reported by 90 % of the women, the mean duration of which was 4·8 months, and more than half of them started the supplementation during the second trimester (Table 1). Table 1 Demographic and anthropometric characteristics of 500 lactating women in Bhaktapur, Nepal (Mean values and standard deviations; number of subjects and percentages) * Farmers, carpet factory workers, daily wage earners, self-employed or service workers. † Primary school (1st–3rd grade); secondary school (4th–10th grade); school-leaving certificate; intermediate school; or Bachelor's degree. ‡ BMI < 18·5 kg/m2 defined as underweight; BMI 18·5–25 kg/m2 defined as normal weight; BMI>25 kg/m2 defined as overweight. Intakes of dietary iron and risk of inadequate intakes of iron The intake of dietary Fe and other dietary factors with the potential to affect Fe absorption are presented in Table 2. 
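The risk categorisation defined in the Methods (very high below the EAR, moderate between the EAR and the RNI, low above the RNI) can be sketched as a simple classifier. The boundary handling (≤ versus <) and the EAR value are our assumptions: the text quotes only the RNI (30 mg/d for lactating women 0–3 months at 5 % bioavailability), so the EAR below is a placeholder.

```python
def iron_intake_risk(intake_mg_d, ear_mg_d, rni_mg_d):
    """Classify risk of inadequate iron intake relative to the EAR and RNI
    (very high: <= EAR; moderate: (EAR, RNI]; low: > RNI)."""
    if intake_mg_d <= ear_mg_d:
        return "very high"
    if intake_mg_d <= rni_mg_d:
        return "moderate"
    return "low"

# Illustrative only: RNI of 30 mg/d from the text; the EAR is a placeholder.
EAR_PLACEHOLDER = 25.0
print(iron_intake_risk(17.5, EAR_PLACEHOLDER, 30.0))  # the median intake falls in "very high"
```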
The median daily Fe intake from food was 17·5 mg and almost all the Fe consumed was in the form of non-haem Fe (98 %). The daily dietary intake of phytate (3304 mg/d) was well above the level known to adversely affect Fe absorption, and the main sources were rice, dal, potato and whole-wheat flour. Enhancers of Fe absorption (meat, fish and poultry, and vitamin C) were consumed in moderate amounts. The principal sources of dietary Fe in this population were mustard leaves, rice flakes ('beaten rice'), turnip leaves, rice and whole-wheat flour (Table 3). Based on the EAR of the WHO/FAO for women consuming a diet with low bioavailability (5 %), 72·7 % of the women who were ≤ 3 months post-partum (n 33) were considered to have a very high risk of inadequate dietary Fe intake. Only 15 % of the women who were ≤ 3 months post-partum were at a low risk of inadequate dietary Fe intake (Table 4). Using a full probability approach for women who were >3 months post-partum (n 432), the total prevalence of inadequate intake of dietary Fe was estimated to be 78 %. Table 2 Dietary intakes* of energy, nutrients, foods and constituents influencing iron absorption in 466 lactating women in Bhaktapur, Nepal (Mean values and standard deviations; median values and 25th–75th percentiles) * Usual intakes based on three 24 h recalls per women. † 5 % bioavailability calculated according to Murphy et al. ( Reference Murphy, Beaton and Calloway 16 ). ‡ n 327. § n 425. ∥ Gibson & Ferguson( Reference Gibson and Ferguson 9 ). Table 3 Main sources of dietary iron in Bhaktapur, Nepal * Value obtained from the Indian food composition table( Reference Gopalan, Rama Sastri and Balasubramanian 11 ). † When using a value from Suma et al. ( Reference Suma, Sheetal and Jyothi 33 ), the contribution to iron intake was 18 % and the iron content was 5·2 mg. 
Table 4 Proportion of different risk groups of inadequate iron intake for thirty-three lactating women (≤3 months post-partum) consuming a diet with 5 % iron bioavailability* in Bhaktapur, Nepal (Number of subjects and percentages) * Iron bioavailability calculated according to Murphy et al. ( Reference Murphy, Beaton and Calloway 16 ) and WHO recommendations( 18 ). † Average dietary iron intake is less than or equal to the estimated average requirement for lactating women. ‡ Average dietary iron intake is greater than the estimated average requirement and less than or equal to the recommended nutrient intake for lactating women. § Average dietary iron intake is greater than the recommended nutrient intake for lactating women. Iron status and anaemia After adjusting for the altitude of the study area (1400 m), the prevalence of anaemia (Hb levels < 123 g/l) was 20 % and that of mild anaemia (Hb levels 103–122 g/l) was 17 %. Of the women, only 5 % (n 26) had depleted Fe stores with plasma ferritin levels < 15 μg/l, while 15 % (n 73) had TfR levels >4·4 mg/l, indicating an insufficient supply of Fe to the erythropoietic bone marrow and peripheral tissues. Of the women with plasma ferritin levels < 15 μg/l, 69 % (n 18) also had TfR levels >4·4 mg/l as a sign of empty Fe stores with accompanying Fe-deficient erythropoiesis evidenced by a mean Hb concentration of 112 (sd 1·3) g/l. Thus, the prevalence in the whole study group of true Fe-deficient anaemia was 3·6 %. The remaining eight subjects with plasma ferritin levels < 15 μg/l and no increase in the levels of TfR had a sufficient supply of Fe to the tissues despite depleted Fe stores. Of the study participants, 103 had plasma ferritin levels between 15 and 35 μg/l, which is compatible with depleted or very small Fe stores in many women( Reference Hallberg, Bengtsson and Lapidus 29 ).
In this subgroup, twenty-one women had TfR levels >4·4 mg/l, indicating restricted synthesis of Hb due to an insufficient supply of Fe to the bone marrow. Since low levels of plasma ferritin may be evidence of a negative Fe balance, it is conceivable that in thirty-nine (53 %) of the seventy-three women with increased TfR values (those with plasma ferritin levels < 35 μg/l), the increase could be explained by a restricted supply of Fe to the tissues. Body Fe calculated from the logarithm of the geometric mean of the R:F ratios (95·6) was 7·0 (sd 3·3) mg Fe/kg body weight with a range between − 8·9 and 14·5 mg Fe/kg. Of the women, fifteen (3 %) had negative values indicating tissue Fe deficiency, with a mean of − 2·8 (sd 3·4) mg Fe/kg. The rest of the group (n 485, i.e. 97 %) had mean body Fe stores of 7·3 (sd 5·5) mg Fe/kg (Table 5). Table 5 Hb, plasma ferritin and transferrin receptor concentrations among 500 lactating women in Bhaktapur, Nepal (Mean values and standard deviations or ranges; number of subjects and percentages) * Threshold for defining anaemia is adjusted for altitude( Reference Goodman 22 ). † n 476; twenty-four women with CRP levels above the WHO cut-off were excluded from the assessment of iron-store depletion( 21 ). ‡ Cut-off point for iron-deficient erythropoiesis( Reference Kolbe-Busch, Lotz and Hafner 23 ). § Negative value indicates a quantitative measure of tissue iron deficit (i.e. lack of stored iron). Dietary predictors of iron status and anaemia In the linear regression models (Table 6), intake of Fe supplements in pregnancy predicted Hb concentration. Dietary Fe intake and intake of Fe supplements in pregnancy predicted body Fe. Intake of Fe supplements in pregnancy was associated with a higher Hb concentration of 0·29 (95 % CI 0·04, 0·54) g/l (P= 0·03). Dietary Fe intake and potential enhancers and inhibitors of Fe absorption were not associated with Hb concentration.
There was a weak positive association between dietary intake of Fe and body Fe (β 0·03, 95 % CI 0·014, 0·045). In addition, for women with children aged < 6 months, but not for those with older infants (P for interaction < 0·01), intake of Fe supplements in pregnancy was positively associated with body Fe stores. We identified the same predictors in the multiple linear regression models using ferritin and TfR concentrations as the dependent variables. The R 2 values in the models with ferritin and TfR were 0·22 and 0·09, respectively. The association of dietary Fe intake with biochemical markers is also depicted in the graphs obtained from generalised additive models (Fig. 2).

Table 6 Multiple linear regression models of the relationship between dietary iron intake, Hb concentration and body iron (n 500) (β Coefficients and 95 % confidence intervals)
* Both models included mother's age, parity, literacy and child's age.
† Intake of Fe supplements in pregnancy at least 6 months, dichotomous variable (yes/no).
‡ No significant interaction between time since birth (dichotomous variable, cut-off 6 months) and iron supplements in pregnancy for at least 6 months.
§ Interaction between time since birth (dichotomous variable, cut-off 6 months) and iron supplements in pregnancy for at least 6 months.
∥ Continuous variable.
¶ Dichotomous variable (yes/no).

Fig. 2 Association between daily iron intake and concentrations of (a) Hb, (b) plasma ferritin and (c) plasma transferrin receptor in lactating women (n 466) in Bhaktapur, Nepal. The graph was constructed using the generalised additive models in R. The solid curves depict the estimated dose–response curve; the shaded areas represent the 95 % CI. The small vertical lines on the x-axis show the distribution of the observations.

In the population of lactating women in Bhaktapur, we found that more than 98 % of dietary Fe intake consumed was in the form of non-haem Fe and that the intake of phytate was high.
Therefore, >70 % of these women were estimated to be at risk of inadequate Fe intake. At the same time, only 5 % of the women had plasma ferritin levels indicative of Fe deficiency and 15 % of the women had elevated levels of TfR, mainly indicative of Fe-deficient erythropoiesis. We demonstrated that dietary Fe intake and intake of Fe supplements in pregnancy predicted body Fe. Furthermore, intake of Fe supplements in pregnancy, but not dietary variables, showed a strong positive association with Hb concentration.

Dietary iron intake

The Fe consumed by these Nepali women was almost exclusively in the form of non-haem Fe. It should be noted that no fortified food staples were available to this population at the time of data collection. The main contributors of dietary Fe intake were cooked green leaf relishes, unrefined whole-meal bread 'roti' and rice flakes, all of which contained significant amounts of phytate based on available food composition data( 10 , Reference Gopalan, Rama Sastri and Balasubramanian 11 ), and therefore impaired the bioavailability of Fe. In addition, meat, which enhances the uptake of non-haem Fe( Reference Bothwell, Charlton and Cook 30 ), was not commonly consumed. In contrast, intakes of vitamin C were about 70 % of the RNI for lactating women and were obtained from vegetable relishes commonly eaten with meals, which may have had a positive impact on the bioavailability of Fe. Tea consumption is unlikely to have had any substantial negative impact on the bioavailability of Fe because it is typically not consumed with meals. The dietary Fe sources were similar to those of a previous study among non-pregnant, non-lactating women in Bhaktapur; however, that study reported lower mean intakes of Fe (8·4 mg/d)( Reference Chandyo, Strand and Ulvik 4 ). This may in part be due to the different FCT used.
The previous study used exclusively data from the WorldFood Dietary Assessment System( 10 ), whereas we used primarily the Indian food composition data( Reference Gopalan, Rama Sastri and Balasubramanian 11 ) because Indian foods were more specific to food items consumed in Nepal. The Fe content of mustard leaves, turnip leaves, rice flakes, refined rice and whole-wheat flour was consistently higher in the Indian FCT compared with similar substitute foods in WorldFood. These values may be higher than the true Fe content of these common Nepali foods. However, a study in the Kathmandu valley found that dietary Fe intakes based on chemical analysis of 24 h diet composites were three times higher than the calculated dietary intakes for the same 24 h period using the US Department of Agriculture database( Reference Moser, Reynolds and Acharya 31 ). Therefore, it is possible that the Fe content of the foods consumed in Bhaktapur is higher than estimated. There is a clear need for a Nepal-specific FCT.

Probability of inadequacy

Based on plasma ferritin and TfR levels, we found the prevalence of Fe deficiency to be 5·3 and 14·6 %, respectively. Yet, we estimated that 70 % of the women were at risk of inadequate dietary Fe intake. There are several possible reasons for this discrepancy. First, by using a cut-off value of plasma ferritin < 15 μg/l for depleted Fe stores, we have probably underestimated the prevalence of Fe deficiency as signalled by this biomarker (the prevalence of Fe deficiency was 25·8 % using the cut-off value of plasma ferritin < 35 μg/l). As shown in the study by Hallberg et al. ( Reference Hallberg, Bengtsson and Lapidus 29 ), at this cut-off value, the diagnostic sensitivity and specificity of plasma ferritin are 75 and 98 %, respectively. They found that the Fe stores could be negligible at plasma ferritin concentrations of 35 μg/l and below.
Second, 90 % of the women had used Fe supplements in pregnancy and more than half of them initiated the supplementation in the second trimester. Thus, temporary Fe supplementation in pregnancy is likely to have had a positive effect on Fe stores during the first few months of lactation, but probably not later( 5 ), when Fe loss increased due to the return of menstruation and because Fe supplementation was stopped. Third, we may have overestimated the probability of inadequate dietary Fe intake because we used the EAR for all non-pregnant, non-lactating women who were >3 months post-partum. Many of these women may still have had lactation amenorrhoea and therefore had lower Fe requirements than those underlying the EAR used. However, the risk of inadequate dietary Fe intake for women ≤ 3 months post-partum, for whom lactation amenorrhoea was taken into account, was similar to that for those who had given birth earlier. Fourth, as discussed above, the food composition data used may not have accurately reflected the true Fe, phytate and vitamin C contents of Nepali foods. It is possible that the Fe content of several foods is higher and/or the phytate content is lower than what was estimated based on the available data.

The calculation of body Fe based on the ratio between the concentrations of TfR and plasma ferritin gives a more precise picture of the tissue Fe content than what is achieved by using the cut-off limits of the biochemical tests. The present results in Fe-replete and Fe-deficient women were somewhat different from what was reported by Cook et al. ( Reference Cook, Flowers and Skikne 24 ) in their study of women between 20 and 45 years of age. In women with normal Fe status, they found mean Fe stores of 5·5 (sd 3·35) v. 7·3 (sd 5·5) mg Fe/kg in the present study, and in women with Fe deficiency, they found a mean deficit in tissue Fe of − 3·9 (sd 3·23) v. − 2·8 (sd 3·4) mg/kg in the present study.
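The body-iron calculation referred to here (Cook et al.( Reference Cook, Flowers and Skikne 24 )) can be sketched in a few lines. The calibration constants 2·8229 and 0·1207 are those published by Cook et al.; the sketch assumes that both markers are expressed in the same units (μg/l) so that the R:F ratio is dimensionless, i.e. TfR values reported in mg/l must first be multiplied by 1000.

```python
import math

def body_iron_mg_per_kg(r_to_f_ratio):
    """Body-iron index of Cook et al. (2003) from the ratio of soluble
    transferrin receptor (R, ug/l) to plasma ferritin (F, ug/l).

    Positive values estimate the size of the iron stores; negative
    values quantify the tissue iron deficit.
    """
    return -(math.log10(r_to_f_ratio) - 2.8229) / 0.1207

# The geometric-mean R:F ratio of 95.6 reported for this cohort gives
# the study's mean body iron of about 7.0 mg Fe/kg:
print(round(body_iron_mg_per_kg(95.6), 1))  # -> 7.0
```

Reassuringly, the formula reproduces the study's own summary figure from the reported geometric-mean ratio, which suggests the same calibration was used here.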
As discussed above, the Fe status of women in the present study may have benefited from Fe supplementation and less Fe loss due to amenorrhoea. We found a significant, albeit weak, positive association between body Fe and dietary Fe intake, and also demonstrated that this association was linear. The lack of an association between dietary Fe intake and Hb concentration may in part be due to the fact that only a small proportion of anaemia in this population was Fe-deficiency anaemia; 82 and 63 % of the anaemic women did not have low plasma ferritin or elevated TfR levels, respectively. The generalised additive model curve did reveal a linear association between dietary Fe intake and Hb concentration at lower dietary Fe intakes (Fig. 2). However, this effect is small compared with the strong association between the intake of Fe supplements in pregnancy and Hb concentration in the present study. As discussed earlier, this effect persists in the lactation period. In addition, the 2011 Nepal Demographic and Health Survey showed that 80 % of Nepali women took Fe supplementation during their last pregnancy and 41 % consumed Fe supplements post-partum( 5 ). Lastly, other micronutrient deficiencies, such as folate, vitamin B12 and vitamin A( 32 ), may also have been responsible for the lower Hb concentrations observed in the present study, but their relationships to Hb were not studied.

The present study had a number of strengths. We had a representative sample of lactating women, with a relatively large sample when compared with that used in most other dietary studies. We had three interactive 24 h recalls with personal recipes adapted to local foods. We trained local staff to perform the recall interviews, and the interviews were done throughout the year, covering the seasonal variation in food intake at a group level.
Using plasma ferritin and TfR as biomarkers, the near-linear association of their relevant changes with dietary Fe intake indicates strong relative validity of our adapted interactive 24 h recall method. Inflammation will obscure the interpretation of plasma ferritin, but not TfR, which can explain the difference in the prevalence of Fe deficiency based on plasma ferritin and TfR. However, chronic and acute illness was an exclusion criterion in the present study and very few women (n 24) had elevated C-reactive protein levels. Therefore, we believe that the difference in prevalence is not due to inflammation.

The primary weakness of the present study was the reliance on external FCT, which may not reflect the true nutrient composition of local Nepali foods. Our findings may also have been adversely affected by bias caused by the fact that participating women knew they were coming in for a dietary interview and may have altered their diet. However, the fact that few women had insufficient Fe intake may suggest that potential bias would have been towards the overestimation, rather than underestimation, of Fe intake.

Probably due to a high dietary intake of non-haem Fe combined with low bioavailability, a high proportion of the lactating women in the present study were at risk of inadequate intake of Fe. The low prevalence of anaemia and Fe deficiency may be explained by the majority of the women consuming Fe supplements in pregnancy.

The present study was funded by a grant from the Norwegian Council of Universities' Committee for Development Research, Education (project no. NUFUSM-2007/10177), Research Council of Norway (project no. 172226), and a grant from the GCRieber Funds and Feed the Future Food Security Innovation Lab: Collaborative Research on Nutrition, which is funded by the United States Agency for International Development. A. L. T.-L. was supported by the National Institutes of Health training grant T32 #DK 007703.
The contributions of the authors were as follows: T. A. S., R. C., S. H., W. W. F. and P. S. S. designed the research; R. C., M. U., E. S. and S. H. conducted the research; S. H., A. T.-L., R. J. U., L. L. and T. A. S. analysed the data; S. H. and M. M. wrote the paper; S. H. and T. A. S. had primary responsibility for the final content. All authors read and approved the final manuscript. None of the authors has any conflict of interest to declare.

References

1 World Health Organization, United Nations Children's Fund, United Nations University (2001) Iron Deficiency Anaemia Assessment, Prevention, and Control. A Guide for Programme Managers. Geneva: WHO. http://www.who.int/nutrition/publications/micronutrients/anaemia_iron_deficiency/WHO_NHD_01.3/en/index.html
2 McLean, E, Cogswell, M, Egli, I, et al. (2009) Worldwide prevalence of anaemia, WHO Vitamin and Mineral Nutrition Information System, 1993–2005. Public Health Nutr 12, 444–454.
3 Zimmermann, MB & Hurrell, RF (2007) Nutritional iron deficiency. Lancet 370, 511–520.
4 Chandyo, RK, Strand, TA, Ulvik, RJ, et al. (2007) Prevalence of iron deficiency and anemia among healthy women of reproductive age in Bhaktapur, Nepal. Eur J Clin Nutr 61, 262–269.
5 NDHS: Ministry of Health and Population (Nepal) (2012) Nepal Demographic Health Survey – 2011. Kathmandu, Nepal: New ERA and ICF International.
6 Makhoul, Z, Taren, D, Duncan, B, et al. (2012) Risk factors associated with anemia, iron deficiency and iron deficiency anemia in rural Nepali pregnant women. Southeast Asian J Trop Med Public Health 43, 735–746.
7 Backstrand, JR, Allen, LH, Black, AK, et al. (2002) Diet and iron status of nonpregnant women in rural Central Mexico. Am J Clin Nutr 76, 156–164.
8 Torheim, LE, Ferguson, EL, Penrose, K, et al. (2010) Women in resource-poor settings are at risk of inadequate intakes of multiple micronutrients. J Nutr 140, 2051S–2058S.
9 Gibson, RS & Ferguson, EL (2008) An Interactive 24-Hour Recall for Assessing the Adequacy of Iron and Zinc Intakes in Developing Countries. HarvestPlus Technical Monograph 8. Washington, DC: International Food Policy Research Institute (IFPRI) and International Center for Tropical Agriculture (CIAT).
10 Food and Agriculture Organization of the United Nations (2010) Food Composition: Overview of the WorldFood Dietary Assessment System. http://typo3.fao.org/fileadmin/templates/food_composition/documents/World_food_dietary_assessment_system/Overview_of_the_WorldFood_Dietary_Assessment_System.pdf
11 Gopalan, C, Rama Sastri, B & Balasubramanian, S (1989) Nutritive Value of Indian Foods. Hyderabad, India: National Institute of Nutrition.
12 Institute of Nutrition Mahidol University (2002) Food Composition Database for INMUCAL Program. Thailand: Institute of Nutrition.
13 United States Department of Agriculture, Agricultural Research Service (2012) Search the USDA National Nutrient Database for Standard Reference. http://ndb.nal.usda.gov/
14 Haubrock, J, Nothlings, U, Volatier, JL, et al. (2011) Estimating usual food intake distributions by using the multiple source method in the EPIC-Potsdam Calibration Study. J Nutr 141, 914–920.
15 Harttig, U, Haubrock, J, Knuppel, S, et al. (2011) The MSM program: web-based statistics package for estimating usual dietary intake using the Multiple Source Method. Eur J Clin Nutr 65, Suppl. 1, S87–S91.
16 Murphy, SP, Beaton, GH & Calloway, DH (1992) Estimated mineral intakes of toddlers: predicted prevalence of inadequacy in village populations in Egypt, Kenya, and Mexico. Am J Clin Nutr 56, 565–572.
17 Bhargava, A, Bouis, HE & Scrimshaw, NS (2001) Dietary intakes and socioeconomic factors are associated with the hemoglobin concentration of Bangladeshi women. J Nutr 131, 758–764.
18 World Health Organization, Food and Agriculture Organization of the United Nations (2002) Human Vitamin and Mineral Requirements. Rome, Italy: Food and Agriculture Organization of the United Nations (FAO).
19 World Health Organization, Food and Agriculture Organization of the United Nations (2006) Guidelines for Food Fortification with Micronutrients [Allen, LH, de Benois, B, Dary, O and Hurrell, R, editors]. Geneva: World Health Organization, Food and Agriculture Organization of the United Nations.
20 WHO (2006) Global Database on Body Mass Index: An Interactive Surveillance Tool for Monitoring Nutrition Transition. Geneva: WHO.
21 Hemocue Corporate (2011) Our Products – Hemoglobin Portfolio. Available at http://www.hemocue.com/index.php?page=51
22 Goodman, RA (1989) Current trends: CDC criteria for anemia in children and child bearing-aged women. MMWR Morb Mortality Wkly Rep 38, 400–404.
23 Kolbe-Busch, S, Lotz, J, Hafner, G, et al. (2002) Multicenter evaluation of a fully mechanized soluble transferrin receptor assay on the Hitachi and Cobas Integra analysers. The determination of reference ranges. Clin Chem Lab Med 40, 529–536.
24 Cook, JD, Flowers, CH & Skikne, BS (2003) The quantitative assessment of body iron. Blood 101, 3359–3364.
25 Flowers, CH, Skikne, BS, Covell, AM, et al. (1989) The clinical measurement of serum transferrin receptor. J Lab Clin Med 114, 368–377.
26 Pfeiffer, CM, Cook, JD, Mei, Z, et al. (2007) Evaluation of an automated soluble transferrin receptor (sTfR) assay on the Roche Hitachi analyzer and its comparison to two ELISA assays. Clin Chim Acta 382, 112–116.
27 Hosmer, DW & Lemeshow, S (2000) Applied Logistic Regression, 2nd ed. New York: John Wiley & Sons.
28 Wood, SN (2000) Modelling and smoothing parameter estimation with multiple quadratic penalties. J R Statist Soc B 62, 413–428.
29 Hallberg, L, Bengtsson, C, Lapidus, L, et al. (1993) Screening for iron deficiency: an analysis based on bone-marrow examinations and serum ferritin determinations in a population sample of women. Br J Haematol 85, 787–798.
30 Bothwell, TH, Charlton, RW, Cook, JD, et al. (1979) Iron Metabolism in Man. Oxford, UK: Blackwell Scientific Publications.
31 Moser, PB, Reynolds, RD, Acharya, S, et al. (1988) Copper, iron, zinc, and selenium dietary intake and status of Nepalese lactating women and their breast-fed infants. Am J Clin Nutr 47, 729–734.
32 WHO (2004) Assessing the Iron Status of Populations: A Report of a Joint World Health Organization/Centers for Disease Control Technical Consultation on the Assessment of Iron Status at the Population Level. Geneva: WHO.
33 Suma, RC, Sheetal, G, Jyothi, LA, et al. (2007) Influence of phytin phosphorous and dietary fibre on in vitro iron and calcium bioavailability from rice flakes. Int J Food Sci Nutr 58, 637–643.
Math Proofs

Derivative of sin(x)

The derivative of $ \sin(x) $ is:

$$ \tfrac{d}{dx}\big(\sin(x)\big) = \cos(x) $$

Take the definition of the derivative:

$$ \tfrac{d}{dx}\big(\sin(x)\big) = \lim_{h \to 0} \left(\frac{\sin(x + h) - \sin(x)}{h}\right) $$

Use the angle sum formula for sine to expand $ \sin(x + h) $:

$$ \tfrac{d}{dx}\big(\sin(x)\big) = \lim_{h \to 0} \left(\frac{\sin(x)\cos(h) + \cos(x)\sin(h) - \sin(x)}{h}\right) $$

Factor out $ \sin(x) $.

$$ \tfrac{d}{dx}\big(\sin(x)\big) = \lim_{h \to 0} \left(\frac{\sin(x)\big(\cos(h) - 1\big) + \cos(x)\sin(h)}{h}\right) $$

Split the limit.

$$ \tfrac{d}{dx}\big(\sin(x)\big) = \lim_{h \to 0} \left(\frac{\sin(x)\big(\cos(h) - 1\big)}{h}\right) + \lim_{h \to 0} \left(\frac{\cos(x)\sin(h)}{h}\right) $$

Factor out the factors that do not depend on $ h $.

$$ \tfrac{d}{dx}\big(\sin(x)\big) = \sin(x) \cdot \lim_{h \to 0} \left(\frac{\cos(h) - 1}{h}\right) + \cos(x) \cdot \lim_{h \to 0} \left(\frac{\sin(h)}{h}\right) $$

Evaluate these limits. The second limit is the standard limit $ \lim_{h \to 0} \frac{\sin(h)}{h} = 1 $. For the first, multiply numerator and denominator by $ \cos(h) + 1 $ and use $ \cos^2(h) - 1 = -\sin^2(h) $:

$$ \lim_{h \to 0} \left(\frac{\cos(h) - 1}{h}\right) = \lim_{h \to 0} \left(-\frac{\sin(h)}{h} \cdot \frac{\sin(h)}{\cos(h) + 1}\right) = -1 \cdot \frac{0}{2} = 0 $$

Therefore:

$$\begin{aligned}\tfrac{d}{dx}\big(\sin(x)\big) &= \sin(x) \cdot 0 + \cos(x) \cdot 1 \newline&= \cos(x)\end{aligned}$$

© 2023 by Jeroen van Rensen Tree of Proofs https://mathproofs.jrensen.nl/proofs/derivative-of-sinx Printed on January 28, 2023
\begin{document} \title{A harmonic Lanczos bidiagonalization method for computing interior singular triplets of large matrices} \titlerunning{harmonic method for interior SVD problems} \author{Datian Niu \and Xuegang Yuan} \institute{D. Niu \at School of Science, Dalian Nationalities University, \\ Dalian, 116600, China\\ \email{[email protected]} \and X. Yuan \at School of Science, Dalian Nationalities University, \\ Dalian, 116600, China\\ \email{[email protected]} \and This research was supported by the NSFC Grants 10872045 and by the Program for New Century Excellent Talents in University. } \date{Received: date / Accepted: date} \maketitle \begin{abstract} This paper proposes a harmonic Lanczos bidiagonalization method for computing some interior singular triplets of large matrices. It is shown that the approximate singular triplets are convergent if a certain Rayleigh quotient matrix is uniformly bounded and the approximate singular values are well separated. Combining with the implicit restarting technique, we develop an implicitly restarted harmonic Lanczos bidiagonalization algorithm and suggest a selection strategy of shifts. Numerical experiments show that one can use this algorithm to compute interior singular triplets efficiently. \keywords{ Singular triplets \and Lanczos bidiagonalization process \and Harmonic Lanczos bidiagonalization method \and Implicit restarting technique \and Harmonic shifts} \subclass{65F15 \and 15A18} \end{abstract} \section{Introduction} \label{intro} The singular value decomposition (SVD) of a matrix $A\in R^{M\times N}, M\geq N$, is given by \begin{equation} A=U\Sigma V^{\rm T}, \end{equation} where $\Sigma=diag(\sigma_1, \sigma_2, \cdots, \sigma_N)\in R^{M\times N}$, and $U=(u_1,u_2,\cdots,u_M)$ and $V=(v_1,v_2,\cdots,v_N)$ are orthogonal matrices of order $M$ and $N$, respectively. $(\sigma_i,u_i,v_i),i=1,2,\cdots,N,$ are called the singular triplets of $A$.
Consider the $(M+N)\times (M+N)$ augmented matrix \begin{equation} \tilde A = \left ( \begin{array}{cc} 0 & A \\ A^{\rm T} & 0 \end{array} \right ).\label{aug} \end{equation} Then, the eigenvalues of $\tilde A$ are $\pm \sigma_1, \pm \sigma_2, \cdots, \pm \sigma_N$ and $M-N$ zeros. The eigenvectors associated with $\sigma_i$ and $-\sigma_i$ are $\frac{1}{\sqrt{2}}\left (u_i^{\rm T},v_i^{\rm T}\right )^{\rm T}$ and $\frac{1}{\sqrt{2}}\left (u_i^{\rm T},-v_i^{\rm T}\right )^{\rm T}$, respectively. Therefore, SVD problems are equivalent to eigenproblems of augmented matrices. SVD methods are widely used in the determination of numerical rank, the determination of spectral condition numbers, least squares problems, regression analysis, image processing, signal processing, pattern recognition, information retrieval, and so on. At present, the computation of the largest or smallest singular triplets of large matrices has been well studied; the Lanczos bidiagonalization method and its variants are the most popular methods. In 1981, Golub et al. \cite{golub1981} first designed a block Lanczos bidiagonalization method to compute some largest singular triplets. Larsen \cite{larsen1998} discussed the reorthogonalization of the Lanczos bidiagonalization process. Jia and Niu \cite{jianiu2003} proposed a refined Lanczos bidiagonalization method to compute some largest and smallest singular triplets. Kokiopoulou et al. \cite{kokio2004} used the harmonic projection technique to compute the smallest singular values. Baglama and Reichel \cite{baglama2005,baglama2006} used Ritz values and harmonic Ritz values to approximate the largest and smallest singular values, respectively. Hernandez et al. \cite{hernandez2008} provided a parallel implementation of the Lanczos bidiagonalization method. Stoll \cite{stoll2008} developed a Krylov--Schur approach to the partial SVD.
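The stated eigenpairs can be checked in one line from the defining relations $Av_i=\sigma_i u_i$ and $A^{\rm T}u_i=\sigma_i v_i$; the verification is included here for the reader's convenience:

```latex
\tilde A \begin{pmatrix} u_i \\ \pm v_i \end{pmatrix}
  = \begin{pmatrix} 0 & A \\ A^{\rm T} & 0 \end{pmatrix}
    \begin{pmatrix} u_i \\ \pm v_i \end{pmatrix}
  = \begin{pmatrix} \pm A v_i \\ A^{\rm T} u_i \end{pmatrix}
  = \begin{pmatrix} \pm \sigma_i u_i \\ \sigma_i v_i \end{pmatrix}
  = \pm \sigma_i \begin{pmatrix} u_i \\ \pm v_i \end{pmatrix},
```

while the $M-N$ zero eigenvalues correspond to vectors $\left (u^{\rm T},0\right )^{\rm T}$ with $u$ in the null space of $A^{\rm T}$.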
Recently, Jia and Niu \cite{jianiu2009} proposed a refined harmonic Lanczos bidiagonalization method to compute some smallest singular triplets. All of the above methods run the Lanczos bidiagonalization process, build two $m$-dimensional Krylov subspaces, and then extract approximate singular triplets from these two subspaces in different ways. Hochstenbach \cite{hochstenbach2001,hochstenbach2004} also gave Jacobi-Davidson type algorithms for SVD problems. Due to the storage requirement and the computational cost, all the projection methods must be restarted. The implicit restarting technique \cite{sorensen1992} proposed by Sorensen is the most powerful tool and is widely used in many projection methods. The success of this technique heavily depends on the selection of the shifts, see \cite{jia1999,sorensen1992}. For eigenvalue problems, Sorensen \cite{sorensen1992} used the unwanted Ritz values as the shifts to restart the Arnoldi method, and Morgan \cite{morgan2006} used the unwanted harmonic Ritz values as the shifts to restart the harmonic Arnoldi method. Jia \cite{jia1999,jia2002} used the refined shifts and refined harmonic shifts, obtained from the information of the refined Ritz vectors and refined harmonic Ritz vectors, to restart the refined Arnoldi method and the refined harmonic Arnoldi method, respectively. For SVD problems, Kokiopoulou et al. \cite{kokio2004} used the unwanted harmonic Ritz values as the shifts. Baglama and Reichel \cite{baglama2005,baglama2006} explicitly augmented the Lanczos bidiagonalization method with certain Ritz vectors or harmonic Ritz vectors. Jia and Niu \cite{jianiu2003,jianiu2009} gave a refined (harmonic) shift strategy within the implicitly restarted refined (harmonic) Lanczos bidiagonalization method. In this paper, we are concerned with the computation of interior singular triplets. For a given target $\tau$, we want to compute some singular triplets nearest $\tau$.
So, we sort the singular triplets by \begin{equation}\label{sort} |\sigma_1-\tau|\le |\sigma_2-\tau| \le \cdots \le |\sigma_N-\tau|. \end{equation} We must emphasize that, in this paper, $\sigma_1$ is the singular value nearest $\tau$ rather than the smallest singular value; meanwhile, $\sigma_N$ is the singular value farthest from $\tau$ rather than the largest singular value. Since the largest eigenvalues of $(\tilde A-\tau I)^{-1}$ are the eigenvalues of $\tilde A$ closest to $\tau$, and the SVD problem of $A$ is equivalent to the eigenproblem of $\tilde A$, we can use the shift-and-invert technique on $\tilde A-\tau I$ to compute the interior singular triplets, as in the shift-and-invert Arnoldi method ({\sf svds}). In this paper, we assume that $M$ and $N$ are large and that $A$ cannot be factorized. The shift-and-invert technique needs a factorization of $\tilde A-\tau I$. Since $M$ and $N$ are large, $M+N$, the dimension of $\tilde A-\tau I$, is even larger, so we cannot afford any factorization of $\tilde A-\tau I$. Therefore, the shift-and-invert technique is not suitable for interior SVD problems. Another approach for computing interior singular triplets is the harmonic projection method. The harmonic projection method has been widely used to compute interior eigenpairs, see \cite{morgan2000,morgan2006}, and has been combined with Lanczos bidiagonalization methods to compute smallest singular triplets \cite{baglama2005,baglama2006,kokio2004,jianiu2003}. However, if we use the harmonic projection method explicitly on $\tilde A-\tau I$, the scale of the problem increases, which leads to increased computational cost. Further, this ignores the special structure of $\tilde A$ or $\tilde A-\tau I$, and the projected matrix and the update process of implicit restarting may lose this structure. Therefore, we must use the harmonic projection method implicitly.
Until now, no literature has appeared on computing interior singular triplets by the harmonic projection method implicitly. In this paper, we propose a harmonic Lanczos bidiagonalization method for computing interior singular triplets by combining the harmonic projection technique with the Lanczos bidiagonalization process. We analyze the convergence behavior and show that the harmonic Ritz approximations converge to the desired interior singular triplets if the inverse of a certain Rayleigh quotient matrix is uniformly bounded and the harmonic Ritz values are well separated. Then, based on Morgan's harmonic shift strategy \cite{morgan2006} for computing interior eigenvalues, we give a selection of the shifts within the framework of the implicitly restarted harmonic Lanczos bidiagonalization method. Further, we report some numerical experiments on the computation of interior singular triplets. It appears that the proposed algorithm is suitable for computing the interior singular triplets of large matrices. Throughout this paper, denote by $||\cdot||$ the spectral norm of a matrix and the vector 2-norm, by ${\cal K}_m(C,v_1)=span\{v_1,Cv_1,\cdots,C^{m-1}v_1\}$ the $m$-dimensional Krylov subspace generated by the matrix $C$ and the starting vector $v_1$, by superscript '$^{\rm T}$' the transpose of a matrix or vector, and by $e_m$ the $m$-th coordinate vector of dimension $m$. \section{Harmonic Lanczos bidiagonalization method} \subsection{Lanczos bidiagonalization process} Golub et al. \cite{golub1981} proposed a Lanczos bidiagonalization method to compute the largest singular triplets of $A$. This method is equivalent to the symmetric Lanczos method on $\tilde A$ with a special initial vector.
It is based on the Lanczos bidiagonalization process, which is shown in matrix form as follows: \begin{eqnarray} AQ_m=P_mB_m, \label{bid1}\\ A^{\rm T}P_m=Q_mB_m^{\rm T}+\beta_m q_{m+1}e_m^{\rm T}, \label{bid2} \end{eqnarray} where \begin{equation} B_m=\left ( \begin{array} {c c c c} \alpha_1 & \beta_1 & & \\ & \alpha_2 & \ddots & \\ & & \ddots & \beta_{m-1} \\ & & & \alpha_m \end{array} \right) \end{equation} is an upper bidiagonal matrix, and $Q_m=(q_1,q_2,\ldots,q_m)$ and $P_m=(p_1,p_2,\ldots,p_m)$ span the Krylov subspaces ${\cal K}_m(A^{\rm T}A,q_1)$ and ${\cal K}_m(AA^{\rm T},p_1)$, respectively. In finite precision arithmetic, the columns of $P_m$ and $Q_m$ may lose orthogonality rapidly and must be reorthogonalized. From the analysis of Simon and Zha \cite{simonzha2000}, we know that only the columns of one of the matrices $P_m$ and $Q_m$ need to be reorthogonalized. When $M\gg N$, reorthogonalizing the columns of $Q_m$ only reduces the computational cost considerably, so we perform reorthogonalization on $Q_m$ only. \subsection{Harmonic Lanczos bidiagonalization method} Consider the subspace \begin{equation} {\cal E}=span\left \{ \left ( \begin{array}{cc} P_m & 0\\ 0 & Q_m \end{array} \right ) \right \}. \end{equation} Making use of the harmonic projection principle, we compute some approximate eigenpairs $(\theta_i,\tilde \varphi_i)$ of $\tilde A$ nearest $\tau$ by requiring \begin{equation} \left \{ \begin{array}{c} \tilde \varphi_i\in {\cal E},\\ (\tilde A-\theta_i I)\tilde \varphi_i \bot (\tilde A - \tau I){\cal E}. \end{array} \right .
\label{harmproj} \end{equation} From (\ref{bid1}) and (\ref{bid2}), (\ref{harmproj}) can be rewritten as the following generalized eigenproblem: \begin{equation} \left ( \begin{array}{cc} -\tau I & B_m \\ B_m^{\rm T} & -\tau I \end{array} \right ) \left ( x_i \atop y_i \right )=\frac{1}{\theta_i-\tau} \left ( \begin{array}{cc} \tau^2 I+B_mB_m^{\rm T}+\beta_m^2e_me_m^{\rm T} & -2\tau B_m\\ -2\tau B_m^{\rm T} & \tau^2 I+B_m^{\rm T}B_m \end{array} \right ) \left ( x_i \atop y_i \right ). \end{equation} Assume that $\theta_i>0, i=1,2,\cdots,k+l$, which are sorted by $$|\theta_1-\tau | \le |\theta_2-\tau | \le \cdots \le |\theta_{k+l}-\tau |$$ and $\theta_i<0,i=k+l+1,k+l+2,\cdots,2m$. We can use $\theta_i,i=1,2,\cdots,k$ and $\tilde \varphi_i=\left (P_mx_i \atop Q_my_i \right )$ as approximations of the desired eigenpairs of $\tilde A$. Because of the relation between the singular triplets of $A$ and the eigenpairs of $\tilde A$, we use $\theta_i, \tilde u_i=P_mx_i/||x_i||=P_m\tilde x_i, \tilde v_i=Q_my_i/||y_i||=Q_m\tilde y_i,i=1,2,\cdots,k$ as the approximate singular triplets of $A$ nearest $\tau$. Here we call $\theta_i, \tilde u_i, \tilde v_i$ the harmonic Ritz value, the left and right harmonic Ritz vector, respectively. From (\ref{bid1}) and (\ref{bid2}), we have $$||A\tilde v_i-\theta_i\tilde u_i||=||B_m\tilde y_i-\theta_i\tilde x_i||,$$ $$ ||A^{\rm T}\tilde u_i-\theta_i\tilde v_i||=\sqrt{||B_m^{\rm T}\tilde x_i-\theta_i\tilde y_i||^2+\beta_m^2|e_m^{\rm T}\tilde x_i|^2}.$$ Therefore, if \begin{equation}\label{stop} \sqrt{||B_m\tilde y_i-\theta_i\tilde x_i||^2+||B_m^{\rm T}\tilde x_i-\theta_i\tilde y_i||^2+\beta_m^2|e_m^{\rm T}\tilde x_i|^2}<tol, \end{equation} where $tol$ is a prescribed tolerance, then the approximate singular triplet is regarded as converged. So we need not form $\tilde u_i$ and $\tilde v_i$ explicitly before convergence.
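For illustration, the projected generalized eigenproblem above can be solved with dense linear algebra once $B_m$ and $\beta_m$ are available. The following sketch (an illustration, not the implementation used in the experiments) forms $\tilde B$ and $\tilde C$ explicitly; the last-column correction is taken as $\beta_m^2 e_m e_m^{\rm T}$, the squared residual norm that the cross product $E^{\rm T}(\tilde A-\tau I)^{\rm T}(\tilde A-\tau I)E$ works out to:

```python
import numpy as np
import scipy.linalg as sla

def harmonic_ritz_values(B, beta_m, tau):
    """Harmonic Ritz values theta extracted from the bidiagonal B_m and
    residual norm beta_m: solve Btilde z = (1/(theta - tau)) Ctilde z,
    where Btilde is symmetric and Ctilde is symmetric positive definite
    (generically), so the pencil eigenvalues lam = 1/(theta - tau) are real."""
    m = B.shape[0]
    I = np.eye(m)
    E = np.zeros((m, m))
    E[-1, -1] = beta_m ** 2                      # beta_m^2 e_m e_m^T
    Btilde = np.block([[-tau * I, B],
                       [B.T, -tau * I]])
    Ctilde = np.block([[tau ** 2 * I + B @ B.T + E, -2 * tau * B],
                       [-2 * tau * B.T, tau ** 2 * I + B.T @ B]])
    lam = sla.eigh(Btilde, Ctilde, eigvals_only=True)
    return tau + 1.0 / lam
```

With $\beta_m=0$ the subspaces are exactly invariant, and the returned values reduce to $\pm$ the singular values of $B_m$, which gives a simple consistency check of the formulation.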
\subsection{Convergence analysis} Set $$\tilde B=\left ( \begin{array}{cc} -\tau I & B_m \\ B_m^{\rm T} & -\tau I \end{array} \right )$$ and $$\tilde C=\left ( \begin{array}{cc} \tau^2 I+B_mB_m^{\rm T}+\beta_m^2e_me_m^{\rm T} & -2\tau B_m\\ -2\tau B_m^{\rm T} & \tau^2 I+B_m^{\rm T}B_m \end{array} \right ),$$ then $\theta_i, i=1, 2, \cdots, 2m$ are the eigenvalues of $\tilde B^{-1}\tilde C$. The matrix $\tilde B$ is called the Rayleigh quotient matrix of $\tilde A$ with respect to the subspace ${\cal E}$ and the target $\tau$. The following results follow directly from Theorem 2.1, Corollary 2.2 and Theorem 3.2 of \cite{jia2005}. \begin{theorem}\label{Thvalue} Assume that $(\sigma,u,v)$ is a singular triplet of $A$, and let $\epsilon=\sin\angle\left (\left (u\atop v \right ),{\cal E}\right )$ denote the distance between the vector $\left (u\atop v \right )$ and the subspace ${\cal E}$. Then there exists a perturbation matrix $F$ such that $\sigma$ is an exact eigenvalue of $\tilde B^{-1}\tilde C+F$, where \begin{equation} ||F||\le\frac{\epsilon}{\sqrt{1-\epsilon^2}}||\tilde B^{-1}||(\sigma||A||+||A||^2). \end{equation} Furthermore, there exists an eigenvalue $\theta$ of $\tilde B^{-1}\tilde C$ satisfying \begin{equation} |\theta-\sigma|\leq(2||A||+||F||)||F||. \end{equation} \end{theorem} Theorem \ref{Thvalue} shows that if $\epsilon$ tends to zero and if $||\tilde B^{-1}||$ is uniformly bounded, then there exists a harmonic Ritz value $\theta$ converging to the desired singular value $\sigma$. However, from the interlacing theorem of eigenvalues \cite{golub1996}, since $$\tilde B=\left ( \begin{array}{cc} P_m & 0\\ 0 & Q_m \end{array} \right )^{\rm T} (\tilde A-\tau I) \left ( \begin{array}{cc} P_m & 0\\ 0 & Q_m \end{array} \right ),$$ we have that the eigenvalues of $\tilde B$ lie between the smallest and largest eigenvalues of $\tilde A -\tau I$. Therefore, $\tilde B$ may be singular, which leads to arbitrarily large $\|\tilde B^{-1}\|$.
Hence, we must assume that $\|\tilde B^{-1}\|$ is uniformly bounded. In fact, this is an inherent defect of harmonic projection methods, as can easily be seen from Jia's analysis \cite{jia2005}. Similar to the analysis in \cite{jia2005}, if $\tau$ is very close to a desired singular value $\sigma$ of $A$, then the method may miss it. We replace $\theta_i$ by the Rayleigh quotient $\rho_i=\tilde u_i^{\rm T}A\tilde v_i=\tilde x_i^{\rm T}B_m\tilde y_i$ as the approximate singular value, as was done in \cite{hochstenbach2004,jianiu2009}. In general, $\rho_i$ is more accurate than $\theta_i$. \begin{theorem}\label{Thvector} With $(\sigma,u,v)$ and $\epsilon$ as in Theorem \ref{Thvalue}, let $(\theta,z)$ be an eigenpair of $\tilde B^{-1}\tilde C$, where $z=\left (x \atop y\right )$, and let $(z,Z_\bot)$ be orthogonal such that \begin{equation} \left (z^{\rm T} \atop Z_\bot^{\rm T} \right )\tilde B^{-1}\tilde C(z,Z_\bot)= \left (\begin{array}{cc} \theta & g^{\rm T} \\ 0 & G \end{array} \right ). \end{equation} If \begin{equation} sep(\theta,G)=||(G-\theta I)^{-1}||^{-1}>0, \end{equation} then \begin{eqnarray} \sin \angle \left (\left (u \atop v \right ),\left ( \tilde u \atop \tilde v \right )\right ) & \le &\left (1+\frac{2||\tilde B^{-1}||||A||}{\sqrt{1-\epsilon^2}sep(\sigma,G)} \right )\epsilon \nonumber \\ &\le & \left (1+\frac{2||\tilde B^{-1}||||A||}{\sqrt{1-\epsilon^2}(sep(\theta,G)-|\sigma-\theta|)} \right )\epsilon. \end{eqnarray} \end{theorem} Theorem \ref{Thvector} shows that if $\|\tilde B^{-1}\|$ is uniformly bounded and $sep(\theta,G)$ is bounded below by a positive constant, that is, all harmonic Ritz values are well separated, then the harmonic Ritz vectors $\tilde u,\tilde v$ converge to the desired left and right singular vectors. \section{Implicit restarting technique, shift selection and an adaptive shifting strategy} \subsection{Implicit restarting technique} Due to the storage requirement and the computational cost, the number of Lanczos bidiagonalization steps $m$ cannot be large.
However, for a relatively small $m$, the approximate singular triplets may not converge. Therefore, the method generally must be restarted. The implicit restarting technique proposed by Sorensen \cite{sorensen1992} is a powerful restarting tool for the Lanczos and Arnoldi processes, and has been adapted to the Lanczos bidiagonalization process \cite{bjorck1994,jianiu2003,jianiu2009,kokio2004,larsen}. After running $p$ steps of the implicit QR iteration on $B_m$ with the shifts $\mu_j,j=1,2,\cdots,p$, we have \begin{equation} \left \{ \begin{array}{l} (B_m^{\rm T}B_m-\mu_1^2I)\cdots(B_m^{\rm T}B_m-\mu_p^2I)=\tilde P R,\\ \tilde P^{\rm T} B_m \tilde Q \mbox{ is upper bidiagonal}, \end{array} \right . \end{equation} where $\tilde P, \tilde Q$ are the products of the left and right Givens rotation matrices applied to $B_m$. Performing the above process gives the following relations: \begin{eqnarray} AQ_{m-p}^+&=&P_{m-p}^+B_{m-p}^+,\\ A^{\rm T}P_{m-p}^+&=&Q_{m-p}^+{B_{m-p}^+}^{\rm T}+(\beta_{m-p}\tilde p_{m,m-p} q_{m+1}+\beta_{m-p}^+ q_{m-p+1}^+)e_{m-p}^{\rm T}, \end{eqnarray} where $Q_{m-p}^+$ and $q_{m-p+1}^+$ are the first $m-p$ columns and the $(m-p+1)$-th column of $Q_m\tilde Q$, respectively, $P_{m-p}^+$ is the first $m-p$ columns of $P_m\tilde P$, $B_{m-p}^+$ is the leading $(m-p)\times (m-p)$ block of $\tilde P B_m \tilde Q$, and $\tilde p_{m,m-p}$ is the $(m,m-p)$ element of $\tilde P$. Since $\beta_{m-p}\tilde p_{m,m-p}q_{m+1}+\beta_{m-p}^+ q_{m-p+1}^+$ is orthogonal to $Q_{m-p}^+$, we obtain an $(m-p)$-step Lanczos bidiagonalization process starting with $q_1^+$, where \begin{equation}\label{initq} \gamma q_1^+=\prod_{j=1}^p(A^{\rm T}A-\mu_j^2 I)q_1 \end{equation} with $\gamma$ a factor making $\|q_1^+\|=1$. It is then extended to the $m$-step Lanczos bidiagonalization process in a standard way.
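In exact arithmetic, the updated starting vector (\ref{initq}) can also be formed by applying the filter polynomial explicitly; the implicit QR procedure above is the numerically preferred equivalent. A sketch (the helper name is ours, for illustration only):

```python
import numpy as np

def filtered_start(A, q1, shifts):
    """Apply prod_j (A^T A - mu_j^2 I) to q1 and normalize, giving the
    starting vector that the implicit restart produces in exact arithmetic."""
    q = q1 / np.linalg.norm(q1)
    for mu in shifts:
        q = A.T @ (A @ q) - mu ** 2 * q
        q /= np.linalg.norm(q)   # renormalize each step to avoid over/underflow
    return q
```

If a shift equals a singular value $\sigma_j$ exactly, the component of $q_1^+$ along the corresponding right singular vector is annihilated, which is precisely why shifts close to desired singular values are harmful.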
\subsection{Shift selection and an adaptive shifting strategy} Once the shifts $\mu_1,\mu_2,\ldots,\mu_p$ are given, we can run the implicitly restarted algorithm described above iteratively. The success of the implicit restarting technique heavily depends on the selection of the shifts. As is shown in \cite{jianiu2003}, it can easily be seen from (\ref{initq}) that the more accurately the shifts approximate some unwanted singular values, the more information on the unwanted singular vectors is damped out after restarting. Therefore, the resulting subspace contains more information on the desired singular vectors, and the algorithm may converge faster. For eigenproblems and SVD problems, Morgan \cite{morgan2006} and Kokiopoulou et al. \cite{kokio2004} suggested using the unwanted harmonic Ritz values as shifts. A natural choice of the shifts within our algorithm is the unwanted approximate singular values $\theta_{k+j},j=1,2,\cdots,l$, since they are the best approximations available to some of the unwanted singular values within our framework. From (\ref{initq}), we see that the component along the desired $k$-th right singular vector $v_k$ is greatly damped if a shift $\mu_i$ is very close to $\sigma_k$, so such a $\mu_i$ is a bad shift and $\rho_k$ may converge to $\sigma_k$ very slowly or not at all. To correct this problem, Larsen \cite{larsen} proposed an adaptive strategy for computing the largest singular triplets, replacing a bad shift by a zero shift. Jia and Niu \cite{jianiu2003,jianiu2009} gave a modified form for computing the smallest singular triplets. Define the relative gaps between $\rho_k$ and the shifts $\mu_i,i=1,2,\cdots,l$ by \begin{equation} {\rm relgap}_{ki}=\left |\frac{(\rho_k-\varepsilon_k)-\mu_i}{\rho_k}\right |, \label{harmrelgap} \end{equation} where $\varepsilon_k$ is the residual norm (\ref{stop}). If ${\rm relgap}_{ki}\leq 10^{-3}$, then $\mu_i$ is regarded as a bad shift and should be replaced by a suitable quantity.
They replace the bad shifts by the largest or the smallest approximate singular value when computing the smallest or the largest singular triplets, respectively. In this paper, a good strategy is to replace the bad shifts by the approximate singular value farthest from $\tau$, as this strategy amplifies the components of $q_1^+$ in $v_i,i=1,2,\cdots,k$ and damps those in $v_i,i=k+1,k+2,\cdots,N$. \section{Numerical Experiments} Numerical experiments are carried out using Matlab 7.1 R14 on an Intel Core 2 E6320 with a 1.86 GHz CPU and 2 GB of memory under the Windows XP operating system. Machine epsilon is $\epsilon_{\rm mach}\approx 2.22\times 10^{-16}$. The stopping criterion is \begin{equation} stopcrit=\max_{1 \leq i \leq k}\sqrt{\|A\tilde v_i-\rho_i \tilde u_i\|^2+\|A^{\rm T}\tilde u_i-\rho_i\tilde v_i\|^2}. \end{equation} If \begin{equation} \frac{stopcrit} {\|A\|_1}<tol, \label{stopcrit} \end{equation} then we stop. From (\ref{stop}), we need not form $\tilde u_i, \tilde v_i$ explicitly before convergence. For large eigenproblems, in order to speed up convergence, most implicitly restarted Krylov subspace algorithms, such as ARPACK ({\sf eigs}), compute $k+3$ approximate eigenpairs when $k$ eigenpairs are desired. This strategy has been adopted for SVD problems, see \cite{baglama2005,jianiu2009}. In this paper, we also compute $k+3$ approximate singular triplets and use $l-3$ shifts in the implicit restarting process. All test matrices are from \cite{bai1997}. We take $tol=10^{-6}$. In all the tables, '$iter$' denotes the number of restarts, '$time$' denotes the CPU time in seconds, and '$mv$' denotes the number of matrix-vector products. Since the number of matrix-vector products with $A$ equals that with $A^{\rm T}$, we only count the matrix-vector products with $A$. \subsection{Computation of smallest singular triplets} Obviously, we can compute some smallest singular triplets by taking $\tau=0$.
We compute the three singular triplets of WELL1850 nearest $\tau=0, 0.01, 0.005, 0.001$, respectively. In every case, these are the three smallest singular values. The three computed singular values are $$\sigma_1 \approx 1.611969e-002, \sigma_2 \approx 1.911309e-002, \sigma_3 \approx 2.315889e-002.$$ Table \ref{tab1} reports the computational results. Fig. \ref{fig1} plots the absolute residual norms of the computed singular triplets for $m=15$ and $m=20$, respectively. From Table \ref{tab1} and Fig. \ref{fig1}, we see that for all $\tau$, our algorithm can compute the three singular triplets accurately. However, for different $\tau$, the algorithm differs greatly in the number of restarts, matrix-vector products and CPU time. This phenomenon shows that a good choice of the target $\tau$ can speed up convergence considerably. \begin{table}[htp] \caption{WELL1850 for $k=3$, $m=10,15,20,25$, $\tau=0,0.001,0.005,0.01$} \label{tab1} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline $m$ & $iter$ & $time$ & $mv$ & $stopcrit$ & $iter$ & $time$ & $mv$ & $stopcrit$ \\ \hline &\multicolumn{4} {|c|} {$\tau=0$}&\multicolumn{4} {|c|} {$\tau=0.001$} \\ \hline 10 & 543 & 5.15 & 2178 & 1.67e-005 & 178 & 1.89 & 718 & 1.66e-005 \\ \hline 15 & 119 & 3.20 & 1077 & 1.67e-005 & 74 & 1.99 & 672 & 1.25e-005 \\ \hline 20 & 56 & 3.48 & 790 & 1.50e-005 & 48 & 2.81 & 678 & 1.62e-005 \\ \hline 25 & 35 & 3.20 & 671 & 1.35e-005 & 35 & 3.64 & 671 & 1.14e-005 \\ \hline &\multicolumn{4} {|c|} {$\tau=0.005$}&\multicolumn{4} {|c|} {$\tau=0.01$} \\ \hline 10 & 160 & 1.66 & 646 & 1.66e-005 & 179 & 1.76 & 722 & 1.60e-005 \\ \hline 15 & 68 & 1.86 & 618 & 1.62e-005 & 64 & 1.73 & 582 & 1.52e-005 \\ \hline 20 & 39 & 2.31 & 552 & 1.08e-005 & 37 & 2.23 & 524 & 1.45e-005 \\ \hline 25 & 31 & 3.10 & 595 & 1.27e-005 & 29 & 2.89 & 557 & 1.06e-005 \\ \hline \end{tabular} \end{table} \begin{figure*} \caption{Absolute residual norms for WELL1850 for $k=3$, $m=20$, $\tau=0,0.01,0.005,0.001$} \label{fig1} \end{figure*}
\subsection{Computation of three interior singular triplets nearest different $\tau$} The test matrix is DW2048, a $2048 \times 2048$ matrix. We compute three singular triplets nearest different $\tau$. The computational results are shown in Tables \ref{tab2}-\ref{tab3}. From Table \ref{tab2}, we see that the relative errors of the computed singular values are at most of order $10^{-9}$. These tables demonstrate that our algorithm can compute the desired singular triplets accurately. \begin{table}[htp] \caption{Three computed singular values of DW2048 nearest $\tau=0.2, 0.5, 0.6, 0.8$ for $m=50$}\label{tab2} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2} {|c|} {$\tau=0.2$} & \multicolumn{2} {|c|} {$\tau=0.5$} \\ \hline $\rho_j$ & $|\rho_j-\sigma_j|/\sigma_j$ & $\rho_j$ & $|\rho_j-\sigma_j|/\sigma_j$ \\ \hline 2.0031301e-001 & 1.55e-014 & 4.9933773e-001 & 1.62e-14 \\ \hline 1.9939880e-001 & 5.90e-014 & 5.0082218e-001 & 1.04e-12 \\ \hline 1.9813769e-001 & 1.08e-009 & 4.9764898e-001 & 8.76e-11 \\ \hline \multicolumn{2} {|c|} {$\tau=0.6$} & \multicolumn{2} {|c|} {$\tau=0.8$} \\ \hline $\rho_j$ & $|\rho_j-\sigma_j|/\sigma_j$ & $\rho_j$ & $|\rho_j-\sigma_j|/\sigma_j$ \\ \hline 6.0106012e-001 & 4.29e-12 & 8.0014466e-001 & 2.41e-12 \\ \hline 6.0193472e-001 & 4.22e-11 & 7.9954438e-001 & 5.46e-12 \\ \hline 5.9689466e-001 & 2.40e-13 & 7.9932106e-001 & 1.08e-10 \\ \hline \end{tabular} \end{table} \begin{table}[htp] \caption{DW2048 for $k=3$, $m=30, 40, 50$, $\tau=0.2, 0.5, 0.6, 0.8$} \label{tab3} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline &\multicolumn{4} {|c|} {$\tau=0.2$}&\multicolumn{4} {|c|} {$\tau=0.5$} \\ \hline $m$ & $iter$ & $time$ & $mv$ & $stopcrit$ & $iter$ & $time$ & $mv$ & $stopcrit$ \\ \hline 30 & 501 & 109 & 11655 & 9.97e-007 & 255 & 51.2 & 6123 & 9.93e-007 \\ \hline 40 & 298 & 113 & 9993 & 9.81e-007 & 97 & 38.2 & 3271 & 9.90e-007 \\ \hline 50 & 221 & 136 & 9652 & 9.85e-007 & 83 & 50.9 & 3656 & 9.10e-007 \\ \hline &\multicolumn{4} {|c|}
{$\tau=0.6$}&\multicolumn{4} {|c|} {$\tau=0.8$}\\ \hline $m$ & $iter$ & $time$ & $mv$ & $stopcrit$ & $iter$ & $time$ & $mv$ & $stopcrit$ \\ \hline 30 & 125 & 24.6 & 3006 & 9.11e-007 & 405 & 78.2 & 9525 & 9.87e-007 \\ \hline 40 & 69 & 27.0 & 2343 & 9.47e-007 & 180 & 70.4 & 6079 & 9.64e-007 \\ \hline 50 & 46 & 28.1 & 2012 & 8.61e-007 & 136 & 81.8 & 5989 & 9.50e-007 \\ \hline \end{tabular} \end{table} \subsection{Computation of interior singular triplets for different $k$} We compute the $k=1,3,5,10$ singular triplets of LSHP2233, a $2233 \times 2233$ matrix, nearest $\tau=4.5$. Tables \ref{tab4} and \ref{tab5} report the results. We see that our algorithm can compute the desired singular triplets with high precision. \begin{table}[htp] \caption{Ten computed singular values of LSHP2233 nearest $\tau=4.5$ for $m=50$} \label{tab4} \begin{tabular}{|c|c|c|c|} \hline $\rho_1$ & $|\rho_1-\sigma_1|/\sigma_1$ & $\rho_2$ & $|\rho_2-\sigma_2|/\sigma_2$ \\ \hline 4.4988631 & 1.58e-15 & 4.5091282 & 1.36e-14 \\ \hline $\rho_3$ & $|\rho_3-\sigma_3|/\sigma_3$ & $\rho_4$ & $|\rho_4-\sigma_4|/\sigma_4$ \\ \hline 4.5113859 & 1.22e-14 & 4.4815289 & 6.54e-15 \\ \hline $\rho_5$ & $|\rho_5-\sigma_5|/\sigma_5$ & $\rho_6$ & $|\rho_6-\sigma_6|/\sigma_6$ \\ \hline 4.5188882 & 5.11e-15 & 4.5210494 & 1.18e-14 \\ \hline $\rho_7$ & $|\rho_7-\sigma_7|/\sigma_7$ & $\rho_8$ & $|\rho_8-\sigma_8|/\sigma_8$ \\ \hline 4.4783693 & 1.07e-14 & 4.4716358 & 8.74e-15 \\ \hline $\rho_9$ & $|\rho_9-\sigma_9|/\sigma_9$ & $\rho_{10}$ & $|\rho_{10}-\sigma_{10}|/\sigma_{10}$ \\ \hline 4.5331457 & 5.68e-15 & 4.4638926 & 2.03e-10 \\ \hline \end{tabular} \end{table} \begin{table}[htp] \caption{LSHP2233 for $k=1, 3, 5, 10$, $m=30, 40 , 50$, $\tau=4.5$} \label{tab5} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline &\multicolumn{4} {|c|} {$k=1$}&\multicolumn{4} {|c|} {$k=3$} \\ \hline $m$ & $iter$ & $time$ & $mv$ & $stopcrit$ & $iter$ & $time$ & $mv$ & $stopcrit$ \\ \hline 30 & 467 & 108 & 11920 & 6.88e-006 & 560 & 127 & 13171 &
6.95e-006 \\ \hline 40 & 190 & 83.6 & 6844 & 6.86e-006 & 230 & 98.1 & 7826 & 6.79e-006 \\ \hline 50 & 159 & 114 & 7188 & 6.63e-006 & 216 & 140 & 9404 & 6.66e-006 \\ \hline &\multicolumn{4} {|c|} {$k=5$}&\multicolumn{4} {|c|} {$k=10$} \\ \hline $m$ & $iter$ & $time$ & $mv$ & $stopcrit$ & $iter$ & $time$ & $mv$ & $stopcrit$ \\ \hline 30 & 322 & 64.8 & 6972 & 6.99e-006 & 651 & 103 & 10761 & 6.91e-006 \\ \hline 40 & 207 & 78.5 & 6632 & 6.78e-006 & 168 & 61.0 & 4548 & 4.70e-006 \\ \hline 50 & 132 & 83.8 & 5487 & 6.40e-006 & 165 & 92.4 & 6003 & 6.99e-006 \\ \hline \end{tabular} \end{table} \section{Conclusion} In this paper, combining the harmonic projection principle with the implicit restarting technique, we propose an implicitly restarted harmonic Lanczos bidiagonalization algorithm for computing some interior singular triplets. Based on Morgan's harmonic shift strategy for computing interior eigenpairs, we give a selection of the shifts within our algorithm. Numerical experiments show that our algorithm is suitable for interior SVD problems, and the interior singular values can be computed with high relative precision. The Matlab code can be obtained from the authors upon request. \end{document}
\begin{definition}[Definition:Classical Algorithm/Multiplication] Let $u = \sqbrk {u_{m - 1} u_{m - 2} \dotsm u_1 u_0}_b$ and $v = \sqbrk {v_{n - 1} v_{n - 2} \dotsm v_1 v_0}_b$ be $m$-digit and $n$-digit integers respectively. The '''classical multiplication algorithm''' forms their $m + n$-digit product $u v$: :$w = \sqbrk {w_{m + n - 1} w_{m + n - 2} \dotsm w_1 w_0}_b$ The steps are: :$(\text M 1): \quad$ Initialise: ::::Set $w_{m - 1}, w_{m - 2}, \dotsc, w_1, w_0 := 0$ ::::Set $j := 0$ :$(\text M 2): \quad$ Is $v_j = 0$? ::::If so, go to Step $(\text M 6)$. :$(\text M 3): \quad$ Initialise loop: ::::Set $i := 0$ ::::Set $k := 0$ :$(\text M 4): \quad$ Multiply and Add: ::::Set $t := u_i \times v_j + w_{i + j} + k$ ::::Set $w_{i + j} := t \pmod b$ ::::Set $k := \floor {\dfrac t b}$ :$(\text M 5): \quad$ Loop on $i$: ::::Increase $i$ by $1$. ::::If $i < m$, go back to Step $(\text M 4)$. ::::Otherwise, set $w_{j + m} := k$ :$(\text M 6): \quad$ Loop on $j$: ::::Increase $j$ by $1$. ::::If $j < n$, go back to Step $(\text M 2)$. ::::Otherwise exit. \end{definition}
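A direct transcription of steps $(\text M 1)$–$(\text M 6)$ into Python (a sketch of ours, with digits stored least-significant first as $u_0, u_1, \dotsc$) may make the control flow easier to follow:

```python
def classical_multiply(u, v, b):
    """Multiply digit lists u (m digits) and v (n digits), least-significant
    digit first, in base b, following steps (M1)-(M6)."""
    m, n = len(u), len(v)
    w = [0] * (m + n)                        # (M1) initialise all product digits
    for j in range(n):                       # (M6) loop on j
        if v[j] == 0:                        # (M2) skip zero multiplier digits
            continue
        k = 0                                # (M3) initialise carry
        for i in range(m):                   # (M5) loop on i
            t = u[i] * v[j] + w[i + j] + k   # (M4) multiply and add
            w[i + j] = t % b
            k = t // b
        w[j + m] = k                         # store the final carry
    return w
```

For example, with $u = 1234$ and $v = 678$ in base $10$, the digit lists are $[4,3,2,1]$ and $[8,7,6]$, and the algorithm returns $[2,5,6,6,3,8,0]$, that is, $836652$.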
Mat. Sb. (N.S.), 1969, Volume 78(120), Number 4, Pages 611–632 (Mi msb3573) Integral representations of functions holomorphic in strictly pseudo-convex domains and some applications G. M. Henkin Mathematics of the USSR-Sbornik, 1969, 7:4, 597–616 UDC: 517.551 MSC: 32A26, 32T05, 32A10 Citation: G. M. Henkin, "Integral representations of functions holomorphic in strictly pseudo-convex domains and some applications", Mat. Sb. (N.S.), 78(120):4 (1969), 611–632; Math. USSR-Sb., 7:4 (1969), 597–616 http://mi.mathnet.ru/eng/msb3573 http://mi.mathnet.ru/eng/msb/v120/i4/p611 A. I. Petrosyan, "Uniform approximation of functions by polynomials on Weil polyhedra", Math. USSR-Izv., 4:6 (1970), 1250–1271 G. M. Henkin, "Integral representation of functions in strictly pseudoconvex domains and applications to the $\overline\partial$-problem", Math. USSR-Sb., 11:2 (1970), 273–281 A. V. Romanov, G. M. Henkin, "Exact Hölder estimates for the solutions of the $\bar\partial$-equation", Math. USSR-Izv., 5:5 (1971), 1180–1192 B. S. Mityagin, G. M. Henkin, "Linear problems of complex analysis", Russian Math. Surveys, 26:4 (1971), 99–164 P. L. Polyakov, "O banakhovykh kogomologiyakh rassloennykh prostranstv", UMN, 26:4(160) (1971), 243–244 F. Reese Harvey, R. O. Wells, "Holomorphic approximation and hyperfunction theory on aC 1 totally real submanifold of a complex manifold", Math Ann, 197:4 (1972), 287 G. M. Henkin, "Continuation of bounded holomorphic functions from submanifolds in general position to strictly pseudoconvex domains", Math. USSR-Izv., 6:3 (1972), 536–563 P. L.
# Overview of linear algebra and matrices

Linear algebra is a branch of mathematics that deals with linear equations and their representations using matrices and vectors. Matrices are rectangular arrays of numbers that represent linear equations and their solutions. They are used in various fields such as physics, engineering, and data analysis.

This section covers:

- The concept of a matrix and its properties
- The different types of matrices, such as square, singular, and non-square matrices
- Matrix operations, such as addition, subtraction, and multiplication
- Applications of matrices in solving linear equations and systems

Consider the following matrix:

$$
A = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$

This matrix is the coefficient matrix of the linear system of equations:

$$
\begin{cases}
x + 2y = 1 \\
3x + 4y = 2
\end{cases}
$$

## Exercise

Consider the following matrix:

$$
B = \begin{bmatrix}
2 & 3 \\
4 & 5
\end{bmatrix}
$$

What is the product of matrix A and matrix B?

To find the product of matrices A and B, you can use the following formula:

$$
AB = \begin{bmatrix}
a_{11}b_{11} + a_{12}b_{21} & a_{11}b_{12} + a_{12}b_{22} \\
a_{21}b_{11} + a_{22}b_{21} & a_{21}b_{12} + a_{22}b_{22}
\end{bmatrix}
$$

## Exercise

Find the product of matrices A and B.

# Matrix operations and their applications

This section covers:

- Matrix addition and subtraction
- Scalar multiplication and division
- Matrix transposition
- Matrix inversion

Consider the following matrices:

$$
A = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$

$$
B = \begin{bmatrix}
5 & 6 \\
7 & 8
\end{bmatrix}
$$

What is the sum of matrices A and B?

## Exercise

Find the sum of matrices A and B.

To find the sum of matrices A and B, you can simply add the corresponding elements:

$$
A + B = \begin{bmatrix}
1+5 & 2+6 \\
3+7 & 4+8
\end{bmatrix}
= \begin{bmatrix}
6 & 8 \\
10 & 12
\end{bmatrix}
$$

# Introduction to the NumPy library

To get started with NumPy, you first need to install it.
You can do this using pip:

```
pip install numpy
```

Once you have installed NumPy, you can import it into your Python script:

```python
import numpy as np
```

This section covers:

- Creating arrays and matrices
- Basic mathematical operations
- Indexing and slicing
- Linear algebra functions

## Exercise

Create a NumPy array of size 3x3 and fill it with the numbers 1 to 9.

# Introduction to the SciPy library

To get started with SciPy, you first need to install it. You can do this using pip:

```
pip install scipy
```

Once you have installed SciPy, you can import it into your Python script:

```python
import scipy as sp
```

This section covers:

- Solving linear systems of equations
- Numerical integration and differentiation
- Optimization and root-finding
- Linear algebra functions

## Exercise

Solve the following linear system of equations using SciPy:

$$
\begin{cases}
x + y = 3 \\
2x + 3y = 10
\end{cases}
$$

# Applying the inversion method using NumPy and SciPy

This section covers:

- The inversion method and its limitations
- Using NumPy to invert matrices
- Using SciPy to invert matrices

Consider the following matrix:

$$
A = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$

What is the inverse of matrix A?

## Exercise

Find the inverse of matrix A using NumPy.

# Inverting matrices using the NumPy and SciPy libraries

This section covers:

- The process of matrix inversion
- Handling singular and non-square matrices
- Comparing the inversion method to other methods for solving linear systems

Consider the following matrix:

$$
A = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$

What is the inverse of matrix A?

## Exercise

Find the inverse of matrix A using NumPy.
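A sketch of one way to answer the exercise above with NumPy's `np.linalg.inv`:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# A is invertible because det(A) = 1*4 - 2*3 = -2, which is nonzero.
A_inv = np.linalg.inv(A)
# A_inv is [[-2, 1], [1.5, -0.5]]

# Sanity check: A @ A_inv should be (numerically) the identity matrix.
print(np.allclose(A @ A_inv, np.eye(2)))  # True
```

`scipy.linalg.inv` offers the same interface and could be used here interchangeably.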
# Solving linear systems of equations using the inversion method

This section covers:

- Applying the inversion method to solve linear systems
- Comparing the inversion method to other methods for solving linear systems
- Applications of the inversion method in practical problems

Consider the following matrix:

$$
A = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$

What is the solution of the linear system of equations represented by matrix A?

## Exercise

Solve the linear system of equations represented by matrix A using the inversion method.

# Applications of the inversion method in practical problems

This section covers:

- Applying the inversion method to solve linear systems in physics
- Applying the inversion method to solve linear systems in engineering
- Applying the inversion method to solve linear systems in data analysis

Consider the following matrix:

$$
A = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$

What is the solution of the linear system of equations represented by matrix A?

## Exercise

Solve the linear system of equations represented by matrix A using the inversion method.

# Handling singular and non-square matrices

This section covers:

- The concept of singular matrices
- Handling singular matrices in the inversion method
- The concept of non-square matrices
- Handling non-square matrices in the inversion method

Consider the following matrix:

$$
A = \begin{bmatrix}
1 & 2 & 3 \\
4 & 5 & 6
\end{bmatrix}
$$

What is the solution of the linear system of equations represented by matrix A?

## Exercise

Solve the linear system of equations represented by matrix A using the inversion method.
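A sketch of how these cases might be handled in NumPy. The right-hand side vectors `b` and `c` below are illustrative assumptions, since the exercises give only coefficient matrices (`b` is taken from the system at the start of the chapter):

```python
import numpy as np

# Square, invertible case: solve A x = b via the inversion method.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([1.0, 2.0])
x = np.linalg.inv(A) @ b
print(x)  # [0.  0.5]

# Singular case: the inverse does not exist, so np.linalg.inv raises.
S = np.array([[1.0, 2.0],
              [2.0, 4.0]])  # rows are linearly dependent, det(S) = 0
try:
    np.linalg.inv(S)
except np.linalg.LinAlgError:
    print("S is singular, no inverse")

# Non-square case: a 2x3 system is underdetermined; the Moore-Penrose
# pseudoinverse gives the minimum-norm solution instead of an inverse.
C = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
c = np.array([1.0, 1.0])  # assumed right-hand side
x_min = np.linalg.pinv(C) @ c
print(np.allclose(C @ x_min, c))  # True
```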
# Comparing the inversion method to other methods for solving linear systems

This section covers:

- The advantages and disadvantages of the inversion method
- Comparing the inversion method to methods such as Gaussian elimination and Cramer's rule
- Applications of other methods in practical problems

Consider the following matrix:

$$
A = \begin{bmatrix}
1 & 2 \\
3 & 4
\end{bmatrix}
$$

What is the solution of the linear system of equations represented by matrix A?

## Exercise

Solve the linear system of equations represented by matrix A using the inversion method.

## Exercise

Compare the inversion method to other methods for solving linear systems.
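One possible starting point for the comparison exercise. The right-hand side `b = (1, 2)` is borrowed from the system at the start of the chapter; `np.linalg.solve` uses an LU factorization (Gaussian elimination with partial pivoting) rather than forming the inverse, and `scipy.linalg.solve` offers the same interface:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([1.0, 2.0])

# Inversion method: explicitly form A^{-1}, then multiply.
x_inv = np.linalg.inv(A) @ b

# Gaussian elimination (LU factorization): no explicit inverse needed.
# This is generally cheaper and more numerically stable.
x_ge = np.linalg.solve(A, b)

# Cramer's rule: x_i = det(A_i) / det(A), where A_i is A with
# column i replaced by b. Only practical for very small systems.
d = np.linalg.det(A)
x_cramer = np.array([
    np.linalg.det(np.column_stack([b, A[:, 1]])) / d,
    np.linalg.det(np.column_stack([A[:, 0], b])) / d,
])

# All three methods agree (up to rounding) on this well-conditioned system:
# each result is close to [0, 0.5].
print(x_inv, x_ge, x_cramer)
```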
The arithmetic progressions $\{2, 5, 8, 11, \ldots\}$ and $\{3, 10, 17, 24, \ldots \}$ have some common values. What is the largest value less than 500 that they have in common?

Let $a$ be a common term. We know that \begin{align*} a&\equiv 2\pmod 3\\ a&\equiv 3\pmod 7 \end{align*} The first congruence means that there exists a non-negative integer $n$ such that $a=2+3n$. Substituting this into the second congruence yields \[2+3n\equiv 3\pmod 7\implies n\equiv 5\pmod 7\] So $n$ has a lower bound of $5$. Then $n\ge 5\implies a=2+3n\ge 17$. Since $17$ satisfies the original congruences, it is the smallest common term.

Subtracting $17$ from both sides of both congruences gives \begin{align*} a-17&\equiv -15\equiv 0\pmod 3\\ a-17&\equiv -14\equiv 0\pmod 7 \end{align*} Since $\gcd(3,7)=1$, we get $a-17\equiv 0\pmod{3\cdot 7}$, that is, $a\equiv 17\pmod{21}$. So all common terms must be of the form $17+21m$ for some non-negative integer $m$. Conversely, any number of the form $17+21m$ also satisfies the original congruences. The largest such number less than $500$ is $17+21\cdot 22=\boxed{479}$, since the next term, $17+21\cdot 23=500$, is not less than $500$.
Michael Viscardi

Michael Anthony Viscardi (born February 22, 1989 in Plano, Texas) of San Diego, California is an American mathematician who, as a high schooler, won the 2005 Siemens Competition and Davidson Fellowship with a mathematical project on the Dirichlet problem, whose applications include describing the flow of heat across a metal surface, winning $100,000 and $50,000 in scholarships, respectively.[1][2] Viscardi's theorem is an expansion of the 19th-century work of Peter Gustav Lejeune Dirichlet.[3] He was also named a finalist with the same project in the Intel Science Talent Search. Viscardi placed Best of Category in Mathematics at the International Science and Engineering Fair (ISEF) in May 2006. Viscardi also qualified for the United States of America Mathematical Olympiad and the Junior Science and Humanities Symposium.

Michael Viscardi
Born: (1989-02-22) February 22, 1989, Plano, Texas, United States
Nationality: American
Alma mater: Harvard University; Massachusetts Institute of Technology
Known for: Siemens Competition winner
Awards: 2010 Hoopes Prize
Scientific career
Fields: Mathematics
Doctoral advisor: Roman Bezrukavnikov
Other academic advisors: Shing-Tung Yau, Joe Harris

Life

Viscardi was homeschooled for high school, supplemented with mathematics classes at the University of California, San Diego.[4][5] He is also a pianist and violinist, and onetime concertmaster of the San Diego Youth Symphony.[5] Viscardi is a member of the Harvard College class of 2010.[6] He graduated summa cum laude from Harvard, receiving a 2010 Thomas T.
Hoopes, Class of 1919, Prize, and earning the 2011 Morgan Prize honorable mention for his senior thesis "Alternate Compactifications of the Moduli Space of Genus One Maps".[7] He worked as a postdoc at UC Berkeley from 2016 to 2018.[8] Selected publication • ———; Ebenfelt, Peter (2007), "An Explicit Solution to the Dirichlet Problem with Rational Holomorphic Data in Terms of a Riemann Mapping", Computational Methods and Function Theory, 7 (1): 127–140, doi:10.1007/BF03321636, S2CID 120812150. References 1. Briggs, Tracey Wong (December 5, 2005), "Problems no problem for Siemens winners", USA Today. 2. Rodriguez, Juan-Carlos (December 6, 2005), "California teen wins science competition", Seattle Times. 3. "Teen Updates 19th-Century Mathematical Law", ABC News, December 9, 2005. 4. "Homeschooled teen wins top science honor", Associated Press, 2005 5. "Mathematics Student Wins the Siemens-Westinghouse Competition", FOCUS, Mathematical Association of America, vol. 26, no. 1, p. 3, January 2006 6. Herchel Smith Research Fellows to begin this summer 7. Viscardi, Michael (2010). "Alternate compactifications of the moduli space of genus one maps". arXiv:1005.1431 [math.AG]. 8. Viscardi's webpage at Berkeley External links • Viscardi's website at MIT • Michael Viscardi: Person of the Week • Michael's Presentation • Biography at Davidson Institute site Authority control: Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
\begin{document} \begin{abstract} The intrinsic volumes of a convex body are fundamental invariants that capture information about the average volume of the projection of the convex body onto a random subspace of fixed dimension. The intrinsic volumes also play a central role in integral geometry formulas that describe how moving convex bodies interact. Recent work has demonstrated that the sequence of intrinsic volumes concentrates sharply around its centroid, which is called the central intrinsic volume. The purpose of this paper is to derive finer concentration inequalities for the intrinsic volumes and related sequences. These concentration results have striking implications for high-dimensional integral geometry. In particular, they uncover new phase transitions in formulas for random projections, rotation means, random slicing, and the kinematic formula. In each case, the location of the phase transition is determined by reducing each convex body to a single summary parameter. \end{abstract} \maketitle \section{Introduction} In his 1733 treatise~\cite{leclerc1733essai}, \emph{Essai d'arithm{\'e}tique morale}, Georges-Louis Leclerc, Comte de Buffon, initiated the field of geometric probability. The arithmetic is ``moral'' because it assesses the degree of hope or fear that an uncertain event should instill, or---more concretely---whether a game of chance is fair. In a discussion of the game \emph{franc-carreau}, Buffon calculated the probability that a needle, dropped at random on a plank floor, touches more than one plank (Figure~\ref{fig:buffon-schematic}). In 1860, Barbier~\cite{barbier1860note} gave a visionary solution to Buffon's needle problem, based on invariance properties of the random model. Advanced further by the insight of Crofton~\cite{Cro89:Probability}, this approach evolved into the field of integral geometry. See the elegant book~\cite{KR97:Introduction-Geometric} of Klain \& Rota for an introduction to the subject. 
\begin{figure} \caption{\textbf{Buffon's Needle.} What is the probability that a needle, dropped at random on a plank floor, touches more than one plank? The needle is less likely to cross a boundary when it is short (needle $L_1$), relative to the width $D$ of a plank. A long needle ($L_2$) is more likely to cross a boundary.} \label{fig:buffon-schematic} \end{figure} Let us contrast two different ways of describing the solution to Buffon's problem. First, we can write the intersection probability $p(L, D)$ exactly in terms of the length $L$ of the needle and the width $D$ of the planks: \begin{equation}\label{eqn:buffon-exact}\tag{A} p(L,D) = \frac{2}{\pi} \cdot \begin{cases} L/D, & L\leq D;\\ \arccos\left(D/L\right) + (L/D) \big(1 - \sqrt{1 - ( D/L )^2 } \big), & L\geq D. \end{cases} \end{equation} As a consequence, a needle of length $L = \pi D/4$ is equally likely to touch one plank or two. Even without the precise result, it is clear that very short needles are unlikely to touch two planks, while very long needles are likely to do so. We can capture this intuition more vividly with an alternative formula. For each $\alpha \in [0, 0.36]$, \begin{equation} \label{eqn:buffon-approx}\tag{B} \begin{aligned} L/D &\leq \phantom{1-{}}(2/\pi) \cdot \alpha &\text{implies}&& p(L,D) &\leq \alpha ;\\ L/D &\geq (1-2/\pi) \cdot \alpha^{-1} &\text{implies}&& p(L,D) &\geq 1-\alpha. \end{aligned} \end{equation} This result states that there are two complementary regimes for the length of the needle. Depending on which case is active, the probability that the needle crosses a boundary is either very small or very large. In between, there is a wide transition region where neither condition is in force. A major concern of integral geometry is to identify expressions of Type~\eqref{eqn:buffon-exact} for more involved problems. This program has led to a large corpus of exact results, written in terms of geometric invariants, such as intrinsic volumes or quermassintegrals. 
Unfortunately, the classic formulas are difficult to interpret or to instantiate, except in the simplest cases (e.g., problems in dimension two or three). See~\cite{SW08:Stochastic-Integral,Sch14:Convex-Bodies} for a comprehensive account of this theory. Meanwhile, the field of asymptotic convex geometry operates in high dimensions. The main goal of this activity is to discover qualitative phenomena, involving interpretable geometric quantities, more in the spirit of Type~\eqref{eqn:buffon-approx}. On the other hand, the analytic and probabilistic methods that are common in this research often lead to statements that include unspecified constants. See~\cite{artstein2015asymptotic} for a recent textbook on asymptotic convex geometry. This paper establishes precise---but interpretable---approximations of the famous Kubota, Crofton, rotation mean, and kinematic formulas, which describe how moving convex sets interact. Our theorems can be viewed as the Type~\eqref{eqn:buffon-approx} counterparts of results that have only been expressed in Type~\eqref{eqn:buffon-exact} formulations. The main ingredient is a set of novel measure concentration results for the intrinsic volumes of a convex body. The proofs combine methodology from integral geometry and from asymptotic convex geometry. The most intriguing new insight from our analysis is that each formula splits into two complementary regimes, where the probability of a geometric event is either negligible or overwhelming. Our approach identifies the exact location of the change-point between the two regimes in terms of a simple summary parameter that we can (in principle) calculate. Moreover, in high dimensions, the width of the transition region becomes so narrow that we can interpret the results as \emph{sharp phase transitions} for problems in Euclidean integral geometry. \section{Weighted Intrinsic Volumes} This section begins with basic concepts from convex geometry. 
Then we introduce the intrinsic volumes, which are canonical measures of the content of a convex body. Afterward, we describe different ways to reweight the intrinsic volumes to make integral geometry formulas more transparent. Sections~\ref{sec:main-conc} and~\ref{sec:main-phase} present the main results of the paper. \subsection{Notation} This paper uses standard concepts from convexity; see~\cite{Roc70:Convex-Analysis,Bar02:Course-Convexity,SW08:Stochastic-Integral,Sch14:Convex-Bodies}. The appendices contain background on invariant measures (Appendix~\ref{sec:invariant}) and integral geometry (Appendices~\ref{sec:elements} and~\ref{sec:formulas}). Here are the basics. We work in the Euclidean space $\mathbbm{R}^n$, equipped with the usual inner product $\ip{\cdot}{\cdot}$ and norm $\norm{\cdot}$. A \emph{convex body} is a compact, convex subset of $\mathbbm{R}^n$. The empty set $\emptyset$ is also a convex body. The \emph{Cartesian product} of two sets $\set{S}, \set{T} \subseteq \mathbbm{R}^n$ is the set $\set{S} \times \set{T} := \{ (\vct{x}, \vct{y}) \in \mathbbm{R}^{2n} : \vct{x} \in \set{S}, \vct{y} \in \set{T} \}$. The \emph{Minkowski sum} is the set $\set{S} + \set{T} := \{ \vct{x} + \vct{y} \colon \vct{x} \in \set{S}, \vct{y} \in \set{T} \}$. For a scalar $\lambda \in \mathbbm{R}$, we can form the \emph{dilation} $\lambda \set{S} := \{ \lambda \vct{s} : \vct{s} \in \set{S} \}$. Given a linear subspace $\set{L} \subseteq \mathbbm{R}^n$, with orthogonal complement $\set{L}^\perp$, define the \emph{orthogonal projection} $\set{S}|\set{L} := \{ \vct{x} \in \set{L} : \set{S} \cap (\vct{x} + \set{L}^\perp) \neq \emptyset \}$. Unless otherwise stated, the term \emph{subspace} always means a linear subspace. The \emph{dimension} of a nonempty convex set $\set{K} \subseteq \mathbbm{R}^n$ is the dimension of its \emph{affine hull}, the smallest affine space that contains $\set{K}$. 
When $\dim(\set{K}) = i$, the $i$-dimensional volume $\mathrm{V}_i(\set{K})$ is the Lebesgue measure of $\set{K}$, relative to its affine hull. When $\set{K}$ is $0$-dimensional (i.e., a single point), we set $\mathrm{V}_0(\set{K}) = 1$. The notation $\ball{n} := \{ \vct{x} \in \mathbbm{R}^n : \norm{\vct{x}} \leq 1 \}$ is reserved for the Euclidean unit ball. We write $\kappa_n$ for the volume and $\omega_n$ for the surface area of $\ball{n}$. They satisfy the formulas \begin{align} \label{eqn:ball-sphere} \kappa_n &:= \mathrm{V}_n(\ball{n}) = \frac{\pi^{n/2}}{\Gamma(1 + n/2)} &\text{and}&& \omega_n &:= n \kappa_n = \frac{2\pi^{n/2}}{\Gamma(n/2)}. \end{align} As usual, $\Gamma$ denotes the gamma function. We instate the convention from combinatorics that a sequence, indexed by integers, is automatically extended with zero outside its support. For example, $\kappa_{i} = 0$ for integers $i < 0$, and the binomial coefficient $\binom{n}{i} = 0$ for integers $i < 0$ and $i > n$. The operator $\mathbbm{P}$ computes the probability of an event, while $\operatorname{\mathbb{E}}$ returns the expectation of a random variable. The symbol $\sim$ means ``has the distribution.'' We sometimes use infix notation for the minimum ($\wedge$) and maximum ($\vee$) of two numbers. \begin{figure} \caption{\textbf{Steiner's Formula.} In two dimensions, the sum of a convex body $\set{K}$ and a scaled disc $\lambda \ball{}$ is a quadratic polynomial in the radius $\lambda$. The coefficients depend on the area and the perimeter of $\set{K}$. In higher dimensions, the polynomial expansion involves the intrinsic volumes of the convex body.} \label{fig:steiner} \end{figure} \subsection{Intrinsic Volumes} \label{sec:intvol-def} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body. 
\emph{Steiner's formula} states that the volume of the sum of $\set{K}$ and a scaled Euclidean ball $\lambda \ball{n}$ can be expressed as a polynomial in the radius $\lambda$ of the ball: \begin{equation}\label{eqn:steiner-intro} \mathrm{V}_n(\set{K}+\lambda \ball{n}) = \sum_{i=0}^n \lambda^{n-i} \kappa_{n-i} \cdot \mathrm{V}_i(\set{K}) \quad\text{for all $\lambda \geq 0$.} \end{equation} See Figure~\ref{fig:steiner} for an illustration. The coefficients $\mathrm{V}_i(\set{K})$ are called \emph{intrinsic volumes} of the convex body, and the Steiner formula~\eqref{eqn:steiner-intro} serves as their definition. For the empty set, $\mathrm{V}_i(\emptyset) = 0$ for every $i$. The intrinsic volumes are normalized to be independent of the ambient dimension in which the convex body is embedded, so we do not need to specify the dimension in the notation. In particular, when $\dim(\set{K}) = i$, the $i$th intrinsic volume $\mathrm{V}_i(\set{K})$ coincides with the $i$-dimensional Lebesgue measure of the set, so the definitions are consistent. Intrinsic volumes share many properties of the ordinary volume (Appendix~\ref{sec:intvol-properties}). In particular, the intrinsic volumes are nonnegative, they increase with respect to set inclusion, and they are invariant under rigid motions (i.e., translations, rotations, and reflections). Furthermore, the $i$th intrinsic volume $\mathrm{V}_i$ is positive-homogeneous of degree $i$. That is, $\mathrm{V}_i(\lambda \set{K}) = \lambda^i \,\mathrm{V}_i(\set{K})$ for all $\lambda \geq 0$. Several of the intrinsic volumes have familiar interpretations: $\mathrm{V}_{n-1}(\set{K})$ is half the $(n-1)$-dimensional \emph{surface area}, and $\mathrm{V}_1(\set{K})$ is proportional to the \emph{mean width}, with a factor depending on $n$. The \emph{Euler characteristic} $\mathrm{V}_0(\set{K})$ indicates whether the convex body is nonempty. 
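To make the expansion concrete, take $n = 2$ and $\set{K}$ nonempty. Then~\eqref{eqn:steiner-intro} reads
\begin{equation*}
\mathrm{V}_2(\set{K} + \lambda \ball{2})
	= \mathrm{V}_2(\set{K}) + 2 \lambda \, \mathrm{V}_1(\set{K}) + \pi \lambda^2
	\quad\text{for all $\lambda \geq 0$,}
\end{equation*}
because $\kappa_0 = 1$, $\kappa_1 = 2$, $\kappa_2 = \pi$, and $\mathrm{V}_0(\set{K}) = 1$. Since $\mathrm{V}_1(\set{K})$ is half the perimeter of a planar convex body, this is precisely the quadratic expansion of Figure~\ref{fig:steiner}: the area of $\set{K}$, plus the perimeter times $\lambda$, plus the area of the disc $\lambda \ball{2}$.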
Figure~\ref{fig:steiner} suggests that the $i$th intrinsic volume should reflect the $i$-dimensional content of the body. Indeed, Kubota's formula (Fact~\ref{fact:projection-intvol}) shows that $\mathrm{V}_i(\set{K})$ is proportional to the average $i$-dimensional volume of the projections of $\set{K}$ onto $i$-dimensional subspaces. Crofton's formula (Fact~\ref{fact:slicing-intvol}) gives a dual representation in terms of the $(n-i)$-dimensional affine slices of the convex body. Finally, we introduce the \emph{total intrinsic volume}, also known as the \emph{Wills functional}~\cite{Wil73:Gitterpunktanzahl,Had75:Willssche}: \begin{equation} \label{eqn:wills} \mathrm{W}(\set{K}) := \sum_{i=0}^n \mathrm{V}_i(\set{K}). \end{equation} The total intrinsic volume reflects contributions to the content of the convex body from all dimensions. It also allows us to compare the size of convex bodies that have different dimensions. The total intrinsic volume functional~\eqref{eqn:wills} is nonnegative, monotone with respect to set inclusion, and invariant under rigid motions. Obviously, the total intrinsic volume $\mathrm{W}(\set{K})$ is comparable with the maximum intrinsic volume, $\max_i \mathrm{V}_i(\set{K})$, up to a dimensional factor, $\dim( \set{K} )$. Better estimates follow from concentration of intrinsic volumes (Theorem~\ref{thm:intvol-conc}). \subsection{Weighted Intrinsic Volumes} \label{sec:wvol} For applications in integral geometry, Nijenhuis~\cite{nijenhuis1974chern} recognized that it is valuable to reweight the intrinsic volume sequence; see also~\cite[pp.~176--177]{SW08:Stochastic-Integral}. Our work builds on this idea, but we use different normalizations. The correct choice of weights depends on which transformation group we are considering, either rotations or rigid motions. \begin{definition}[Weighted Intrinsic Volumes] \label{def:wintvol} Let $\set{K} \subset \mathbbm{R}^n$ be a convex body in dimension $n$.
For indices $i = 0,1,2, \dots,n$, the \emph{rotation volumes} $\mathring{\intvol}_i(\set{K})$ and the \emph{total rotation volume} $\mathring{\wills}(\set{K})$ are the numbers \begin{align} \mathring{\intvol}_i(\set{K}) &:= \frac{\omega_{n+1}}{\omega_{i+1}} \mathrm{V}_{n-i}(\set{K}) &\text{and}&& \mathring{\wills}(\set{K}) &:= \sum_{i=0}^n \mathring{\intvol}_i(\set{K}). \label{eqn:rotvol-def-intro} \intertext{The \emph{rigid motion volumes} $\bar{\intvol}_i(\set{K})$ and the \emph{total rigid motion volume} $\bar{\wills}(\set{K})$ are} \bar{\intvol}_i(\set{K}) &:= \frac{\omega_{n+1}}{\omega_{i+1}} \mathrm{V}_i(\set{K}) &\text{and}&& \bar{\wills}(\set{K}) &:= \sum_{i=0}^n \bar{\intvol}_i(\set{K}). \label{eqn:rmvol-def-intro} \end{align} The notation---a ring for rotations and a bar for rigid motions---is intended to be mnemonic. \end{definition} Evidently, the rotation volumes and the rigid motion volumes depend on the dimension $n$; they are \emph{not} intrinsic. In the case of rigid motion volumes, this is merely because of the dispensable factor $\omega_{n+1}$, which is included to simplify some expressions. Nevertheless, to lighten notation, we only specify the ambient dimension when it is required for clarity. Otherwise, the rotation volumes and rigid motion volumes enjoy the same basic properties as the intrinsic volumes (Appendix~\ref{sec:intvol-properties}). In particular, they are nonnegative, monotone, and rigid motion invariant. The weighted intrinsic volumes also inherit homogeneity properties from the intrinsic volumes. Note, however, that the $i$th rotation volume $\mathring{\intvol}_i$ is homogeneous of degree $n - i$ because its indexing reverses that of the intrinsic volumes. As we will see, the core formulas of integral geometry simplify dramatically when they are written using the rotation volumes and the rigid motion volumes. We believe that the elegance of the resulting phase transitions justifies these unusual normalizations. 
\subsection{Relationship with Intrinsic Volumes} There are many ways to connect intrinsic volumes with rotation volumes and rigid motion volumes. In particular, we note two integral formulas that relate the total volumes. For a convex body $\set{K} \subset \mathbbm{R}^n$, \begin{equation}\label{eqn:wills-as-moments} \mathring{\wills}(\set{K}) = \omega_{n+1}\int_0^\infty \mathrm{W}(s^{-1} \set{K}) \, s^n \mathrm{e}^{-\pi s^2} \idiff{s} \quad\text{and}\quad \bar{\wills}(\set{K}) = \omega_{n+1}\int_0^\infty \mathrm{W}(s\set{K}) \, \mathrm{e}^{-\pi s^2} \idiff{s}. \end{equation} The identities~\eqref{eqn:wills-as-moments} are obtained from Definition~\ref{def:wintvol} and the formula~\eqref{eqn:ball-sphere} for the surface area of a ball by expanding the gamma function in $1/\omega_{i+1}$ using the Euler integral. In words, the total rotation volume and the total rigid motion volume reflect how the total intrinsic volume varies with dilation. \subsection{Characteristic Polynomials} As with many other sequences, it can be useful to summarize the volumes using a generating function. For a convex body $\set{K} \subset \mathbbm{R}^n$ and a variable $t \in \mathbbm{R}$, define the characteristic polynomials of the (weighted) intrinsic volumes: \begin{align} \chi_{\set{K}}(t) &:= \sum_{i=0}^n t^{n-i} \, \mathrm{V}_i(\set{K}) = t^n \, \mathrm{W}(t^{-1}\set{K}); \label{eqn:intvol-charpoly} \\ \mathring{\chi}_{\set{K}}(t) &:= \sum_{i=0}^n t^{n-i} \,\mathring{\intvol}_i(\set{K}) = \mathring{\wills}(t \set{K}); \label{eqn:rotvol-charpoly} \\ \bar{\chi}_{\set{K}}(t) &:= \sum_{i=0}^n t^{n-i}\, \bar{\intvol}_i(\set{K}) = t^n \,\bar{\wills}(t^{-1} \set{K}). \label{eqn:rmvol-charpoly} \end{align} On a couple occasions, we will use these polynomials to streamline computations. The main results of this paper can also be framed in terms of characteristic polynomials. For brevity, we will not pursue this observation. 
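The Euler integral computation behind~\eqref{eqn:wills-as-moments}, which reappears in the examples below, is the elementary Gaussian moment identity
\begin{equation*}
\int_0^\infty s^{m} \, \mathrm{e}^{-\pi s^2} \idiff{s}
	= \frac{\Gamma((m+1)/2)}{2 \pi^{(m+1)/2}}
	= \frac{1}{\omega_{m+1}}
	\quad\text{for each integer $m \geq 0$,}
\end{equation*}
which follows from~\eqref{eqn:ball-sphere} after the substitution $u = \pi s^2$. Expanding the total intrinsic volume in~\eqref{eqn:wills-as-moments} by homogeneity and applying this identity with $m = i$ recovers, in both cases, the weight $\omega_{n+1}/\omega_{i+1}$ from Definition~\ref{def:wintvol}.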
\subsection{Examples} It is usually quite difficult to compute all of the intrinsic volumes of a convex body, but we can provide explicit formulas in some simple cases. The discussion in this section supports our numerical work, but it is tangential to the main arc of development. \begin{example}[Euclidean Ball: Intrinsic Volumes] \label{ex:ball} The intrinsic volumes of the scaled Euclidean ball $\lambda \ball{n}$ take the form \begin{equation} \label{eqn:intvol-ball} \mathrm{V}_i(\lambda \ball{n}) = \binom{n}{i} \frac{\kappa_n}{\kappa_{n-i}} \cdot \lambda^i \quad\text{for all $\lambda \geq 0$.} \end{equation} The relation~\eqref{eqn:intvol-ball} is an easy consequence of the Steiner formula~\eqref{eqn:steiner-intro}. The total intrinsic volume~\eqref{eqn:wills} of the ball satisfies \begin{equation} \label{eqn:ball-wills} \mathrm{W}(\lambda\ball{n}) = \kappa_n \lambda^n + \omega_n \int_0^\infty (\lambda+s)^{n-1} \,\mathrm{e}^{-\pi s^2} \idiff{s}. \end{equation} This expression can be derived by expanding the gamma function in the factor $1/\kappa_{n-i}$ as an Euler integral; it also follows from the integral representation in Corollary~\ref{cor:intvol-metric}. While this integral cannot be evaluated in closed form, it can be approximated using Laplace's method or numerical quadrature. \end{example} \begin{example}[Euclidean Ball: Rigid Motion Volumes]\label{ex:ball-rm} By definition~\eqref{eqn:rmvol-def-intro}, the rigid motion volumes of the scaled Euclidean ball $\lambda \ball{n}$ satisfy \begin{equation*} \frac{\bar{\intvol}_i(\lambda\ball{n})}{\omega_{n+1}}= \binom{n}{i}\frac{\kappa_n}{\kappa_{n-i}\omega_{i+1}} \cdot \lambda^i. \end{equation*} After some gymnastics with gamma functions, this expression converts into a form that is reminiscent of a binomial distribution or a beta distribution. 
To obtain a compact expression for the total rigid motion volume, use the integral representation~\eqref{eqn:wills-as-moments}, the total intrinsic volume computation~\eqref{eqn:ball-wills}, and the volume computations~\eqref{eqn:ball-sphere} to arrive at \begin{equation} \label{eqn:rmvol-ball-integral} \bar{\wills}(\lambda \ball{n}) = \omega_n \left[\frac{\lambda^n}{n}+\int_0^{\pi/2} (\lambda \sin\theta + \cos\theta )^{n-1} \idiff{\theta} \right]. \end{equation} If desired, one may invoke Laplace's method to obtain an asymptotic approximation for the total rigid motion volume. \end{example} \begin{example}[Parallelotopes: Intrinsic Volumes] \label{ex:ptope} Given a vector $\vct{\lambda} := (\lambda_1, \dots, \lambda_n)$ of nonnegative side lengths, we construct the parallelotope $\set{R}_{\vct{\lambda}} := [0,\lambda_1] \times \cdots \times [0, \lambda_n] \subset \mathbbm{R}^n$. Its intrinsic volumes are \begin{equation*} \label{eqn:ptope-intvol} \mathrm{V}_i(\set{R}_{\vct{\lambda}}) = e_i(\vct{\lambda}) \quad\text{for $i = 0, 1, 2, \dots, n$.} \end{equation*} We have written $e_i$ for the $i$th elementary symmetric polynomial. Thus, the total intrinsic volume is $$ \mathrm{W}(\set{R}_{\vct{\lambda}}) = \prod_{i=1}^n (1 + \lambda_i). $$ These formulas follow from an expression (Fact~\ref{fact:product-intvol}) for the intrinsic volumes of a Cartesian product. \end{example} \begin{example}[Scaled Cube: Weighted Intrinsic Volumes] \label{ex:cube} Let $\set{Q}^n := [0,1]^n \subset \mathbbm{R}^n$ be the standard cube. For each $\lambda \geq 0$ and each index $i = 0,1,2, \dots, n$, \begin{equation} \label{eqn:intvol-cube} \mathrm{V}_i(\lambda \set{Q}^n) = \binom{n}{i} \cdot \lambda^i \quad\text{and}\quad \mathrm{W}(\lambda \set{Q}^n) = (1+\lambda)^n \quad\text{for all $\lambda \geq 0$.} \end{equation} This result specializes Example~\ref{ex:ptope}. 
It is also straightforward to determine the weighted intrinsic volumes of a scaled cube: \begin{align*} \mathring{\intvol}_i(\lambda \set{Q}^n) &= \binom{n+1}{n-i} \frac{\kappa_{n+1}}{\kappa_{i+1}} \cdot \lambda^{n-i} = \mathrm{V}_{n-i}(\lambda \ball{n+1}); \\ \bar{\intvol}_i(\lambda \set{Q}^n) &= \binom{n+1}{n-i} \frac{\kappa_{n+1}}{\kappa_{i+1}} \cdot \lambda^i = \lambda^{n} \, \mathrm{V}_{n-i}(\lambda^{-1} \ball{n+1}). \end{align*} These calculations follow from Definition~\ref{def:wintvol}, the homogeneity properties of the intrinsic volumes, and the calculations~\eqref{eqn:intvol-ball} and~\eqref{eqn:intvol-cube}. In view of~\eqref{eqn:ball-wills}, the total volumes satisfy \begin{align} \mathring{\wills}(\lambda \set{Q}^n) &= \mathrm{W}(\lambda \ball{n+1})-\mathrm{V}_{n+1}(\lambda \ball{n+1}) = \omega_{n+1}\int_0^\infty (\lambda+s)^{n} \,\mathrm{e}^{-\pi s^2} \idiff{s}; \label{eqn:cube-rotwills} \\ \bar{\wills}(\lambda \set{Q}^n) &= \lambda^{n} \cdot \big[\mathrm{W}(\lambda^{-1} \ball{n+1})-\mathrm{V}_{n+1}(\lambda^{-1} \ball{n+1}) \big] = \omega_{n+1}\int_0^\infty (1+\lambda s)^{n} \,\mathrm{e}^{-\pi s^2} \idiff{s}. \label{eqn:cube-rmwills} \end{align} These identities also follow from~\eqref{eqn:wills-as-moments}. \end{example} \section{Main Results I: Concentration} \label{sec:main-conc} With our coauthors Michael McCoy, Ivan Nourdin, and Giovanni Peccati, we recently established the surprising fact that the sequence of intrinsic volumes concentrates sharply around its centroid~\cite[Thm.~1.11]{LMNPT20:Concentration-Euclidean}. The main technical achievement of this paper is a set of concentration properties for the rotation volumes and the rigid motion volumes. In Section~\ref{sec:main-phase}, we present phase transition formulas that follow from these concentration results. 
\subsection{Weighted Intrinsic Volume Random Variables} It is convenient to construct random variables that reflect the shape of the sequences of weighted intrinsic volumes (Definition~\ref{def:wintvol}). This approach gives us access to the language and methods of probability theory. \begin{definition}[Weighted Intrinsic Volume Random Variables] \label{def:wintvol-rv} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body. The \emph{rotation volume random variable} $\mathring{I}_{\set{K}}$ and the \emph{rigid motion volume random variable} $\bar{I}_{\set{K}}$ take values in $\{0,1,2,\dots,n\}$ according to the distributions \begin{align} \label{eqn:wvol-rv-def} \Prob{ \mathring{I}_{\set{K}} = n - i } &= \frac{\mathring{\intvol}_i(\set{K})}{\mathring{\wills}(\set{K})} &\text{and}&& \Prob{ \bar{I}_{\set{K}} = n - i } &= \frac{\bar{\intvol}_i(\set{K})}{\bar{\wills}(\set{K})}. \end{align} The \emph{intrinsic volume random variable} $I_{\set{K}}$ is defined analogously. Note that each random variable reverses the indexing of the corresponding sequence. \end{definition} Figure~\ref{fig:intvoldist} illustrates the distribution of the weighted intrinsic volume random variables for several convex bodies. The picture indicates that each of the random variables is concentrated. In each case, the point of concentration is the expectation of the random variable, which merits its own terminology. \begin{definition}[Central Volumes] \label{def:central-vols} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body. 
The \emph{central rotation volume} $\mathring{\delta}(\set{K})$ is the expectation of the rotation volume random variable $\mathring{I}_{\set{K}}$, while the \emph{central rigid motion volume} $\bar{\delta}(\set{K})$ is the expectation of the rigid motion random variable $\bar{I}_{\set{K}}$: \begin{align*} \label{eqn:central-volumes} \mathring{\delta}(\set{K}) &:= \operatorname{\mathbb{E}} \mathring{I}_{\set{K}} = \frac{1}{\mathring{\wills}(\set{K})} \sum_{i=0}^{n} (n-i) \, \mathring{\intvol}_i(\set{K}) = \frac{\diff{}}{\diff{t}}\Big\vert_{t=1} \log \mathring{\chi}_{\set{K}}(t); \\ \bar{\delta}(\set{K}) &:= \operatorname{\mathbb{E}} \bar{I}_{\set{K}} = \frac{1}{\bar{\wills}(\set{K})} \sum_{i=0}^{n} (n-i) \, \bar{\intvol}_i(\set{K}) = \frac{\diff{}}{\diff{t}}\Big\vert_{t=1} \log \bar{\chi}_{\set{K}}(t). \end{align*} The central intrinsic volume $\delta(\set{K})$ is defined analogously. The characteristic polynomials $\mathring{\chi}$ and $\bar{\chi}$ are given in~\eqref{eqn:rotvol-charpoly} and~\eqref{eqn:rmvol-charpoly}. \end{definition} Each of the central volumes lies in the interval $[0, n]$. If we rescale a fixed body $\set{K}$, the central rotation volume $\mathring{\delta}(\lambda \set{K})$ increases monotonically from $0$ to $n$ as the scale parameter $\lambda$ increases from $0$ to $\infty$. Oppositely, the central rigid motion volume $\bar{\delta}(\lambda \set{K})$ decreases monotonically from $n$ to $0$. This feature may be confusing, but it reflects a duality between the two notions of volume. \begin{figure} \caption{\textbf{Weighted Intrinsic Volume Random Variables.} The distribution of the weighted intrinsic volume random variables (Definition~\ref{def:wintvol-rv}) for scaled balls and scaled cubes in $\mathbbm{R}^{32}$, calculated using Examples~\ref{ex:ball}, ~\ref{ex:ball-rm}, and~\ref{ex:cube}. The horizontal axis is the value of the random variable; the vertical axis is the {natural logarithm} of the probability that it takes that value. 
} \label{fig:intvoldist} \end{figure} \begin{example}[Scaled Cube: Central Rotation Volume]\label{ex:scaled-cube-delta} From the discussion in Example~\ref{ex:cube}, we can derive expressions for the central rotation volumes of the scaled cube: \begin{equation*} \mathring{\delta}(\lambda \set{Q}^n) = \frac{\diff{}}{\diff{t}}\Big\vert_{t=1} \log \mathring{\chi}_{\lambda \set{Q}^n}(t) = \lambda n \cdot \frac{\int_0^\infty (\lambda+s)^{n-1} \,\mathrm{e}^{-\pi s^2} \idiff{s}}{\int_0^\infty (\lambda+s)^{n} \,\mathrm{e}^{-\pi s^2} \idiff{s}}. \end{equation*} We have used the expression~\eqref{eqn:cube-rotwills} for the total rotation volume. This formula can be evaluated asymptotically in an appropriate parameter regime. For each $\zeta > 0$, \begin{equation}\label{eqn:central-scaled-cube} \frac{\mathring{\delta}\big(\zeta \sqrt{2n/\pi} \, \set{Q}^n \big)}{n} \longrightarrow \frac{2}{1+\sqrt{1+\zeta^{-2}}} \quad\text{as $n\to \infty$.} \end{equation} The statement~\eqref{eqn:central-scaled-cube} follows from a routine application of Laplace's method. There is a related expression for the asymptotics of the central rigid motion volume of a scaled cube. \end{example} \begin{example}[Scaled Ball: Central Rigid Motion Volume]\label{ex:scaled-ball-delta} From the discussion in Example~\ref{ex:ball-rm}, we can derive an expression for the central rigid motion volume of the scaled ball. Indeed, \begin{equation*} \bar{\delta}(\lambda \ball{n}) = \frac{\diff{}}{\diff{t}}\Big\vert_{t=1} \log \bar{\chi}_{\lambda \ball{n}}(t) = n - \frac{\lambda^n + (n-1)\lambda \int_0^{\pi/2} ( \lambda \sin\theta + \cos\theta )^{n-2} \sin\theta \idiff{\theta}} {n^{-1} \lambda^n + \int_0^{\pi/2} ( \lambda \sin\theta + \cos\theta )^{n-1} \idiff{\theta}}. \end{equation*} We have used the expression~\eqref{eqn:rmvol-ball-integral}.
For each $\lambda>0$, \begin{equation}\label{eqn:central-scaled-ball} \frac{\bar{\delta}(\lambda \ball{n} )}{n} \longrightarrow \frac{1}{1 + \lambda^2} \quad\text{as $n\to \infty$.} \end{equation} The statement~\eqref{eqn:central-scaled-ball} follows after a careful application of Laplace's method. \end{example} \subsection{Weighted Intrinsic Volumes Concentrate} \label{sec:wvol-conc-intro} The concentration behavior visible in Figure~\ref{fig:intvoldist} is a generic property of every convex body. The following theorems quantify the rate at which the distribution of the weighted intrinsic volume random variable decays away from the central volume. \begin{bigthm}[Rotation Volumes: Concentration] \label{thm:rotvol-intro} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$ with rotation volume random variable $\mathring{I}_{\set{K}}$ and central rotation volume $\mathring{\delta}(\set{K})$. For all $t \geq 0$, $$ \Prob{ \abs{ \phantom{\big|\!} \smash{\mathring{I}_{\set{K}} - \mathring{\delta}(\set{K}) }} \geq t } \leq 2 \exp\left( \frac{-t^2/2}{\mathring{\sigma}^2(\set{K}) + t/3} \right) \quad\text{where}\quad \mathring{\sigma}^2(\set{K}) := \mathring{\delta}(\set{K}). $$ \end{bigthm} \noindent Theorem~\ref{thm:rotvol-intro} is a consequence of Theorem~\ref{thm:rotvol-conc}. According to Theorem~\ref{thm:rotvol-intro}, the rotation volume random variable $\mathring{I}_{\set{K}}$ exhibits Bernstein-type concentration around the central rotation volume $\mathring{\delta}(\set{K})$. Initially, for small $t$, the probability decays at least as fast as a Gaussian random variable with variance $\mathring{\delta}(\set{K})$. In other words, the bulk of the rotation volume distribution is concentrated on about $\mathring{\delta}(\set{K})^{1/2}$ indices near $\mathring{\delta}(\set{K})$. When $t \approx \mathring{\delta}(\set{K})$, the decay shifts from Gaussian to exponential.
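To make Theorem~\ref{thm:rotvol-intro} concrete, one can tabulate the exact distribution of $\mathring{I}_{\set{K}}$ for a cube, using the rotation volumes from Example~\ref{ex:cube}, and confirm the Bernstein-type bound at every integer deviation $t$. The following Python sketch (an illustration only; computations are done in log space for numerical stability) carries this out:

```python
from math import pi, log, lgamma, exp, comb

def log_kappa(d):
    """log volume of the unit ball in R^d."""
    return (d / 2) * log(pi) - lgamma(d / 2 + 1)

n, lam = 64, 1.0
# Rotation volumes of lam * Q^n, in log space for stability.
logw = [log(comb(n + 1, n - i)) + log_kappa(n + 1) - log_kappa(i + 1)
        + (n - i) * log(lam) for i in range(n + 1)]
mx = max(logw)
w = [exp(v - mx) for v in logw]
Z = sum(w)

# P(I = n - i) = w_i / Z, so index j = n - i carries probability p[j].
p = [w[n - j] / Z for j in range(n + 1)]
delta = sum(j * pj for j, pj in enumerate(p))     # central rotation volume

# Check the Bernstein-type bound at every integer deviation t.
for t in range(n + 1):
    tail = sum(pj for j, pj in enumerate(p) if abs(j - delta) >= t)
    bound = 2.0 * exp(-t * t / 2.0 / (delta + t / 3.0))
    assert tail <= bound
```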
In fact, the proof of the result demonstrates that stronger Poisson-type decay occurs. A parallel result, with a similar interpretation, holds for the rigid motion volumes. \begin{bigthm}[Rigid Motion Volumes: Concentration] \label{thm:rmvol-intro} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$ with rigid motion volume random variable $\bar{I}_{\set{K}}$ and central rigid motion volume $\bar{\delta}(\set{K})$. For all $t \geq 0$, $$ \Prob{ \abs{ \bar{I}_{\set{K}} - \bar{\delta}(\set{K}) } \geq t } \leq 2 \exp\left( \frac{-t^2/4}{\bar{\sigma}^2(\set{K}) + t/3} \right) \quad\text{where}\quad \bar{\sigma}^2(\set{K}) := \bar{\delta}(\set{K}) \wedge \big((n + 1) - \bar{\delta}(\set{K})\big). $$ \end{bigthm} \noindent Theorem~\ref{thm:rmvol-intro} follows from Theorem~\ref{thm:rmvol-conc}. \subsection{Metric Formulations} Although it is difficult to calculate weighted intrinsic volumes, the concentration theory demonstrates that the central volume is already an adequate summary of the entire distribution. Fortunately, the central volume has an alternative expression that is more tractable. In the following results, $\dist_{\set{K}}(\vct{x})$ reports the Euclidean distance from the point $\vct{x}$ to the set $\set{K}$. \begin{proposition}[Rotation Volumes: Metric Formulation] \label{prop:rotvol-metric} The total rotation volume and the central rotation volume of a nonempty convex body $\set{K} \subset \mathbbm{R}^n$ may be calculated as \begin{align*} \mathring{\wills}(\set{K}) &= \frac{\omega_{n+1}}{2} \int_{\mathbbm{R}^n} \mathrm{e}^{-2\pi \dist_{\set{K}}(\vct{x})} \idiff{\vct{x}}; \\ n - \mathring{\delta}(\set{K}) &= \frac{\omega_{n+1}}{2 \mathring{\wills}(\set{K})} \int_{\mathbbm{R}^n} 2\pi\dist_{\set{K}}(\vct{x})\, \mathrm{e}^{-2\pi\dist_{\set{K}}(\vct{x})} \idiff{\vct{x}}. 
\end{align*} \end{proposition} \begin{proposition}[Rigid Motion Volumes: Metric Formulation] \label{prop:rmvol-metric} The total rigid motion volume and the central rigid motion volume of a nonempty convex body $\set{K} \subset \mathbbm{R}^n$ may be calculated as \begin{align*} \bar{\wills}(\set{K}) &= \int_{\mathbbm{R}^n} \big[ 1 + \dist_{\set{K}}^2(\vct{x}) \big]^{-(n+1)/2} \idiff{\vct{x}}; \\ (n+1) - \bar{\delta}(\set{K}) &= \frac{n+1}{\bar{\wills}(\set{K})} \int_{\mathbbm{R}^n} \big[ 1 + \dist_{\set{K}}^2(\vct{x}) \big]^{-(n+3)/2} \idiff{\vct{x}}. \end{align*} \end{proposition} \noindent The proof of Proposition~\ref{prop:rotvol-metric} appears in Section~\ref{sec:rotvol-distint}. The proof of Proposition~\ref{prop:rmvol-metric} is in Section~\ref{sec:rmvol-distint}. An analogous result for intrinsic volumes appears as Corollary~\ref{cor:intvol-metric}. The central rotation volume is essentially given by the entropy of the log-concave probability measure induced by the distance to the body. The central rigid motion volume has a related structure, but it is expressed in terms of a concave measure. Owing to the strikingly different form of these measures, the concentration results require distinct technical ingredients. \subsection{Proof Strategy} Both Theorem~\ref{thm:rotvol-intro} and Theorem~\ref{thm:rmvol-intro} follow from the same species of argument, which we execute in Sections~\ref{sec:genfun}--\ref{sec:rmvol-conc}. Let us give a summary. To study the behavior of the weighted intrinsic volume random variable, we use a refined version of the entropy method. The argument depends on the insight, from~\cite{LMNPT20:Concentration-Euclidean}, that Steiner's formula~\eqref{eqn:steiner-intro} allows us to pass from the discrete random variable on $\{0, 1, 2, \dots, n\}$ to a continuous random variable on $\mathbbm{R}^n$ with a concave measure. 
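As a quick sanity check, Proposition~\ref{prop:rotvol-metric} can be verified numerically in the simplest case, the interval $\set{K} = [0, \lambda] \subset \mathbbm{R}^1$, whose rotation volumes are $(\pi\lambda, 1)$ by Example~\ref{ex:cube} with $n = 1$. The Python sketch below (illustrative only) compares the metric integrals with these closed forms:

```python
from math import pi, exp, isclose

lam = 2.0                          # K = [0, lam] inside R^1, so n = 1
dist = lambda x: max(0.0, -x, x - lam)

def trap(f, a, b, N=200_000):
    """Basic trapezoid rule for rapidly decaying integrands."""
    h = (b - a) / N
    return h * (0.5 * f(a) + sum(f(a + k * h) for k in range(1, N)) + 0.5 * f(b))

# omega_2 / 2 = pi. Metric formula for the total rotation volume.
W = pi * trap(lambda x: exp(-2 * pi * dist(x)), -10.0, lam + 10.0)

# Metric formula for n - delta, the complementary central rotation volume.
M = pi * trap(lambda x: 2 * pi * dist(x) * exp(-2 * pi * dist(x)),
              -10.0, lam + 10.0) / W

# Closed forms from the rotation volumes (pi * lam, 1) of the interval:
# W = pi * lam + 1 and delta = pi * lam / (pi * lam + 1).
assert isclose(W, pi * lam + 1, rel_tol=1e-5)
assert isclose(1 - M, pi * lam / (pi * lam + 1), rel_tol=1e-5)
```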
To understand the fluctuations of the continuous distribution, we apply modern variance bounds for concave measures, in the spirit of the classic Borell--Brascamp--Lieb inequality~\cite{Bor75:Convex-Set,BL76:Extensions-Brunn-Minkowski}. In each case, the details are somewhat different. \subsection{Intrinsic Volumes Concentrate Better} Using the same methodology, we have also established a concentration inequality for the intrinsic volume random variable that improves significantly over our results in~\cite{LMNPT20:Concentration-Euclidean}. This material is not directly relevant to our phase transition project, so we have postponed the statement and proof to Appendix~\ref{app:intvol}. \subsection{Log-Concavity of Weighted Intrinsic Volumes} As a counterpoint to our main results, we remark that the weighted intrinsic volume sequences are log-concave. \begin{proposition}[Weighted Intrinsic Volumes: Log-Concavity] \label{prop:lc} Let $\set{K} \subset \mathbbm{R}^n$ be a convex body. For each index $i = 1, 2, 3, \dots, n - 1$, \begin{align} \mathring{\intvol}_i(\set{K})^2 &\geq \mathring{\intvol}_{i-1}(\set{K}) \cdot \mathring{\intvol}_{i+1}(\set{K}); \\ \bar{\intvol}_i(\set{K})^2 &\geq \bar{\intvol}_{i-1}(\set{K}) \cdot \bar{\intvol}_{i+1}(\set{K}). \end{align} \end{proposition} Proposition~\ref{prop:lc} gives an alternative sense in which the weighted intrinsic volume sequences concentrate. Among other things, it ensures that both sequences are unimodal. See Section~\ref{sec:intvol-ulc} for related results about the intrinsic volume sequence. \begin{proof}[Proof Sketch] The Alexandrov--Fenchel inequalities~\cite[Sec.~7.3]{Sch14:Convex-Bodies} imply that the intrinsic volumes of a convex body form an ultra-log-concave sequence~\cite{Che76:Processus-Gaussiens,McM91:Inequalities-Intrinsic}. Using Definition~\ref{def:wintvol} and the volume calculation~\eqref{eqn:ball-sphere}, we quickly conclude that the weighted intrinsic volumes are log-concave.
\end{proof} After this paper was written, Aravinda et al.~\cite{AMM21:Concentration-Inequalities} proved that all ultra-log-concave sequences exhibit Poisson-type concentration. This provides an alternative approach to proving slightly weaker forms of some of the concentration theorems in this paper. \section{Main Results II: Phase Transitions} \label{sec:main-phase} The primary goal of this paper is to reveal that several classic formulas from integral geometry exhibit sharp phase transitions in high dimensions. In each setting, the formula collapses into two complementary cases when we invoke the concentration properties of weighted intrinsic volumes. The width of the transition region between the two cases becomes negligible as the dimension increases. \subsection{Rotations} We begin with two models that involve averaging over random rotations. The first involves projection onto a random subspace, and the second involves the sum of a convex body and a randomly rotated convex body. In each case, the concentration of rotation volumes leads to phase transition phenomena. \begin{figure} \caption{\textbf{Projection onto a Random Subspace.} This diagram shows the orthogonal projections of a two-dimensional convex body $\set{K}$ onto the one-dimensional subspaces $\set{L}_1$ and $\set{L}_2$. The projection formula, Fact~\ref{fact:proj-formula-intro}, states that the expected length of the projection, averaged over all one-dimensional subspaces, is proportional to the perimeter of $\set{K}$. The formula also treats problems in higher dimensions.} \label{fig:random-projection} \end{figure} \begin{figure} \caption{\textbf{Phase Transitions for Moving Flats.} \textsc{[left]} The function $\mathrm{RandProj}_m$, defined in~\eqref{eqn:randproj-intro}, compares the total rotation volume of a random $m$-dimensional projection of a scaled cube $s\set{Q}^n$ with the total rotation volume of the scaled cube. The transition moves \emph{right} with increasing scale $s$.
\textsc{[right]} The function $\mathrm{RandSlice}_m$, defined in~\eqref{eqn:randslice-intro}, compares the total rigid motion volume of a random $m$-dimensional affine slice of a scaled cube $s \set{Q}^n$ with the total rigid motion volume of the scaled cube. The transition moves \emph{left} with increasing scale $s$. \textsc{[top]} $\mathbbm{R}^{32}$ and \textsc{[bottom]} $\mathbbm{R}^{128}$. Note that the relative transition width becomes narrower as the ambient dimension increases.} \label{fig:randproj-randslice} \end{figure} \subsubsection{Projection onto a Random Subspace} The first question concerns the interaction between a set and a random subspace: \begin{quotation} \textcolor{dkblue}{ \textbf{Suppose that we project a convex body onto a random linear subspace of a given dimension. How big is the image relative to the original set?}} \end{quotation} \noindent See Figure~\ref{fig:random-projection} for a schematic. The integral geometry literature contains a detailed answer to this question. The result is usually written in terms of the intrinsic volumes (Fact~\ref{fact:projection-intvol}), but we have discovered that the rotation volumes lead to a simpler statement: \begin{fact}[Projection Formula] \label{fact:proj-formula-intro} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body. For each subspace dimension $m = 0, 1, 2, \dots, n$ and each index $i = 0, 1, 2, \dots, m$, \begin{equation} \label{eqn:proj-formula-intro} \int_{\mathrm{Gr}(m,n)} \mathring{\intvol}_i^{m}(\set{K}|\set{L}) \, {\nu}_m(\diff{\set{L}}) = \mathring{\intvol}_{n-m+i}^{n}(\set{K}). \end{equation} The superscript indicates the ambient dimension of the space in which the rotation volumes are calculated. The Grassmannian $\mathrm{Gr}(m,n)$ of $m$-dimensional subspaces in $\mathbbm{R}^n$ is equipped with its rotation-invariant probability measure $\nu_m$; see Appendix~\ref{sec:invariant-grass}. 
\end{fact} Among other things, the projection formula allows us to study the total rotation volume of a random projection onto an $m$-dimensional subspace. To do so, we define the average \begin{equation} \label{eqn:randproj-intro} \mathrm{RandProj}_m(\set{K}) := \int_{\mathrm{Gr}(m,n)} \frac{\mathring{\wills}^m(\set{K}|\set{L})}{\mathring{\wills}^n(\set{K})} \,\nu_m(\diff{\set{L}}) = \sum_{i = 0}^m \frac{\mathring{\intvol}_{n-i}^n(\set{K})}{\mathring{\wills}^n(\set{K})}. \end{equation} The second relation follows when we sum~\eqref{eqn:proj-formula-intro} over indices $i = 0, 1,2,\dots, m$ and divide by the total rotation volume $\mathring{\wills}^n(\set{K})$. Figure~\ref{fig:randproj-randslice} illustrates the behavior of the function $m \mapsto \mathrm{RandProj}_m(s\set{Q}^n)$ for some scaled cubes $s \set{Q}^n$. Evidently, the function jumps from zero to one as the subspace dimension $m$ increases, and the relative width of the jump becomes narrower as the ambient dimension $n$ grows. We have developed a complete mathematical explanation for this empirical observation. First, in view of~\eqref{eqn:randproj-intro}, the function $\mathrm{RandProj}_m$ can be written using the rotation volume random variable~\eqref{eqn:wvol-rv-def}: $$ \mathrm{RandProj}_m(\set{K}) = \Prob{ \mathring{I}_{\set{K}} \leq m }. $$ Second, invoke Theorem~\ref{thm:rotvol-conc}, on the concentration of rotation volumes, to see that $m \mapsto \Prob{ \smash{\mathring{I}_{\set{K}} \leq m} }$ makes a transition from zero to one around the value $m = \mathring{\delta}(\set{K})$. Here is a precise statement. \begin{bigthm}[Random Projections: Phase Transition]\label{thm:randproj-intro} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$ with central rotation volume $\mathring{\delta}(\set{K})$.
For each subspace dimension $m \in \{0,1,2, \dots ,n\}$, form the average \begin{equation*}\mathrm{RandProj}_m(\set{K}) := \int_{\mathrm{Gr}(m,n)} \frac{\mathring{\wills}^m(\set{K}|\set{L})}{\mathring{\wills}^n(\set{K})} \,\nu_m(\diff{\set{L}}) \in [0, 1]. \end{equation*} For a proportion $\alpha \in (0,1)$, set the transition width $$ t_{\star}(\alpha) := \big[ 2 \mathring{\delta}(\set{K}) \log(1/\alpha) \big]^{1/2} + \tfrac{2}{3} \log(1/\alpha). $$ Then $\mathrm{RandProj}_m(\set{K})$ undergoes a phase transition at $m = \mathring{\delta}(\set{K})$ with width controlled by $t_{\star}(\alpha)$: \begin{equation*} \begin{aligned} m &\leq \mathring{\delta}(\set{K}) - t_{\star}(\alpha) &&\quad\text{implies}\quad &\mathrm{RandProj}_m(\set{K}) &\leq \alpha; \\ m &\geq \mathring{\delta}(\set{K}) + t_{\star}(\alpha) &&\quad\text{implies}\quad &\mathrm{RandProj}_m(\set{K}) &\geq 1 - \alpha. \end{aligned} \end{equation*} \end{bigthm} \noindent We have already given the majority of the proof; see Section~\ref{sec:randproj} for the remaining details. Here is an interpretation. Random projection of a convex body onto a subspace either obliterates or preserves its total rotation volume, depending on whether the dimension $m$ of the subspace is smaller or larger than $\mathring{\delta}(\set{K})$. Thus, the central rotation volume $\mathring{\delta}(\set{K})$ measures the ``dimension'' of the set $\set{K}$. The transition between the two cases takes place over a very narrow range of subspace dimensions. For example, when $\alpha = 1/\mathrm{e}^{2}$, the transition width is around $2\mathring{\delta}(\set{K})^{1/2} + 1$, which is usually much smaller than the location $\mathring{\delta}(\set{K})$ of the transition. Keeping in mind that the central rotation volume $\mathring{\delta}(\set{K}) \in [0, n]$, we see that this transition takes place over at most $2\sqrt{n} + 1$ dimensions, which should be compared with the range $n$ of possible subspace dimensions. 
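Theorem~\ref{thm:randproj-intro} can be observed numerically. Using the identity $\mathrm{RandProj}_m(\set{K}) = \Prob{\mathring{I}_{\set{K}} \leq m}$ and the rotation volumes of the scaled cube from Example~\ref{ex:cube}, the Python sketch below (an illustration only; log-space arithmetic for stability) locates the transition near the central rotation volume and checks the asymptotic location~\eqref{eqn:central-scaled-cube}:

```python
from math import pi, log, lgamma, exp, sqrt

def log_kappa(d):
    return (d / 2) * log(pi) - lgamma(d / 2 + 1)

def log_comb(a, b):
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

n, zeta = 128, 1.0
lam = zeta * sqrt(2 * n / pi)          # side length of the scaled cube
logw = [log_comb(n + 1, n - i) + log_kappa(n + 1) - log_kappa(i + 1)
        + (n - i) * log(lam) for i in range(n + 1)]
mx = max(logw)
w = [exp(v - mx) for v in logw]
Z = sum(w)

# RandProj_m(K) = P(I <= m) = sum_{i >= n - m} w_i / Z.
randproj = [sum(w[i] for i in range(n - m, n + 1)) / Z for m in range(n + 1)]
delta = sum((n - i) * wi for i, wi in enumerate(w)) / Z   # central rotation volume

# The jump from ~0 to ~1 happens within a few sqrt(delta) of delta ...
lo = max(0, int(delta - 3 * sqrt(delta)))
hi = min(n, int(delta + 3 * sqrt(delta)) + 1)
assert randproj[lo] < 0.05 and randproj[hi] > 0.95
# ... and delta / n is close to the asymptotic location 2 / (1 + sqrt(1 + zeta^-2)).
assert abs(delta / n - 2 / (1 + sqrt(1 + zeta ** -2))) < 0.05
```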
\begin{figure} \caption{\textbf{Phase Transition for Projections of a Cube.} For a scaled cube $\set{Q}(\zeta) = \zeta \sqrt{2n / \pi} \, \set{Q}^n$, these plots display $\mathrm{RandProj}_m(\set{Q}(\zeta))$ as a function of the scaling parameter $\zeta$ and the dimension $m$ of the projection. The black curve traces the asymptotic expression~\eqref{eqn:central-scaled-cube} for the central rotation volume $\mathring{\delta}(\set{Q}(\zeta)) / n$. The function $\mathrm{RandProj}_m$ is defined in~\eqref{eqn:randproj-intro}. The plots are similar, except that the transition region narrows when the ambient dimension increases from $n = 16$ to $n = 128$.} \label{fig:randproj-phase-map} \end{figure} \begin{example}[Scaled Cube: Random Projection] Consider the family $\set{Q}(\zeta;n) = \zeta \sqrt{2n/\pi} \, \set{Q}^n$ of scaled cubes that were introduced in Example~\ref{ex:scaled-cube-delta}. When $n$ is large, Theorem~\ref{thm:randproj-intro} and the asymptotic formula~\eqref{eqn:central-scaled-cube} show that \begin{equation*} \begin{aligned} \frac{m}{n} &\lessapprox \frac{2}{1+\sqrt{1+\zeta^{-2}}} &&\quad\text{implies}\quad &\mathrm{RandProj}_m(\set{Q}(\zeta;n)) &\approx 0; \\ \frac{m}{n} &\gtrapprox \frac{2}{1+\sqrt{1+\zeta^{-2}}} &&\quad\text{implies}\quad &\mathrm{RandProj}_m(\set{Q}(\zeta;n)) &\approx 1. \end{aligned} \end{equation*} To confirm this result, Figure~\ref{fig:randproj-phase-map} displays the numerical value $\mathrm{RandProj}_m(\set{Q}(\zeta; n))$ as a function of $\zeta$ and $m$ for two choices of dimension $n$. The asymptotic phase transition is superimposed. We remark that each projection of the cube $\set{Q}^n$ is a zonotope generated by at most $n$ segments. \end{example} \begin{remark}[Related Work] Milman~\cite{Mil90:Note-Low} established that the \emph{diameter} of a random projection of a norm ball undergoes a change in behavior as the dimension $m$ of the subspace increases.
When $m$ is much smaller than the mean width of the body, random projections are isomorphic to Euclidean balls whose radius is the mean width. When $m$ is much larger than the mean width of the body, the diameter of the random projection is comparable to $\sqrt{m/n}$ times the diameter of the body. See~\cite[Sec.~5.7.1]{artstein2015asymptotic} or~\cite[Sec.~7.7, Sec.~9.2.2]{Ver18:High-Dimensional-Probability} for more details. \end{remark} \begin{figure} \caption{\textbf{Rotation Means.} This picture illustrates the Minkowski sum of a convex body $\set{K}$ and two rotations $\mtx{O}_1 \set{M}$ and $\mtx{O}_2 \set{M}$ of a convex body $\set{M}$. The rotation mean value formula, Fact~\ref{fact:rotmean}, expresses the expected area of the sum, averaged over all rotations, in terms of the perimeters and areas of $\set{K}$ and $\set{M}$.} \label{fig:rotation-mean} \end{figure} \subsubsection{Rotation Means} Next, we consider a problem that involves the interaction between two different convex bodies: \begin{quotation} \textcolor{dkblue}{ \textbf{Suppose that we add a randomly rotated convex body to a fixed convex body. How big is the sum relative to the size of the original sets?}} \end{quotation} \noindent See Figure~\ref{fig:rotation-mean} for an illustration. One version of this question admits an exact answer, which is contained in a classic result called the rotation mean value formula (Fact~\ref{fact:rotmean}). We have discovered that rotation means exhibit a sharp phase transition. \begin{bigthm}[Rotation Means: Phase Transition] \label{thm:rotmean-intro} Consider two nonempty convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$, and define the sum $\mathring{\Delta}(\set{K}, \set{M}) := \mathring{\delta}(\set{K}) + \mathring{\delta}(\set{M})$ of the central rotation volumes. 
Form the average \begin{equation} \label{eqn:rotmean-intro} \mathrm{RotMean}(\set{K}, \set{M}) := \int_{\mathrm{SO}(n)} \frac{\mathring{\wills}(\set{K} + \mtx{O} \set{M})}{\mathring{\wills}(\set{K}) \, \mathring{\wills}(\set{M})} \, \nu(\diff{\mtx{O}}) \in [0, 1]. \end{equation} The special orthogonal group $\mathrm{SO}(n)$ is equipped with its invariant probability measure $\nu$; see Section~\ref{sec:invariant-SO}. For a proportion $\alpha \in (0,1)$, set the transition width $$ t_{\star}(\alpha) := \big[ 2 \mathring{\Delta}(\set{K},\set{M}) \log(1/\alpha) \big]^{1/2} + \tfrac{2}{3} \log(1/\alpha). $$ Then $\mathrm{RotMean}(\set{K}, \set{M})$ undergoes a phase transition at $\mathring{\Delta}(\set{K}, \set{M}) = n$ with width controlled by $t_{\star}(\alpha)$: \begin{equation*} \begin{aligned} \mathring{\Delta}(\set{K},\set{M}) &\leq n - t_{\star}(\alpha) &&\quad\text{implies}\quad &\mathrm{RotMean}(\set{K},\set{M}) &\geq 1 - \alpha; \\ \mathring{\Delta}(\set{K}, \set{M}) &\geq n + t_{\star}(\alpha) &&\quad\text{implies}\quad &\mathrm{RotMean}(\set{K},\set{M}) &\leq \alpha. \end{aligned} \end{equation*} \end{bigthm} \noindent The proof of Theorem~\ref{thm:rotmean-intro} appears in Section~\ref{sec:rotmean}. This result states that the total rotation volume of the sum $\set{K} + \mtx{O} \set{M}$, averaged over rotations $\mtx{O}$, does not exceed the product of the total rotation volumes of the two sets. When the total ``dimension'' $\mathring{\Delta}(\set{K}, \set{M})$ of the two sets is smaller than the ambient dimension $n$, the average total rotation volume of the sum is nearly as large as possible. The situation changes when the total ``dimension'' of the two sets is larger than the ambient dimension, in which case the average total rotation volume of the sum is far smaller than the upper bound. The width of the transition region between the two regimes is not more than $2\sqrt{n}$, which is negligible since $\mathring{\Delta}(\set{K}, \set{M}) \in [0, 2n]$. 
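For a concrete instance of Theorem~\ref{thm:rotmean-intro}, consider two identical scaled cubes with $\zeta_1 = \zeta_2 = 1/(2\sqrt{2})$, a point on the asymptotic curve~\eqref{eqn:rotmean-asymp} because $2/(1 + \sqrt{1+8}) = 1/2$. The Python sketch below (an illustration only) computes the exact central rotation volume from Example~\ref{ex:cube} and confirms that $\mathring{\Delta}(\set{K}, \set{M}) \approx n$, the transition location in the theorem:

```python
from math import pi, log, lgamma, exp, sqrt

def log_kappa(d):
    return (d / 2) * log(pi) - lgamma(d / 2 + 1)

def log_comb(a, b):
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

def central_rotation_volume_cube(n, lam):
    """Exact central rotation volume of lam * Q^n, via its rotation volumes."""
    logw = [log_comb(n + 1, n - i) + log_kappa(n + 1) - log_kappa(i + 1)
            + (n - i) * log(lam) for i in range(n + 1)]
    mx = max(logw)
    w = [exp(v - mx) for v in logw]
    return sum((n - i) * wi for i, wi in enumerate(w)) / sum(w)

n = 1000
zeta = 1 / (2 * sqrt(2))       # on the asymptotic curve: 2 / (1 + sqrt(1 + 8)) = 1/2
delta = central_rotation_volume_cube(n, zeta * sqrt(2 * n / pi))
Delta = 2 * delta              # sum of central rotation volumes of two equal cubes
assert abs(Delta / n - 1.0) < 0.05   # the predicted transition location is Delta = n
```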
We can make an analogy with linear algebra. For two subspaces in general position, the dimension of the sum is equal to the total dimension of the subspaces when the total dimension is smaller than the ambient dimension. Otherwise, the dimension of the sum equals the ambient dimension. \begin{figure} \caption{\textbf{Phase Transition for Rotation Mean of Two Cubes.} For scaled cubes $\set{Q}(\zeta) = \zeta \sqrt{2n / \pi} \, \set{Q}^n$ with $n \in \{16,128\}$, this plot displays the value of $\mathrm{RotMean}(\set{Q}(\zeta_1), \set{Q}(\zeta_2))$ as a function of the scaling parameters $\zeta_1$ and $\zeta_2$. The black curve traces the asymptotic phase transition~\eqref{eqn:rotmean-asymp}. The function $\mathrm{RotMean}$ is defined in~\eqref{eqn:rotmean-intro}.} \label{fig:rotvol-phase-map} \end{figure} \begin{example}[Scaled Cubes: Rotation Mean] Consider the family $\set{Q}(\zeta; n) = \zeta \sqrt{2n/\pi} \, \set{Q}^n$ of scaled cubes. When $n$ is large, Theorem~\ref{thm:rotmean-intro} and the asymptotic formula~\eqref{eqn:central-scaled-cube} demonstrate that the function $(\zeta_1, \zeta_2) \mapsto \mathrm{RotMean}(\set{Q}(\zeta_1; n), \set{Q}(\zeta_2; n))$ undergoes a phase transition along the curve \begin{equation} \label{eqn:rotmean-asymp} \frac{2}{1 + \sqrt{1 + {\zeta_1}^{-2}}} + \frac{2}{1 + \sqrt{1 + {\zeta_2}^{-2}}} = 1. \end{equation} See Figure~\ref{fig:rotvol-phase-map} for a numerical illustration. \end{example} \begin{remark}[Related Work] The global form of Dvoretzky's theorem, due to Bourgain et al.~\cite{BLM88:Minkowski-Sums}, states that a rotation mean $\mtx{O}_1 \set{K} + \dots + \mtx{O}_r \set{K}$ is isomorphic to a Euclidean ball when $r$ is sufficiently large. Here, the convex body $\set{K}$ is a norm ball, and the random rotations $\mtx{O}_i$ are independent and uniform on $\mathrm{SO}(n)$. A bound for the number $r$ of summands can be obtained from the geometry of the set $\set{K}$. See~\cite[Sec.~5.6]{artstein2015asymptotic}.
\end{remark} \subsection{Rigid Motions} Next, we discuss two problems where we integrate over all Euclidean proper rigid motions (i.e., translations and rotations). The first involves random affine slices of a convex body. The second involves the intersection of randomly positioned convex bodies. These models are trickier to interpret than the models involving rotations because the integrals cannot be regarded as averages. \begin{figure} \caption{\textbf{Intersection with Affine Spaces.} This picture exhibits slices of a convex body $\set{K}$ by one-dimensional affine spaces $\set{E}_1$, $\set{E}_2$, etc. The slicing formula, Fact~\ref{fact:slicing}, asserts that an appropriate integral of the length of all such affine slices is proportional to the area of the set $\set{K}$. Furthermore, the measure of the set of affine lines that hit $\set{K}$ is proportional to the perimeter of $\set{K}$.} \label{fig:crofton} \end{figure} \subsubsection{Intersection with Random Affine Spaces} The first problem is dual to the problem of projecting onto a random subspace: \begin{quotation} \textcolor{dkblue}{ \textbf{Suppose we slice a convex body with a random affine space of a given dimension. How does the total size of the slices compare with the size of the original body?}} \end{quotation} \noindent Figure~\ref{fig:crofton} contains a schematic. This problem has an exact solution in terms of intrinsic volumes, a result known as Crofton's formula (Fact~\ref{fact:slicing}). We have discovered a sharp phase transition here too. \begin{bigthm}[Random Slices: Phase Transition] \label{thm:crofton-intro} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$ with central rigid motion volume $\bar{\delta}(\set{K})$ and variance proxy $\bar{\sigma}^2(\set{K})$, as defined in~Theorem~\ref{thm:rmvol-intro}. 
For each affine space dimension $m \in \{0,1,2, \dots, n\}$, form the integral \begin{equation} \label{eqn:randslice-intro} \mathrm{RandSlice}_m(\set{K}) := \int_{\operatorname{Af}(m,n)} \frac{\bar{\wills}(\set{K} \cap \set{E})}{\bar{\wills}(\set{K})} \, \mu_m(\diff{\set{E}}) \in [0,1]. \end{equation} The set $\operatorname{Af}(m,n)$ of $m$-dimensional affine spaces in $\mathbbm{R}^n$ is equipped with a canonical rigid-motion-invariant measure $\mu_m$; see Section~\ref{sec:invariant-Af}. For a proportion $\alpha \in (0,1)$, set the transition width $$ t_{\star}(\alpha) := \big[ 4 \bar{\sigma}^2(\set{K}) \log(1/\alpha) \big]^{1/2} + \tfrac{4}{3} \log(1/\alpha). $$ Then $m \mapsto \mathrm{RandSlice}_m(\set{K})$ undergoes a phase transition at $m = \bar{\delta}(\set{K})$ with width controlled by $t_{\star}(\alpha)$: \begin{equation*} \begin{aligned} m &\leq \bar{\delta}(\set{K}) - t_{\star}(\alpha) &&\quad\text{implies}\quad &\mathrm{RandSlice}_m(\set{K}) &\leq \alpha; \\ m &\geq \bar{\delta}(\set{K}) + t_{\star}(\alpha) &&\quad\text{implies}\quad &\mathrm{RandSlice}_m(\set{K}) &\geq 1 - \alpha. \end{aligned} \end{equation*} \end{bigthm} \noindent See Section~\ref{sec:crofton} for the proof of Theorem~\ref{thm:crofton-intro}. Figure~\ref{fig:randproj-randslice} contains a numerical demonstration. A distinctive feature of the affine slicing model is that the total rigid motion volume $\bar{\wills}(\set{K} \cap \set{E})$ of an affine slice \emph{never} exceeds $\bar{\wills}(\set{K})$; this point follows from monotonicity of volumes. The theorem states that the total rigid motion volume of a random slice is either a negligible proportion of the maximum or an overwhelming proportion, depending on whether the dimension $m$ of the affine space is smaller or larger than the central rigid motion volume $\bar{\delta}(\set{K})$. 
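By analogy with the projection case, the behavior of $\mathrm{RandSlice}_m$ is controlled by the distribution of the rigid motion volume random variable. The Python sketch below (an illustration only) tabulates this distribution for a scaled ball, assuming the standard formula $\mathrm{V}_i(\ball{n}) = \binom{n}{i} \kappa_n / \kappa_{n-i}$ and the proportionality $\bar{\intvol}_i \propto \mathrm{V}_i / \omega_{i+1}$ that can be read off from Example~\ref{ex:cube}; it confirms the transition location~\eqref{eqn:central-scaled-ball} and its sharpness:

```python
from math import pi, log, lgamma, exp, sqrt

def log_kappa(d):
    return (d / 2) * log(pi) - lgamma(d / 2 + 1)

def log_comb(a, b):
    return lgamma(a + 1) - lgamma(b + 1) - lgamma(a - b + 1)

n, lam = 2000, 1.0
# Rigid motion volumes of lam * B^n, up to a constant that cancels below:
# V_i(B^n) = C(n, i) * kappa_n / kappa_{n-i}, reweighted by 1 / omega_{i+1}.
logw = [log_comb(n, i) + log_kappa(n) - log_kappa(n - i) + i * log(lam)
        - (log(i + 1) + log_kappa(i + 1)) for i in range(n + 1)]
mx = max(logw)
w = [exp(v - mx) for v in logw]
Z = sum(w)
delta_bar = sum((n - i) * wi for i, wi in enumerate(w)) / Z

# Asymptotic location of the transition for the scaled ball: 1 / (1 + lam^2).
assert abs(delta_bar / n - 1 / (1 + lam ** 2)) < 0.05

# The map m -> P(bar I <= m) jumps from ~0 to ~1 near delta_bar.
cdf = 0.0
for m in range(n + 1):
    cdf += w[n - m] / Z
    if m <= delta_bar - 5 * sqrt(n):
        assert cdf < 0.05
    if m >= delta_bar + 5 * sqrt(n):
        assert cdf > 0.95
```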
The transition occurs over a narrow change in the dimension of the affine space, on the order of $[ \bar{\delta}(\set{K}) \wedge ( (n+1) - \bar{\delta}(\set{K}) ) ]^{1/2}$. Therefore, we can interpret the central rigid motion volume $\bar{\delta}(\set{K})$ as a measure of the ``codimension'' of the body $\set{K}$. (Recall that the central rigid motion volume \emph{decreases} as the set $\set{K}$ gets larger!) \begin{figure} \caption{\textbf{Phase Transition for Slices of a Ball.} For a scaled ball $\lambda\ball{n}$, these plots display $\mathrm{RandSlice}_m(\lambda \ball{n})$ as a function of the scaling parameter $\lambda$ and the dimension $m$ of the slice. The black curve is the asymptotic expression~\eqref{eqn:central-scaled-ball} for the central rigid motion volume $\bar{\delta}(\lambda \ball{n}) / n$. The function $\mathrm{RandSlice}_m$ is defined in~\eqref{eqn:randslice-intro}.} \label{fig:crofton-phase-map} \end{figure} \begin{example}[Scaled Ball: Random Slice] Consider the family $\lambda\ball{n}$ of scaled Euclidean balls from Example~\ref{ex:scaled-ball-delta}. When $n$ is large, Theorem~\ref{thm:crofton-intro} and the asymptotic formula~\eqref{eqn:central-scaled-ball} show that \begin{equation*} \begin{aligned} \frac{m}{n} &\lessapprox \frac{1}{1 + \lambda^2} &&\quad\text{implies}\quad &\mathrm{RandSlice}_m(\lambda \ball{n}) &\approx 0; \\ \frac{m}{n} &\gtrapprox \frac{1}{1 + \lambda^2} &&\quad\text{implies}\quad &\mathrm{RandSlice}_m(\lambda \ball{n}) &\approx 1. \end{aligned} \end{equation*} Figure~\ref{fig:crofton-phase-map} displays the numerical value $\mathrm{RandSlice}_m(\lambda\ball{n})$ as a function of $\lambda$ and $m$ for two choices of dimension $n$. The asymptotic phase transition is superimposed. \end{example} \begin{remark}[Related Work] Dvoretzky's theorem states that every norm ball in $\mathbbm{R}^n$ has an affine slice, with dimension on the order of $\log n$, that is isomorphic to a Euclidean ball.
Milman's proof~\cite{Mil71:New-Proof} shows that a random slice has the desired property with high probability. Research in asymptotic convex geometry has elucidated how the structure of the body affects the dimension of Euclidean slices. In particular, there is a phase transition when the codimension $m$ of the affine space passes the squared Gaussian width of the set. See~\cite[Chaps.~5, 9]{artstein2015asymptotic} or~\cite[Sec.~9.4]{Ver18:High-Dimensional-Probability} for details. \end{remark} \subsubsection{The Kinematic Formula} The last problem is dual to the question about rotation means: \begin{quotation} \textcolor{dkblue}{ \textbf{Suppose that we intersect a fixed convex body with a randomly positioned convex body. How does the total size of the intersections compare with the size of the original sets?}} \end{quotation} \noindent See Figure~\ref{fig:kinematic} for an illustration. The celebrated kinematic formula (Fact~\ref{fact:kinematic}) gives the exact solution to this problem. The kinematic formula also exhibits a sharp phase transition. \begin{figure} \caption{\textbf{Kinematic Formula.} This diagram shows the intersection of a convex body $\set{K}$ with each of two rigid transformations $g_1 \set{M}$ and $g_2 \set{M}$ of a convex body $\set{M}$. The kinematic formula, Fact~\ref{fact:kinematic}, states that an appropriate integral of the area of all such intersections is proportional to the product of the area of $\set{K}$ and the area of $\set{M}$. Moreover, the measure of the set of rigid motions that bring $\set{M}$ into contact with $\set{K}$ can be expressed in terms of the area and perimeter of the two sets.} \label{fig:kinematic} \end{figure} \begin{bigthm}[Kinematic Formula: Phase Transition] \label{thm:kinematic-intro} Consider two nonempty convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$, and define the sum $\bar{\Delta}(\set{K},\set{M}) := \bar{\delta}(\set{K}) + \bar{\delta}(\set{M})$ of the central rigid motion volumes. 
Form the integral \begin{equation}\label{eqn:kinematic-intro} \mathrm{Kinematic}(\set{K}, \set{M}) := \int_{\mathrm{SE}(n)} \frac{\bar{\wills}(\set{K} \cap g \set{M})}{\bar{\wills}(\set{K}) \, \bar{\wills}(\set{M})} \, \mu(\diff g) \in [0, 1]. \end{equation} The group $\mathrm{SE}(n)$ of proper rigid motions on $\mathbbm{R}^n$ is equipped with a canonical motion-invariant measure $\mu$; see Section~\ref{sec:invariant-RM}. For a proportion $\alpha \in (0,1)$, set the transition width $$ t_{\star}(\alpha) := \big[ 4 \bar{\Delta}(\set{K},\set{M}) \log(1/\alpha) \big]^{1/2} + \tfrac{4}{3} \log(1/\alpha). $$ Then $\mathrm{Kinematic}(\set{K}, \set{M})$ undergoes a phase transition at $\bar{\Delta}(\set{K},\set{M}) = n$ with width controlled by $t_{\star}(\alpha)$: \begin{equation*} \begin{aligned} \bar{\Delta}(\set{K},\set{M}) &\leq n - t_{\star}(\alpha) &&\quad\text{implies}\quad & \mathrm{Kinematic}(\set{K},\set{M}) &\geq 1 - \alpha; \\ \bar{\Delta}(\set{K},\set{M}) &\geq n + t_{\star}(\alpha) &&\quad\text{implies}\quad & \mathrm{Kinematic}(\set{K},\set{M}) &\leq \alpha. \end{aligned} \end{equation*} \end{bigthm} \noindent Section~\ref{sec:kinematic} contains the proof of Theorem~\ref{thm:kinematic-intro}. Theorem~\ref{thm:kinematic-intro} asserts that the total rigid motion volume of an intersection $\set{K} \cap g \set{M}$, integrated over proper motions $g$, does not exceed the product of the total rigid motion volumes of the two bodies. When the total ``codimension'' $\bar{\Delta}(\set{K}, \set{M})$ of the two sets is smaller than the ambient dimension $n$, the integral of the total rotation volume is nearly as large as possible. In the complementary case, it is negligible. The transition takes place as the total ``codimension'' changes by about $\sqrt{n}$, or less. \begin{example}[Scaled Balls: Kinematics] Consider the family $\lambda \ball{n}$ of scaled Euclidean balls. 
For large $n$, Theorem~\ref{thm:kinematic-intro} and the asymptotic formula~\eqref{eqn:central-scaled-ball} demonstrate that the function $(\lambda_1,\lambda_2) \mapsto \mathrm{Kinematic}(\lambda_1\ball{n}, \lambda_2\ball{n})$ undergoes a phase transition at the curve \begin{equation} \label{eqn:kinematic-asymp} \frac{1}{1 + \lambda_1^2}+\frac{1}{1 + \lambda_2^2} = 1 \quad\text{or equivalently}\quad \lambda_1\lambda_2=1. \end{equation} See Figure~\ref{fig:kinematic-phase-map} for a numerical illustration. \end{example} \subsection{Proof Strategy} All four of the phase transition results follow from considerations that are similar to the argument behind Theorem~\ref{thm:randproj-intro}, the result on random projections. First, we use the appropriate weighted intrinsic volumes to rewrite a classical integral geometry formula as an ordinary convolution. This step allows us to express the value of the geometric integral as a probability involving weighted intrinsic volume random variables. Last, we invoke concentration of the weighted intrinsic volumes to see that the probability undergoes a phase transition. The proofs appear in Sections~\ref{sec:moving-flats} and~\ref{sec:moving-bodies}. \begin{figure} \caption{\textbf{Phase Transition for Intersection of Two Moving Balls.} For two scaled balls, this plot displays $\mathrm{Kinematic}(\lambda_1 \ball{n}, \lambda_2\ball{n})$ as a function of the scaling parameters $\lambda_1$ and $\lambda_2$. The black curve is the asymptotic phase transition~\eqref{eqn:kinematic-asymp}. The function $\mathrm{Kinematic}$ is defined in~\eqref{eqn:kinematic-intro}.} \label{fig:kinematic-phase-map} \end{figure} \section{Concentration via Generating Functions} \label{sec:genfun} We use the entropy method to study the fluctuations of the weighted intrinsic volume random variables about their mean. This approach depends on relationships between the cumulant generating function and its derivatives. 
The modern role of these ideas in probability can be traced to the work of Ledoux~\cite{Led96:Talagrands-Deviation}, but the approach has deeper roots in statistical mechanics and information theory. Our presentation is based on Maurer's article~\cite{Mau12:Thermodynamics-Concentration}. The development in this section does not depend on properties of the intrinsic volumes, so it is more transparent to state the basic results for an arbitrary bounded random variable. We include detailed cross-references, so many readers may prefer to skip ahead to the novel material in the next section. \subsection{Moment Generating Functions}\label{sub:mgf-cgf} We begin with the central objects of study: the moment generating function of a random variable and the logarithm of the moment generating function. \begin{definition}[Moment Generating Functions] Let $Y$ be a bounded, real-valued random variable. The \emph{moment generating function} (mgf) of $Y$ is defined as \begin{equation} \label{eqn:mgf} m_{Y}(\theta) := \operatorname{\mathbb{E}} \mathrm{e}^{\theta Y} \quad\text{for $\theta \in \mathbbm{R}$.} \end{equation} The \emph{cumulant generating function} (cgf) of $Y$ is the logarithm of the mgf: \begin{equation} \label{eqn:cgf} \xi_{Y}(\theta) := \log m_Y(\theta) = \log \operatorname{\mathbb{E}} \mathrm{e}^{\theta Y} \quad\text{for $\theta \in \mathbbm{R}$.} \end{equation} \end{definition} We can link the cgf $\xi_Y$ to bounds on the tails of the random variable $Y$ using a classic result, often called the Cram\'er--Chernoff method. \begin{fact}[Tails and Cgfs] \label{fact:cgf-tail} Let $Y$ be a bounded random variable with cgf $\xi_Y$. For all $t \geq 0$, \begin{align*} \log \Prob{ Y \geq +t } &\leq \inf_{\theta > 0}\ \left[ \xi_Y(\theta) - \theta t \right]; \\ \log \Prob{ Y \leq -t } &\leq \inf_{\theta < 0}\ \left[ \xi_Y(\theta) + \theta t \right]. \end{align*}\end{fact} \noindent The proof is easy. Just apply Markov's inequality to the random variable $\exp(\theta Y)$.
For example, see~\cite[Sec.~2.2]{boucheron2013concentration}. Let us record a simple but important translation identity for the cgf: \begin{equation}\label{eqn:cgf-shift} \xi_{Y+c}(\theta) = \xi_Y(\theta)+c\theta \quad\text{for each $c \in \mathbbm{R}$.} \end{equation} This result allows us to focus on zero-mean random variables. Cgfs are particularly useful for studying a sum of independent random variables. Indeed, if $Y$ and $Z$ are independent, then \begin{equation} \label{eqn:cgf-indep} \xi_{Y+Z}(\theta) = \xi_Y(\theta) + \xi_Z(\theta) \quad\text{for all $\theta \in \mathbbm{R}$.} \end{equation} This identity is an easy consequence of the definition~\eqref{eqn:cgf} and independence. \subsection{The Gibbs Measure} To extract concentration inequalities from Fact~\ref{fact:cgf-tail}, it suffices to obtain estimates for the cgf $\xi_Y$. A powerful method for controlling the cgf $\xi_Y$ is to bound its derivatives. The derivatives of $\xi_Y$ can be expressed compactly in terms of the moments of a probability distribution related to $Y$. Let $\varrho_Y$ be the probability measure on the real line that gives the distribution of the bounded real random variable $Y$. For a parameter $\theta \in \mathbbm{R}$, we can define another probability distribution via $$ \diff{\varrho_Y^{\theta}}(y) := \frac{\mathrm{e}^{\theta y}\idiff{\varrho_Y(y)}}{m_Y(\theta)} \quad\text{for $y \in \mathbbm{R}$.} $$ In the thermodynamics literature, the distribution $\varrho_Y^{\theta}$ is called the \emph{Gibbs measure} at inverse temperature $\theta$, or the \emph{canonical ensemble}. Applied probabilists often call this object an \emph{exponential tilting} of the distribution. Note that we allow the inverse temperature $\theta$ to take nonpositive values. Since the random variable $Y$ is bounded, we can compute derivatives of the generating functions $m_Y$ and $\xi_Y$ by passing the derivative through the expectation.
The first derivative of the cgf satisfies \begin{equation} \label{eqn:thermal-expect} \xi_Y'(\theta) = \frac{m_Y'(\theta)}{m_Y(\theta)} = \frac{\operatorname{\mathbb{E}}[ Y \mathrm{e}^{\theta Y} ]}{\operatorname{\mathbb{E}} \mathrm{e}^{\theta Y} }. \end{equation} The quantity $\xi_Y'(\theta)$ coincides with the mean of the Gibbs measure $\varrho_Y^{\theta}$. It is also called the \emph{thermal mean} of $Y$ at inverse temperature $\theta$. Turning to the second derivative, we find that \begin{equation} \label{eqn:thermal-var} \xi_Y''(\theta) = \frac{m_Y''(\theta)}{m_Y(\theta)} - \left( \frac{m_Y'(\theta)}{m_Y(\theta)} \right)^2 = \frac{\operatorname{\mathbb{E}}[ Y^2 \mathrm{e}^{\theta Y} ]}{\operatorname{\mathbb{E}} \mathrm{e}^{\theta Y}} - \left( \frac{\operatorname{\mathbb{E}}[ Y \mathrm{e}^{\theta Y} ]}{\operatorname{\mathbb{E}} \mathrm{e}^{\theta Y} } \right)^2. \end{equation} The quantity $\xi_Y''(\theta)$ coincides with the variance of the Gibbs measure $\varrho_Y^{\theta}$. It is also known as the \emph{thermal variance} of $Y$ at inverse temperature $\theta$. The derivatives of the cgf $\xi_Y$ at zero have special significance. Indeed, \begin{equation} \label{eqn:cgf-zero} \xi_Y(0) = 0; \qquad \xi_Y'(0) = \operatorname{\mathbb{E}} Y; \qquad \xi_Y''(0) = \Var[Y]. \end{equation} These relations follow instantly by specializing the formulas~\eqref{eqn:cgf},~\eqref{eqn:thermal-expect}, and~\eqref{eqn:thermal-var}. The interpretation of $\xi_Y'$ as a mean has further consequences. In particular, \begin{equation} \label{eqn:thermal-mean-extreme} \inf Y \leq \xi_Y'(\theta) \leq \sup Y \quad\text{for all $\theta \in \mathbbm{R}$.} \end{equation} The interpretation of $\xi_Y''$ as a variance also has ramifications. Since variances are nonnegative, $\xi_Y''(\theta) \geq 0$, which implies that $\xi_Y$ is a convex function.
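These identities are easy to check numerically. The following Python sketch compares finite-difference derivatives of the cgf with the mean and variance of the exponentially tilted distribution, for a small discrete distribution chosen purely for illustration.

```python
import numpy as np

# An arbitrary discrete random variable Y, chosen purely for illustration.
y = np.array([0.0, 1.0, 2.0, 3.0])
p = np.array([0.1, 0.4, 0.3, 0.2])

def cgf(theta):
    return np.log(np.sum(p * np.exp(theta * y)))

def tilted_moments(theta):
    # Mean and variance of the Gibbs measure (exponential tilting)
    # at inverse temperature theta.
    w = p * np.exp(theta * y)
    w = w / w.sum()
    mean = np.sum(w * y)
    var = np.sum(w * y**2) - mean**2
    return mean, var

theta, h = 0.7, 1e-4
# Central finite differences approximate xi'(theta) and xi''(theta).
d1 = (cgf(theta + h) - cgf(theta - h)) / (2 * h)
d2 = (cgf(theta + h) - 2 * cgf(theta) + cgf(theta - h)) / h**2
mean, var = tilted_moments(theta)
print(f"xi'  ~ {d1:.6f}  vs thermal mean     {mean:.6f}")
print(f"xi'' ~ {d2:.6f}  vs thermal variance {var:.6f}")
```

The two columns agree up to discretization error, in line with the identities for the thermal mean and thermal variance.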
Moreover, the thermal variance is invariant under shifts of the random variable: \begin{equation} \label{eqn:thermal-var-shift} \xi_{Y + c}''(\theta) = \xi_{Y}''(\theta) \quad\text{for all $c \in \mathbbm{R}$ and all $\theta \in \mathbbm{R}$.} \end{equation} This point follows from~\eqref{eqn:cgf-shift}. \subsection{Differential Inequalities} As mentioned, a natural mechanism for controlling the cgf $\xi_Y$ is to bound the thermal mean $\xi_Y'$ over an interval, which limits the growth of the cgf. The thermal mean, in turn, can be calculated from the thermal variance via integration: \begin{equation}\label{eqn:thermal-mean} \xi_Y'(\theta)-\operatorname{\mathbb{E}} Y \overset{\eqref{eqn:cgf-zero}}{=} \xi_Y'(\theta)-\xi_Y'(0) = \int_0^\theta \xi_Y''(s) \idiff{s} = - \int_{\theta}^0 \xi_Y''(s) \idiff{s} \quad\text{for $\theta \in \mathbbm{R}$.} \end{equation} In many situations, we can bound the thermal variance $\xi_Y''$ in terms of the thermal mean $\xi_Y'$. For instance, suppose that \begin{equation*} \xi_Y''(\theta) \leq a\cdot \xi_Y'(\theta) \quad\text{for some $a > 0$.} \end{equation*} Then \eqref{eqn:thermal-mean} leads to an inequality relating the thermal mean $\xi_Y'$ and the cgf $\xi_Y$. To solve this kind of differential inequality, we rely on a 1901 theorem of Petrovitch~\cite{Pet01:Sur-Maniere}. Compare this statement with the more familiar result that Gr{\"o}nwall derived in 1919. \begin{fact}[Petrovitch] \label{fact:petrovitch} Let $g : \mathbbm{R} \to \mathbbm{R}$ be the solution to the ordinary differential equation $$ \begin{cases} g'(t) = u( t; g(t) ) & \text{for $t \in \mathbbm{R}$}; \\ g(0) = 0, \end{cases} $$ where $u : \mathbbm{R}^2 \to \mathbbm{R}$ is any function. 
For a function $f : \mathbbm{R} \to \mathbbm{R}$ with the boundary condition $f(0) = 0$, the differential inequality \begin{align*} f'(t) \leq u( t; f(t) ) \quad\text{on $t > 0$} \quad\text{implies}\quad f(t) &\leq g(t) \quad\text{on $t \geq 0$}; \\ f'(t) \geq u( t; f(t) ) \quad\text{on $t < 0$} \quad\text{implies}\quad f(t) &\leq g(t) \quad\text{on $t \leq 0$}. \end{align*}\end{fact} In other words, we can solve a differential inequality by means of the ordinary differential equation that arises when we replace the inequality with an equality. This step is analogous to the Herbst argument in the entropy method~\cite[Chap.~6]{boucheron2013concentration}. \subsection{Poisson Cgf Bound} We illustrate this strategy by showing how a simple differential inequality leads to a Poisson-type cgf bound. \begin{lemma}[Poisson Cgf Bound] \label{lem:psi'topsi} Suppose that $Y$ is a zero-mean random variable. When $v \geq 0$, \begin{align} &\xi_Y'(\theta) \leq a \xi_Y(\theta) + v \theta \quad\text{on $\theta > 0$} \quad\text{implies}\quad \xi_Y(\theta) \leq v \cdot \frac{\mathrm{e}^{+a\theta} - a\theta - 1}{a^2} \quad\text{on $\theta \geq 0$}; \label{eqn:mgfY+} \\ &\xi_Y'(\theta) \geq a \xi_Y(\theta) + v \theta \quad \text{on $\theta < 0$} \quad\text{implies}\quad \xi_Y(\theta) \leq v \cdot \frac{\mathrm{e}^{-a\theta} + a\theta - 1}{a^2} \quad\text{on $\theta \leq 0$}. \label{eqn:mgfY-} \end{align} \end{lemma} \begin{proof} Consider the first differential inequality for the cgf $\xi_Y$. Since $\xi_Y(0) = \operatorname{\mathbb{E}} Y = 0$, the associated ordinary differential equation is $$ g(0) = 0 \quad\text{and}\quad g'(\theta) = a g(\theta) + v \theta \quad\text{on $\theta > 0$.} $$ By the method of integrating factors, we quickly arrive at the solution: $$ g(\theta) = v \cdot \frac{\mathrm{e}^{a \theta} - a \theta - 1}{a^2} \quad\text{on $\theta \geq 0$.} $$ Fact~\ref{fact:petrovitch} now implies the bound~\eqref{eqn:mgfY+}.
The bound~\eqref{eqn:mgfY-} follows from the parallel argument. \end{proof} \subsection{Poisson Concentration} The cgf bounds that appear in Lemma~\ref{lem:psi'topsi} have the same form as the cgf bounds that lead to the Bennett inequality for a sum of independent, bounded real random variables~\cite[Sec.~2.7]{boucheron2013concentration}. We arrive at the following tail inequalities. \begin{fact}[Poisson Concentration] \label{fact:poisson} Let $Y$ be a (zero-mean) random variable. For $v \geq 0$ and $t \geq 0$, \begin{align} \xi_Y(\theta) \leq v \cdot \frac{\mathrm{e}^{+a\theta} - a\theta - 1}{a^2} \quad\text{on $\theta \geq 0$} \quad\text{implies}\quad \log \Prob{ Y \geq +t } &\leq \frac{-v}{a^2} \psi\left(\frac{+at}{v}\right); \label{eqn:ProbY+} \\ \xi_Y(\theta) \leq v \cdot \frac{\mathrm{e}^{-a\theta} + a\theta - 1}{a^2} \quad\text{on $\theta \leq 0$} \quad\text{implies}\quad \log \Prob{ Y \leq -t } &\leq \frac{-v}{a^2} \psi\left(\frac{-at}{v}\right). \label{eqn:ProbY-} \end{align} The function $\psi$ is defined by $\psi(u) := (1 + u) \log(1 + u) - u$ for $u \geq -1$, with the convention that $\psi(u) = + \infty$ for $u < -1$. \end{fact} \begin{proof} Suppose that~\eqref{eqn:mgfY+} is in force. Then Fact~\ref{fact:cgf-tail} implies the tail bound~\eqref{eqn:ProbY+}: $$ \log \Prob{ Y \geq t } \leq \inf_{\theta > 0}\ \left[ \xi_Y(\theta ) - \theta t \right] \leq \inf_{\theta > 0}\ \left[ v \cdot \frac{\mathrm{e}^{a \theta} - a \theta - 1}{a^2} - \theta t \right] = - \frac{v}{a^2} \psi\left( \frac{at}{v} \right). $$ The infimum is attained at $\theta = a^{-1} \log(1 + at/v)$. The tail bound~\eqref{eqn:ProbY-} follows from the parallel argument. \end{proof} The tail function $\psi$ satisfies the inequality $\psi(u) \geq (u^2/2)/(1+u/3)$ for all $u \in \mathbbm{R}$. Therefore, each of the bounds~\eqref{eqn:ProbY+} and~\eqref{eqn:ProbY-} implies weaker, but more interpretable, results.
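The inequality $\psi(u) \geq (u^2/2)/(1+u/3)$ is elementary, and it can be spot-checked numerically. The following Python sketch evaluates both sides on a grid in $(-1, 10]$; this is an illustration, not a proof.

```python
import numpy as np

def psi(u):
    # psi(u) = (1 + u) log(1 + u) - u, finite for u >= -1.
    return (1.0 + u) * np.log1p(u) - u

def minorant(u):
    # Quadratic-over-linear lower bound (u^2/2) / (1 + u/3).
    return (u**2 / 2.0) / (1.0 + u / 3.0)

u = np.linspace(-0.99, 10.0, 2001)
gap = psi(u) - minorant(u)
print(f"smallest gap on the grid: {gap.min():.3e}")  # nonnegative
```

Substituting this minorant into the Poisson bounds yields the Bernstein-type inequalities displayed next.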
For all $t \geq 0$, \begin{equation} \label{eqn:bernstein} \log \Prob{ Y \geq + t} \leq \frac{-t^2/2}{v + at / 3} \quad\text{and}\quad \log \Prob{ Y \leq - t} \leq \frac{-t^2/2}{(v - at / 3)_+}. \end{equation} The tail bounds in~\eqref{eqn:bernstein} are called Bernstein inequalities; it is also common to combine them into a single formula. \section{From Thermal Variance Bounds to Concentration} Pursuing the ideas in Section~\ref{sec:genfun}, we can now derive concentration inequalities for the weighted intrinsic volume random variables as a consequence of bounds on the thermal variance. For the moment, we will take the thermal variance bounds for granted. The subsequent sections develop the machinery required to control the thermal variance of this type of random variable. \subsection{Rotation Volumes} To begin, we present detailed concentration results for rotation volume random variables, introduced in Section~\ref{sec:wvol}. \begin{theorem}[Rotation Volumes: Variance and Cgf] \label{thm:rotvol-conc} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body with rotation volume random variable $\mathring{I}_{\set{K}}$ and central rotation volume $\mathring{\delta}(\set{K}) = \operatorname{\mathbb{E}} \mathring{I}_{\set{K}}$. The variance of $\mathring{I}_{\set{K}}$ satisfies $$ \Var[ \mathring{I}_{\set{K}} ] \leq \mathring{\delta}(\set{K}). $$ The cgf of $\mathring{I}_{\set{K}}$ satisfies $$ \xi_{\mathring{I}_{\set{K}}}(\theta) \leq \mathring{\delta}(\set{K}) \cdot (\mathrm{e}^{\theta} - \theta - 1) \quad\text{for all $\theta \in \mathbbm{R}$.} $$ \end{theorem} \noindent The proof of Theorem~\ref{thm:rotvol-conc} begins in Section~\ref{sec:rotvol-from-tv} and is completed in Section~\ref{sec:rotvol-conc}. Combining Theorem~\ref{thm:rotvol-conc} with Fact~\ref{fact:poisson}, we immediately obtain probability bounds for the rotation volume random variable $\mathring{I}_{\set{K}}$ of a convex body $\set{K}$.
\begin{align*} \log \Prob{ \mathring{I}_{\set{K}} \geq \mathring{\delta}(\set{K}) + t } &\leq -\mathring{\delta}(\set{K})\cdot \psi(+t/\mathring{\delta}(\set{K})) && \quad\text{for $0 \leq t$;} \\ \log \Prob{ \mathring{I}_{\set{K}} \leq \mathring{\delta}(\set{K}) - t } &\leq -\mathring{\delta}(\set{K})\cdot \psi(-t/\mathring{\delta}(\set{K})) && \quad\text{for $0 \leq t \leq \mathring{\delta}(\set{K})$.} \end{align*} The tail function $\psi$ is defined in Fact~\ref{fact:poisson}. The Poisson-type bound implies a weaker Bernstein inequality of the form~\eqref{eqn:bernstein}: \begin{equation} \label{eqn:rotvol-bernstein} \log \Prob{ \frac{\mathring{I}_{\set{K}} - \mathring{\delta}(\set{K})}{t} \geq 1 } \leq \frac{-t^2/2}{\mathring{\delta}(\set{K}) + \abs{t}/3} \quad\text{for all $t \neq 0$.} \end{equation} The unusual formulation~\eqref{eqn:rotvol-bernstein} captures both the lower and upper tail in a single expression. Theorem~\ref{thm:rotvol-intro}, in the introduction, rephrases the latter inequality in a more standard way. It leads to the phase transition for random projections, Theorem~\ref{thm:randproj-intro}. As a further consequence, we can deduce probability inequalities for sums. Let $\mathring{I}_{\set{K}}$ and $\mathring{I}_{\set{M}}$ be \emph{independent} rotation volume random variables, associated with two convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$. Define the sum $\mathring{\Delta}(\set{K},\set{M}) := \mathring{\delta}(\set{K}) + \mathring{\delta}(\set{M})$ of central rotation volumes. Then \begin{equation} \label{eqn:rotvol-sum} \log \Prob{ \frac{(\mathring{I}_{\set{K}} + \mathring{I}_{\set{M}}) - \mathring{\Delta}(\set{K},\set{M}) }{t} \geq 1 } \leq \frac{-t^2/2}{\mathring{\Delta}(\set{K},\set{M}) + \abs{t}/3} \quad\text{for $t \neq 0$.} \end{equation} This result follows from the fact~\eqref{eqn:cgf-indep} that cgfs are additive, from Theorem~\ref{thm:rotvol-conc}, and from Fact~\ref{fact:poisson}. 
It plays a key role in the proof of Theorem~\ref{thm:rotmean-intro}, the phase transition for rotation means. \subsubsection{Reduction to Thermal Variance Bound} \label{sec:rotvol-from-tv} Theorem~\ref{thm:rotvol-conc} follows from a bound on the thermal variance of the rotation volume random variable. \begin{proposition}[Rotation Volumes: Thermal Variance] \label{prop:rotvol-tvar} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body. The thermal variance of the rotation volume random variable $\mathring{I}_{\set{K}}$ satisfies $$ \xi_{\mathring{I}_{\set{K}}}''(\theta) \leq \xi_{\mathring{I}_{\set{K}}}'(\theta) \quad\text{for all $\theta \in \mathbbm{R}$.} $$ \end{proposition} \noindent The proof of Proposition~\ref{prop:rotvol-tvar} is the object of Section~\ref{sec:rotvol-conc}. Right now, we can use it to derive concentration for the rotation volumes. \begin{proof}[Proof of Theorem~\ref{thm:rotvol-conc} from Proposition~\ref{prop:rotvol-tvar}] To obtain the variance bound, note that $$ \Var[ \mathring{I}_{\set{K}} ] \stackrel{\eqref{eqn:cgf-zero}}{=} \xi_{\mathring{I}_{\set{K}}}''(0) \leq \xi_{\mathring{I}_{\set{K}}}'(0) \stackrel{\eqref{eqn:cgf-zero}}{=} \operatorname{\mathbb{E}} \mathring{I}_{\set{K}} = \mathring{\delta}(\set{K}). $$ The inequality is Proposition~\ref{prop:rotvol-tvar}. To derive the cgf bound, we pass to the zero-mean random variable \begin{equation*} Y := \mathring{I}_{\set{K}} - \operatorname{\mathbb{E}} \mathring{I}_{\set{K}} = \mathring{I}_{\set{K}} - \mathring{\delta}(\set{K}). \end{equation*} The cgfs of $\mathring{I}_{\set{K}}$ and $Y$ are related as \begin{equation*} \label{eqn:cgf-I-Y} \xi_{\mathring{I}_{\set{K}}}(\theta) = \xi_{Y + \mathring{\delta}(\set{K})}(\theta) \overset{\eqref{eqn:cgf-shift}}{=} \xi_Y(\theta) + \mathring{\delta}(\set{K}) \,\theta \quad\text{for all $\theta \in \mathbbm{R}$.} \end{equation*} Assume that $\theta>0$. 
Using Proposition~\ref{prop:rotvol-tvar} again, we find that \begin{equation*} \xi'_Y(\theta) \stackrel{\eqref{eqn:thermal-mean}}{=} \int_0^\theta \xi''_Y(s) \idiff{s} \stackrel{\eqref{eqn:thermal-var-shift}}{=} \int_0^\theta \xi''_{\mathring{I}_{\set{K}}}(s) \idiff{s} \leq \int_0^{\theta} \xi'_{\mathring{I}_{\set{K}}}(s) \idiff{s} \stackrel{\eqref{eqn:cgf-zero}}{=} \xi_Y(\theta) + \mathring{\delta}(\set{K}) \, \theta. \end{equation*} For $\theta < 0$, the inequality is reversed. An application of Lemma~\ref{lem:psi'topsi} with $a = 1$ and $v = \mathring{\delta}(\set{K})$ furnishes the result. \end{proof} \subsection{Rigid Motion Volumes} We continue with a refined concentration result for the rigid motion volume random variable, introduced in Section~\ref{sec:wvol-conc-intro}. \begin{theorem}[Rigid Motion Volumes: Variance and Cgf] \label{thm:rmvol-conc} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body with rigid motion random variable $\bar{I}_{\set{K}}$ and central rigid motion volume $\bar{\delta}(\set{K}) := \operatorname{\mathbb{E}} \bar{I}_{\set{K}}$. Define the complement $ \bar{\delta}_{\circ}(\set{K}) := (n+1) - \bar{\delta}(\set{K}). $ The variance of $\bar{I}_{\set{K}}$ satisfies $$ \Var[ \bar{I}_{\set{K}} ] \leq \frac{2 \bar{\delta}(\set{K}) \bar{\delta}_{\circ}(\set{K})}{n+1} =: \bar{v}(\set{K}). $$ The cgf of $\bar{I}_{\set{K}}$ satisfies \begin{align*} \xi_{\bar{I}_{\set{K}}}(\theta) &\leq \bar{v}(\set{K}) \cdot \frac{\mathrm{e}^{\beta_{\circ} \theta} - \beta_{\circ} \theta - 1}{\beta_{\circ}^2} \quad\text{for $\theta \geq 0$} &\text{where}&& \beta_{\circ} := \frac{2\bar{\delta}_{\circ}(\set{K})}{n+1} < 2; \\ \xi_{\bar{I}_{\set{K}}}(\theta) &\leq \bar{v}(\set{K}) \cdot \frac{\mathrm{e}^{\beta \theta} - \beta \theta - 1}{\beta^2} \quad\text{for $\theta \leq 0$} &\text{where}&& \beta := \frac{2\bar{\delta}(\set{K})}{n+1} < 2. 
\end{align*} \end{theorem} \noindent The proof of Theorem~\ref{thm:rmvol-conc} appears in Section~\ref{sec:rmvol-from-tv} and Section~\ref{sec:rmvol-conc}. Together, Theorem~\ref{thm:rmvol-conc} and Fact~\ref{fact:poisson} deliver concentration inequalities for the rigid motion volumes. For all $t \geq 0$, \begin{align*} \log \Prob{ \bar{I}_{\set{K}} \geq \bar{\delta}(\set{K}) + t } &\leq \frac{-(n+1)}{2\bar{\delta}_{\circ}(\set{K})} \cdot \bar{\delta}(\set{K}) \cdot \psi( t/\bar{\delta}(\set{K}) ); \\ \log \Prob{ \bar{I}_{\set{K}} \leq \bar{\delta}(\set{K}) - t } &\leq \frac{-(n+1)}{2\bar{\delta}(\set{K})} \cdot \bar{\delta}_{\circ}(\set{K}) \cdot \psi( t /\bar{\delta}_{\circ}(\set{K}) ). \end{align*} For $t \neq 0$, the rigid motion random variable satisfies a Bernstein inequality~\eqref{eqn:bernstein} of the form \begin{align} \log \Prob{ \frac{\bar{I}_{\set{K}} - \bar{\delta}(\set{K})}{t} \geq 1 } \leq \frac{-t^2/4}{(\bar{\delta}(\set{K}) \wedge \bar{\delta}_{\circ}(\set{K})) + \abs{t}/3}. \label{eqn:rmvol-bernstein} \end{align} Theorem~\ref{thm:rmvol-intro}, in the introduction, contains a slightly weaker variant of the last display. The inequality~\eqref{eqn:rmvol-bernstein} contributes to the phase transition for random slices that appears in Theorem~\ref{thm:crofton-intro}. We can also obtain bounds for sums of rigid motion volume random variables. Let $\bar{I}_{\set{K}}$ and $\bar{I}_{\set{M}}$ be \emph{independent} rigid motion volume random variables associated with convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$. Define \begin{equation*} \bar{\Delta}(\set{K}, \set{M}) := \bar{\delta}(\set{K}) + \bar{\delta}(\set{M}) \quad\text{and}\quad \bar{v}(\set{K}, \set{M}) := (\bar{\delta}(\set{K}) \wedge \bar{\delta}_{\circ}(\set{K})) + (\bar{\delta}(\set{M}) \wedge \bar{\delta}_{\circ}(\set{M})). 
\end{equation*} Using the additivity~\eqref{eqn:cgf-indep} of cgfs and making some simple bounds in Theorem~\ref{thm:rmvol-conc}, we find that \begin{equation} \label{eqn:rmvol-sum} \log \Prob{ \frac{(\bar{I}_{\set{K}} + \bar{I}_{\set{M}}) - \bar{\Delta}(\set{K}, \set{M}) }{t} \geq 1 } \leq \frac{-t^2/4}{\bar{v}(\set{K}, \set{M}) + \abs{t} / 3}. \end{equation} This result yields the phase transition for the kinematic formula, reported in Theorem~\ref{thm:kinematic-intro}. \subsubsection{Reduction to Thermal Variance Bound} \label{sec:rmvol-from-tv} Theorem~\ref{thm:rmvol-conc} follows from a thermal variance bound. \begin{proposition}[Rigid Motion Volumes: Thermal Variance] \label{prop:rmvol-tvar} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body. The thermal variance of the rigid motion random variable $\bar{I}_{\set{K}}$ satisfies $$ \xi_{\bar{I}_{\set{K}}}''(\theta) \leq \frac{2}{n+1} \cdot \xi_{\bar{I}_{\set{K}}}'(\theta) \cdot \big[ (n+1) - \xi_{\bar{I}_{\set{K}}}'(\theta) \big] \quad\text{for all $\theta \in \mathbbm{R}$.} $$ \end{proposition} \noindent We establish Proposition~\ref{prop:rmvol-tvar} below in Section~\ref{sec:rmvol-conc}. Here, we show how the bound leads to concentration of the rigid motion volumes. \begin{proof}[Proof of Theorem~\ref{thm:rmvol-conc} from Proposition~\ref{prop:rmvol-tvar}] To derive the variance bound, apply Proposition~\ref{prop:rmvol-tvar} to obtain $$ \Var[ \bar{I}_{\set{K}} ] \stackrel{\eqref{eqn:cgf-zero}}{=} \xi_{\bar{I}_{\set{K}}}''(0) \leq \frac{2}{n+1} \cdot \xi_{\bar{I}_{\set{K}}}'(0) \cdot \big[ (n+1) - \xi_{\bar{I}_{\set{K}}}'(0) \big] \stackrel{\eqref{eqn:cgf-zero}}{=} \frac{2}{n+1} \cdot \bar{\delta}(\set{K}) \cdot \bar{\delta}_{\circ}(\set{K}). $$ We have used the definition $\bar{\delta}(\set{K}) = \operatorname{\mathbb{E}} \bar{I}_{\set{K}}$ twice. For the concentration result, we pass to the zero-mean random variable $Y:=\bar{I}_{\set{K}}-\bar{\delta}(\set{K})$. First, assume that $\theta>0$. 
Integrating the bound from Proposition~\ref{prop:rmvol-tvar}, we find that \begin{align*} \xi_Y'(\theta) &\stackrel{\eqref{eqn:thermal-mean}}{\leq} \frac{2}{n+1} \int_0^\theta \big[ (n + 1) - \xi_{\bar{I}_{\set{K}}}'(s) \big] \cdot \xi_{\bar{I}_{\set{K}}}'(s) \idiff{s}\\ & \stackrel{\phantom{\eqref{eqn:thermal-mean}}}{\leq} \frac{2}{n+1} \big[ (n + 1) - \xi_{\bar{I}_{\set{K}}}'(0) \big] \int_{0}^\theta \xi_{\bar{I}_{\set{K}}}'(s) \idiff{s} \stackrel{\eqref{eqn:cgf-zero}}{=} \frac{2 \bar{\delta}_\circ(\set{K})}{n+1} \cdot \xi_{\bar{I}_{\set{K}}}(\theta). \end{align*} The second inequality depends on the property that the derivative $\xi_{\bar{I}_{\set{K}}}'$ is increasing because the cgf $\xi_{\bar{I}_{\set{K}}}$ is convex. We have also used fact~\eqref{eqn:thermal-mean-extreme} to infer that the thermal mean is nonnegative: $\xi_{\bar{I}_{\set{K}}}'(s) \geq \inf \bar{I}_{\set{K}} \geq 0$. Now, invoke~\eqref{eqn:cgf-shift} to pass to the cgf $\xi_Y$ of the zero-mean variable $Y$: \begin{equation} \label{eqn:diff-ineq-pos} \xi_Y'(\theta) \leq \frac{2 \bar{\delta}_{\circ}(\set{K})}{n+1} \big[ \xi_Y(\theta) + \bar{\delta}(\set{K})\, \theta \big] = \beta_\circ \xi_Y(\theta) + \bar{v}(\set{K}) \, \theta \quad\text{for $\theta > 0$.} \end{equation} We can invoke the first case of Lemma~\ref{lem:psi'topsi} with $a = \beta_{\circ}$ and $v = \bar{v}(\set{K})$ to obtain the cgf bound for $\theta \geq 0$. If $\theta< 0$, a straightforward variant of the same argument implies that \begin{equation} \label{eqn:diff-ineq-neg} \xi_Y'(\theta) \geq \frac{2\bar{\delta}(\set{K})}{n+1} \left[ \bar{\delta}_\circ(\set{K})\,\theta - \xi_Y(\theta) \right] = - \beta \xi_Y(\theta) + \bar{v}(\set{K}) \, \theta \quad\text{for $\theta < 0$.} \end{equation} The cgf bound for $\theta \leq 0$ follows from the second case of Lemma~\ref{lem:psi'topsi} with $a=-\beta$ and with $v = \bar{v}(\set{K})$. 
\end{proof} \subsection{Intrinsic Volumes} Using the same methodology, we can prove a concentration inequality for the intrinsic volume random variable that improves significantly over the results in our prior work~\cite{LMNPT20:Concentration-Euclidean}. These results are not immediately relevant, so we have postponed them to Appendix~\ref{app:intvol}. \section{From Distance Integrals to Generating Functions} \label{sec:distance-integral} As we have seen, the concentration results for weighted intrinsic volumes depend on estimates for the thermal variance of the associated random variable. To obtain these bounds, the first step is to find an alternative expression for the mgf of the random variable. The key observation is that the Steiner formula~\eqref{eqn:steiner-intro} allows us to pass from the discrete weighted intrinsic volume random variable, taking values in $\{0, 1, \dots, n\}$, to a continuous random variable, taking values in $\mathbbm{R}^n$. This section contains the background for this argument. The next two sections execute the approach for the two sequences of weighted intrinsic volumes. \subsection{The Distance to a Convex Body} To develop this approach, we first introduce some functions related to the Euclidean distance between a point and a convex body. 
\begin{definition}[Distance to a Convex Body] \label{def:dist} The \emph{distance} to the nonempty convex body $\set{K} \subset \mathbbm{R}^n$ is the function $$ \dist_{\set{K}}(\vct{x}) := \min \{ \norm{ \vct{y} - \vct{x}} : \vct{y} \in \set{K} \} \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} $$ The \emph{projection} onto the convex body $\set{K}$ is the (unique) point where the distance is realized: $$ \proj_{\set{K}}(\vct{x}) := \arg \min \{ \norm{ \vct{y} - \vct{x}} : \vct{y} \in \set{K} \} \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} $$ The \emph{normal vector} to the convex body, induced by a point $\vct{x} \in \mathbbm{R}^n$, is $$ \vct{n}_{\set{K}}(\vct{x}) := \frac{\vct{x} - \proj_{\set{K}}(\vct{x})}{\|\vct{x}-\proj_{\set{K}}(\vct{x})\|} = \frac{\vct{x} - \proj_{\set{K}}(\vct{x})}{\dist_{\set{K}}(\vct{x})} \quad\text{for $\vct{x} \in \mathbbm{R}^n\backslash \set{K}$.} $$ If $\vct{x}\in \set{K}$, we set $\vct{n}_{\set{K}}(\vct{x})=\zerovct$. \end{definition} \subsection{The Steiner Formula as a Distance Integral}\label{sub:steiner-dist} We can reinterpret Steiner's formula~\eqref{eqn:steiner-intro} as a statement that integrals of the distance to a convex body can be evaluated in terms of the intrinsic volumes of the convex body. This perspective dates at least as far back as Hadwiger's influential paper~\cite{Had75:Willssche}. \begin{fact}[Generalized Steiner Formula] \label{fact:distance-integral} For every function $f : \mathbbm{R}_+ \to \mathbbm{R}$ where the integrals are finite, \begin{equation} \label{eqn:gen-steiner} \int_{\mathbbm{R}^n} f(\dist_{\set{K}}(\vct{x})) \idiff{\vct{x}} = f(0) \cdot \mathrm{V}_n(\set{K}) + \sum_{i=0}^{n-1} \left( \int_0^\infty f(r) \cdot r^{n-i-1} \idiff{r} \right) \omega_{n-i} \cdot \mathrm{V}_i(\set{K}). \end{equation} \end{fact} \noindent See~\cite[Prop.~2.4]{LMNPT20:Concentration-Euclidean} for a short proof of this result.
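As a worked instance of Fact~\ref{fact:distance-integral}, the choice $f(r) = \mathrm{e}^{-\pi r^2}$ recovers Hadwiger's integral representation of the Wills functional~\cite{Had75:Willssche}. Since $\int_0^\infty \mathrm{e}^{-\pi r^2} r^{n-i-1} \idiff{r} = \Gamma((n-i)/2) / (2\pi^{(n-i)/2}) = 1/\omega_{n-i}$, every weight in~\eqref{eqn:gen-steiner} collapses to one:

```latex
% Generalized Steiner formula with f(r) = exp(-pi r^2): each coefficient
%   ( \int_0^\infty e^{-pi r^2} r^{n-i-1} dr ) * omega_{n-i}  equals one,
% so the distance integral reproduces the Wills functional of K.
\begin{equation*}
\int_{\mathbbm{R}^n} \mathrm{e}^{-\pi \dist_{\set{K}}^2(\vct{x})} \idiff{\vct{x}}
  = \mathrm{V}_n(\set{K})
  + \sum_{i=0}^{n-1} \frac{\Gamma((n-i)/2)}{2 \pi^{(n-i)/2}} \cdot \omega_{n-i} \cdot \mathrm{V}_i(\set{K})
  = \sum_{i=0}^{n} \mathrm{V}_i(\set{K}).
\end{equation*}
```

This special case already shows how a judicious choice of $f$ converts the entire intrinsic volume sequence into a single distance integral.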
Following~\cite{LMNPT20:Concentration-Euclidean}, we observe that the right-hand side of the formula~\eqref{eqn:gen-steiner} is a linear function of the sequence of intrinsic volumes of $\set{K}$. By a careful choice of the function $f$, we can express any moment of the intrinsic volume sequence. \emph{A fortiori}, we can also express any moment of the rotation volumes or the rigid motion volumes. The value of this approach is that we can treat the distance integrals that arise using methods from geometric functional analysis. In particular, we will exploit variance inequalities for log-concave and concave measures to obtain bounds for distance integrals and, thereby, for weighted intrinsic volumes. \begin{remark}[Related Work] A rudimentary form of this argument first appeared in our paper~\cite{ALMT14:Living-Edge} on phase transitions in conic geometry. Later, the papers~\cite{MT14:Steiner-Formulas} and~\cite{GNP17:Gaussian-Phase} demonstrated the full power of this approach for treating the conic intrinsic volumes. Our recent work~\cite{LMNPT20:Concentration-Euclidean} contains an initial attempt to execute a similar method for Euclidean intrinsic volumes. In this paper, we have found a seamless implementation of the idea. \end{remark} \subsection{Properties of the Distance Function} The distance function enjoys a number of elegant properties, which we record for later reference. \begin{fact}[Properties of the Distance Function] \label{fact:dist} The maps appearing in Definition~\ref{def:dist} satisfy the following properties. 
\begin{enumerate} \item\label{dist:1} The function $\dist_{\set{K}}$ and its square $\dist_{\set{K}}^2$ are convex; \item\label{dist:2} The function $\dist_{\set{K}}^2$ is everywhere differentiable, and $\grad \dist_{\set{K}}^2(\vct{x}) = 2 \dist_{\set{K}}(\vct{x}) \, \vct{n}_{\set{K}}(\vct{x})$; \item\label{dist:3} The Hessian of the squared distance satisfies $(\operatorname{Hess} \dist^2_{\set{K}}(\vct{x})) \, \vct{n}_{\set{K}}(\vct{x}) = 2\vct{n}_{\set{K}}(\vct{x})$. \end{enumerate} \end{fact} \begin{proof}[Proof Sketch] The convexity of $\dist_{\set{K}}(\vct{x})$ follows because it is the minimum of a jointly convex function of $(\vct{x}, \vct{y})$ with respect to the variable $\vct{y}$, and the convexity of $\dist_{\set{K}}^2$ is a standard consequence of the convexity and nonnegativity of $\dist_{\set{K}}$. For Claim~\eqref{dist:2}, the differentiability and the gradient of the squared distance function, see~\cite[Thm.~2.26(b)]{RW98:Variational-Analysis}. Claim~\eqref{dist:3} follows by computing the gradient on both sides of the identity $4\dist^2_{\set{K}}(\vct{x}) = \|\grad \dist_{\set{K}}^2(\vct{x})\|^2$ and equating terms. \end{proof} \section{Rotation Volumes: Thermal Variance Bound} \label{sec:rotvol-conc} In this section, we establish Proposition~\ref{prop:rotvol-tvar}, the thermal variance bound for the rotation volumes. The basic idea is to invoke the generalized Steiner formula (Fact~\ref{fact:distance-integral}) to rewrite the mgf of the rotation volume random variable in terms of a distance integral. After some further manipulations, we can bound the distance integral using a functional inequality for log-concave probability measures. We begin with the rotation volumes because the ideas shine through most brightly. The other cases follow the same pattern of argument, but the mass of detail becomes denser. \subsection{Setup} Fix a nonempty convex body $\set{K} \subset \mathbbm{R}^n$. 
This is the only convex body that appears in this section, so we will ruthlessly suppress it from the notation. For each $i = 0, 1, 2, \dots, n$, let $\mathrm{V}_i$ denote the $i$th intrinsic volume of $\set{K}$. The rotation volumes $\mathring{\intvol}_i$ and the total rotation volume $\mathring{\wills}$ are given by \begin{equation} \label{eqn:rotvol-def-pf} \mathring{\intvol}_i := \frac{\omega_{n+1}}{\omega_{i+1}} \mathrm{V}_{n-i} \quad\text{and}\quad \mathring{\wills} := \sum_{i=0}^n \mathring{\intvol}_i. \end{equation} The rotation volume random variable $\mathring{I}$ follows the distribution \begin{equation} \label{eqn:rotvol-rv-def} \Prob{ \mathring{I} = n - i } = \mathring{\intvol}_{i} / \mathring{\wills} \quad\text{for $i = 0, 1, 2, \dots, n$.} \end{equation} The central rotation volume $\mathring{\delta} := \operatorname{\mathbb{E}} \mathring{I}$. We also abbreviate $\dist := \dist_{\set{K}}$. \subsection{The Distance Integral} \label{sec:rotvol-distint} First, we express the exponential moments of the sequence of rotation volumes as a distance integral. \begin{proposition}[Rotation Volumes: Distance Integral] \label{prop:rotvol-distint} For each $\theta \in \mathbbm{R}$, define the convex potential \begin{equation} \label{eqn:rotvol-potent} \mathring{J}_{\theta}(\vct{x}) := 2 \pi \mathrm{e}^{\theta} \dist(\vct{x}) \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} \end{equation} For any function $h : \mathbbm{R}_+ \to \mathbbm{R}$ where the expectations on the right-hand side are finite, \begin{equation} \label{eqn:rotvol-distint} \frac{\omega_{n+1} \mathrm{e}^{n\theta}}{2} \int_{\mathbbm{R}^n} h(\mathring{J}_{\theta}(\vct{x})) \cdot \mathrm{e}^{ - \mathring{J}_{\theta}(\vct{x}) } \idiff{\vct{x}} = \sum_{i=0}^n \operatorname{\mathbb{E}}[ h(G_{i}) ] \, \mathrm{e}^{(n-i) \theta} \cdot \mathring{\intvol}_i. \end{equation} The random variable $G_i \sim \textsc{gamma}(i, 1)$ follows the gamma distribution with shape parameter $i$ and scale parameter $1$. 
By convention, $G_0 = 0$. \end{proposition} \begin{proof} To verify that the potential $\mathring{J}_{\theta}$ defined in~\eqref{eqn:rotvol-potent} is convex, simply invoke Fact~\ref{fact:dist}\eqref{dist:1}, which states that the distance function is convex. To obtain the moment identity, instantiate Fact~\ref{fact:distance-integral} with the function $f(r) = h(2\pi \mathrm{e}^{\theta} r) \, \mathrm{e}^{-2\pi \mathrm{e}^\theta r}$. We ascertain that \begin{align*} \int_{\mathbbm{R}^n} h(\mathring{J}_{\theta}(\vct{x})) \cdot \mathrm{e}^{-\mathring{J}_{\theta}(\vct{x})} \idiff{\vct{x}} &= h(0) \cdot \mathrm{V}_n + \sum_{i=0}^{n-1} \left(\int_0^\infty h(2\pi \mathrm{e}^{\theta} r) \cdot \mathrm{e}^{-2\pi\mathrm{e}^{\theta}r} r^{n-i-1} \idiff{r} \right) \omega_{n-i} \cdot \mathrm{V}_i \\ &= \frac{2}{\omega_{n+1}} \left[ h(0) \cdot \mathring{\intvol}_0 + \sum_{i=1}^n \left( \int_0^\infty h(2\pi \mathrm{e}^{\theta} r) \cdot \mathrm{e}^{-2\pi\mathrm{e}^{\theta}r} r^{i-1} \idiff{r} \right) \frac{\omega_{i} \omega_{i+1}}{2} \cdot \mathring{\intvol}_i \right]. \end{align*}We have used the definition~\eqref{eqn:rotvol-def-pf} of $\mathring{\intvol}_i$ and the fact~\eqref{eqn:ball-sphere} that $\omega_1 = 2$. To handle the integrals, make the change of variables $2\pi\mathrm{e}^{\theta} r \mapsto s$. We recognize an expectation with respect to the gamma distribution: $$ \int_0^\infty h(2\pi \mathrm{e}^{\theta} r) \cdot r^{i-1} \mathrm{e}^{-2\pi\mathrm{e}^{\theta}r} \idiff{r} = \frac{1}{(2\pi \mathrm{e}^{\theta})^{i}} \int_{0}^\infty h(s) \cdot s^{i-1} \mathrm{e}^{-s} \idiff{s} = \frac{\Gamma(i)}{(2\pi \mathrm{e}^{\theta})^{i}} \operatorname{\mathbb{E}}[ h(G_i) ]. 
$$ Altogether, $$ \frac{\omega_{n+1}}{2} \int_{\mathbbm{R}^n} h(\mathring{J}_{\theta}(\vct{x})) \cdot \mathrm{e}^{-\mathring{J}_{\theta}(\vct{x})} \idiff{\vct{x}} = \sum_{i=0}^n \frac{\Gamma(i) \, \omega_i \omega_{i+1}}{2 (2\pi)^i} \cdot \operatorname{\mathbb{E}}[ h(G_i) ] \, \mathrm{e}^{-i \theta}\cdot \mathring{\intvol}_i. $$ Owing to the formula~\eqref{eqn:ball-sphere} for the surface area and the Legendre duplication formula, each of the fractions on the right-hand side equals one. Multiply each side by $\mathrm{e}^{n\theta}$ to complete the argument. \end{proof} As an immediate corollary, we obtain the metric representations of the total rotation volume and the central rotation volume, reported in Proposition~\ref{prop:rotvol-metric}. \begin{proof}[Proof of Proposition~\ref{prop:rotvol-metric}] Apply Proposition~\ref{prop:rotvol-distint} with $h(s) = 1$ and then with $h(s) = s$, using the fact that $\operatorname{\mathbb{E}} G_i=i$. Select the parameter $\theta = 0$. \end{proof} \subsection{A Family of Log-Concave Measures} Proposition~\ref{prop:rotvol-distint} allows us to obtain an alternative expression for the mgf $m_{\mathring{I}}$ of the rotation volume random variable. Indeed, we can choose $h(s) = 1$ to obtain \begin{equation} \label{eqn:rotvol-mgf} m_{\mathring{I}}(\theta) \stackrel{\eqref{eqn:mgf}}{=} \operatorname{\mathbb{E}} \mathrm{e}^{\theta \mathring{I}} \stackrel{\eqref{eqn:rotvol-rv-def}}{=} \frac{1}{\mathring{\wills}}\sum_{i=0}^{n} \mathrm{e}^{(n-i)\theta} \cdot \mathring{\intvol}_i = \frac{\omega_{n+1} \mathrm{e}^{n\theta}}{2 \mathring{\wills}} \int_{\mathbbm{R}^n} \mathrm{e}^{-\mathring{J}_{\theta}(\vct{x})} \idiff{\vct{x}}. 
\end{equation} For each $\theta \in \mathbbm{R}$, construct the probability measure $\mathring{\mu}_{\theta}$ with the log-concave density \begin{equation*} \label{eqn:rotvol-measure} \frac{\diff\mathring{\mu}_{\theta}(\vct{x})}{\diff{\vct{x}}} = \frac{\omega_{n+1}}{2 \mathring{\wills}} \cdot \frac{ \mathrm{e}^{n\theta}}{ m_{\mathring{I}}(\theta)} \cdot \mathrm{e}^{-\mathring{J}_{\theta}(\vct{x})} \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} \end{equation*} Owing to~\eqref{eqn:rotvol-mgf}, the measure $\mathring{\mu}_{\theta}$ has total mass one. The density is log-concave because $\mathring{J}_{\theta}$ is convex. \begin{corollary}[Rotation Volumes: Probabilistic Formulation] \label{cor:rotvol-lc} Instate the notation from Proposition~\ref{prop:rotvol-distint}. Draw a random vector $\vct{z} \sim \mathring{\mu}_{\theta}$. Then $$ \operatorname{\mathbb{E}}[ h(\mathring{J}_{\theta}(\vct{z})) ] = \frac{1}{m_{\mathring{I}}(\theta)} \sum_{i=0}^n \operatorname{\mathbb{E}}[ h(G_{i}) ]\, \mathrm{e}^{(n-i) \theta} \cdot \Prob{ \mathring{I} = n - i }. $$ \end{corollary} \begin{proof} This result is a straightforward reinterpretation of Proposition~\ref{prop:rotvol-distint}. \end{proof} \subsection{A Thermal Variance Identity} With Corollary~\ref{cor:rotvol-lc} at hand, we quickly obtain an alternative formula for $\xi_{\mathring{I}}''(\theta)$, the thermal variance~\eqref{eqn:thermal-var} of the rotation volume random variable $\mathring{I}$. These results are phrased in terms of a variance with respect to the measure $\mathring{\mu}_{\theta}$. \begin{lemma}[Rotation Volumes: Thermal Variance Identity] \label{lem:rotvol-tmv} Draw a random vector $\vct{z} \sim \mathring{\mu}_{\theta}$. 
The thermal mean and variance of the rotation volume random variable satisfy \begin{align} \xi_{\mathring{I}}'(\theta) &= \operatorname{\mathbb{E}}[ n - \mathring{J}_{\theta}(\vct{z}) ]; \notag \\ \xi_{\mathring{I}}''(\theta) &= \Var[ \mathring{J}_{\theta}(\vct{z}) ] - (n - \xi_{\mathring{I}}'(\theta)). \label{eqn:rotvol-tmv} \end{align} \end{lemma} \begin{proof} The gamma random variables $G_i$ satisfy the identities $$ \operatorname{\mathbb{E}}[ n - G_{i} ] = n - i \quad\text{and}\quad \operatorname{\mathbb{E}}[ (n - G_{i})^2 - G_{i} ] = (n-i)^2. $$ First, we apply Corollary~\ref{cor:rotvol-lc} with $h(s) = n - s$ to arrive at \begin{equation} \label{eqn:rotvol-tm-pf} \xi_{\mathring{I}}'(\theta) \overset{\eqref{eqn:thermal-expect}}{=} \frac{m_{\mathring{I}}'(\theta)}{m_{\mathring{I}}(\theta)} \stackrel{\eqref{eqn:thermal-expect}}{=} \frac{1}{m_{\mathring{I}}(\theta)} \sum_{i=0}^n (n - i) \, \mathrm{e}^{(n-i)\theta} \cdot \Prob{ \mathring{I} = n - i } = \operatorname{\mathbb{E}}[ n - \mathring{J}_{\theta}(\vct{z}) ]. \end{equation} Next, we apply Corollary~\ref{cor:rotvol-lc} with $h(s) = (n-s)^2 - s$. 
This step yields \begin{align*} \xi_{\mathring{I}}''(\theta) \stackrel{\eqref{eqn:thermal-var}}{=} \frac{m_{\mathring{I}}''(\theta)}{m_{\mathring{I}}(\theta)} - (\xi_{\mathring{I}}'(\theta))^2 &\stackrel{\eqref{eqn:thermal-var}}{=} \left[ \frac{1}{m_{\mathring{I}}(\theta)} \sum_{i=0}^n (n-i)^2 \, \mathrm{e}^{(n-i) \theta} \cdot \Prob{ \mathring{I} = n - i } \right] - (\xi_{\mathring{I}}'(\theta))^2 \\ &\stackrel{\eqref{eqn:rotvol-tm-pf}}{=} \operatorname{\mathbb{E}}[ (n - \mathring{J}_{\theta}(\vct{z}))^2 - \mathring{J}_{\theta}(\vct{z}) ] - (\operatorname{\mathbb{E}}[n - \mathring{J}_{\theta}(\vct{z})])^2 \\ &\stackrel{\phantom{\eqref{eqn:rotvol-tm-pf}}}{=} \Var[ n - \mathring{J}_{\theta}(\vct{z}) ] + \operatorname{\mathbb{E}}[ n - \mathring{J}_{\theta}(\vct{z}) ] - n \\ &\stackrel{\eqref{eqn:rotvol-tm-pf}}{=} \Var[ \mathring{J}_{\theta}(\vct{z}) ] + (\xi_{\mathring{I}}'(\theta) - n). \end{align*} We used the invariance properties of the variance. This is the advertised result. \end{proof} \subsection{Variance of Information} We have now converted the problem of bounding the thermal variance of the rotation volume random variable into a problem about the variance of a function of a log-concave random variable. To control this variance, we require a recent result on the information content of a log-concave random variable. \begin{fact}[Nguyen, Wang] \label{fact:nguyen-wang} Let $J : \mathbbm{R}^n \to \mathbbm{R}$ be a convex potential, and consider the log-concave probability measure $\mu$ on $\mathbbm{R}^n$ whose density is proportional to $\mathrm{e}^{-J}$. Then \begin{equation}\label{eqn:nguyen-wang} \Var_{\vct{z} \sim \mu}[J(\vct{z})] \leq n. \end{equation} \end{fact} Fact~\ref{fact:nguyen-wang} was obtained independently by Nguyen~\cite{Ngu13:Inegalites-Fonctionelles} and by Wang~\cite{wang2014heat} in their doctoral theses. 
For short proofs of this result, we refer to~\cite[Thm.~2.3]{FMW16:Optimal-Concentration} and~\cite[Rem.~4.2]{BGG18:Dimensional-Improvements}. The variance bound~\eqref{eqn:nguyen-wang} holds with equality when the potential $J$ is 1-homogeneous, but we believe that improvements remain possible. \subsection{Proof of Proposition~\ref{prop:rotvol-tvar}} We must bound the thermal variance $\xi_{\mathring{I}}''(\theta)$ of the rotation volume random variable. To do so, we combine Lemma~\ref{lem:rotvol-tmv} and Fact~\ref{fact:nguyen-wang}: $$ \xi_{\mathring{I}}''(\theta) \overset{\eqref{eqn:rotvol-tmv}}{=} \Var[ \mathring{J}_{\theta}(\vct{z}) ] - (n - \xi_{\mathring{I}}'(\theta)) \overset{\eqref{eqn:nguyen-wang}}{\leq} n - (n - \xi_{\mathring{I}}'(\theta)) = \xi_{\mathring{I}}'(\theta). $$ This completes the proof of Proposition~\ref{prop:rotvol-tvar} and, thus, the proof of Theorem~\ref{thm:rotvol-conc}. \section{Rigid Motion Volumes: Thermal Variance Bound} \label{sec:rmvol-conc} This section contains the proof of Proposition~\ref{prop:rmvol-tvar}, following the approach from Section~\ref{sec:rotvol-conc}. This case is significantly more complicated than the case of rotation volumes. Nevertheless, the overall strategy remains the same. \subsection{Setup} Fix a nonempty convex body $\set{K} \subset \mathbbm{R}^n$, which will not appear in the notation. For each $i = 0, 1, 2, \dots, n$, let $\mathrm{V}_i$ be the $i$th intrinsic volume of $\set{K}$. Introduce the rigid motion volumes $\bar{\intvol}_i$ and the total rigid motion volume $\bar{\wills}$: \begin{equation} \label{eqn:rmvol-def-pf} \bar{\intvol}_i := \frac{\omega_{n+1}}{\omega_{i+1}} \mathrm{V}_i \quad\text{and}\quad \bar{\wills} := \sum_{i=0}^n \bar{\intvol}_i. 
\end{equation} The rigid motion volume random variable $\bar{I}$ has the distribution \begin{equation} \label{eqn:rmvol-rv-pf} \Prob{ \bar{I} = n-i} = \bar{\intvol}_i / \bar{\wills} \quad\text{for $i = 0, 1, 2, \dots, n$.} \end{equation} Write $\bar{\delta} := \operatorname{\mathbb{E}} \bar{I} $ for the expectation. We also abbreviate $\dist := \dist_{\set{K}}$. \subsection{The Distance Integral} \label{sec:rmvol-distint} We begin with a distance integral that generates the exponential moments of the rigid motion volumes. \begin{proposition}[Rigid Motion Volumes: Distance Integral] \label{prop:rmvol-potent} For each $\theta \in \mathbbm{R}$, define the convex potential \begin{equation} \label{eqn:rmvol-potent} \bar{J}_{\theta}(\vct{x}) := \big[ 1 + \mathrm{e}^{-2\theta} \dist^2(\vct{x}) \big]^{1/2} \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} \end{equation} For any function $h : [0,1] \to \mathbbm{R}$ where the expectations on the right-hand side are finite, $$ \int_{\mathbbm{R}^n} h\big( 1 - \bar{J}_{\theta}(\vct{x})^{-2} \big) \cdot \bar{J}_{\theta}(\vct{x})^{-(n+1)} \idiff{\vct{x}} = \sum_{i=0}^n \operatorname{\mathbb{E}}[ h(B_{n-i}) ] \, \mathrm{e}^{(n-i) \theta} \cdot \bar{\intvol}_i. $$ The random variable $B_{n-i} \sim \textsc{beta}((n-i)/2, (i+1)/2)$ follows a beta distribution. By convention $B_0 = 0$. \end{proposition} \begin{proof} The potential $\bar{J}_{\theta}$ defined in~\eqref{eqn:rmvol-potent} is convex because of Fact~\ref{fact:dist}\eqref{dist:1} and elementary convexity arguments. To obtain the identity, we invoke Fact~\ref{fact:distance-integral} with the function $$ f(r) = \frac{h\big(1 - (1+\mathrm{e}^{-2\theta} r^2 )^{-1} \big)}{(1+\mathrm{e}^{-2\theta} r^2)^{(n+1)/2}} = \frac{h\big( \mathrm{e}^{-2\theta} r^2 / (1 + \mathrm{e}^{-2\theta} r^2) \big)}{(1+\mathrm{e}^{-2\theta} r^2)^{(n+1)/2}}. 
$$ This yields a formula for the moments of the rigid motion volume sequence: \begin{align*} \int_{\mathbbm{R}^n} &h\big(1 - \bar{J}_{\theta}(\vct{x})^{-2} \big) \cdot \bar{J}_{\theta}(\vct{x})^{-(n+1)} \idiff{\vct{x}} \\ &= h(0) \cdot \mathrm{V}_n + \sum_{i=0}^{n-1} \left( \int_0^\infty h\left(\frac{\mathrm{e}^{-2\theta} r^2}{1 + \mathrm{e}^{-2\theta} r^2}\right) \cdot \frac{r^{n-i-1}}{(1 + \mathrm{e}^{-2\theta} r^2)^{(n+1)/2}} \idiff{r} \right) \omega_{n-i} \cdot \mathrm{V}_i \\ &= h(0) \cdot \bar{\intvol}_n + \sum_{i=0}^{n-1} \left( \frac{\mathrm{e}^{(n-i)\theta}}{2} \int_0^1 h(s) \cdot s^{(n-i)/2 - 1} (1 - s)^{(i+1)/2 - 1} \idiff{s} \right) \frac{ \omega_{n-i}\omega_{i+1}}{ \omega_{n+1} }\cdot \bar{\intvol}_i \end{align*}We have used the definition~\eqref{eqn:rmvol-def-pf} of $\bar{\intvol}_i$ and made the change of variables $\mathrm{e}^{-2\theta} r^2 / (1+ \mathrm{e}^{-2\theta} r^2) \mapsto s$. Using the formula~\eqref{eqn:ball-sphere} for the surface area of a sphere, we see that the weight in each term coincides with a beta function: $$ \frac{\omega_{n-i} \omega_{i+1}}{2 \omega_{n+1}} = \frac{\Gamma((n+1)/2)}{\Gamma((n-i)/2) \, \Gamma((i+1)/2)} = \frac{1}{\mathrm{B}((n-i)/2, (i+1)/2)}. $$ Now, we recognize that the integral and the surface area constants combine to produce the expectation $\operatorname{\mathbb{E}}[ h(B_{n-i}) ]$ with respect to the beta random variable $B_{n-i}$. Last, combine the displays. \end{proof} As a consequence, we arrive at metric representations for the total rigid motion volume and the central rigid motion volume, stated in Proposition~\ref{prop:rmvol-metric}. \begin{proof}[Proof of Proposition~\ref{prop:rmvol-metric}] Apply Proposition~\ref{prop:rmvol-potent} with $h(s) = 1$ and then with $h(s) = (n+1) s$, using the fact that $\operatorname{\mathbb{E}}[B_{n-i}]=(n-i)/(n+1)$. Select the parameter $\theta = 0$. 
\end{proof} \subsection{A Family of Concave Measures} Using Proposition~\ref{prop:rmvol-potent}, we can express the mgf $m_{\bar{I}}$ of the rigid motion volume random variable as a distance integral. Choosing $h(s) = 1$, we find that \begin{equation} \label{eqn:rmvol-mgf} m_{\bar{I}}(\theta) \stackrel{\eqref{eqn:mgf}}{=} \operatorname{\mathbb{E}} \mathrm{e}^{\theta \bar{I}} \stackrel{\eqref{eqn:rmvol-rv-pf}}{=} \frac{1}{\bar{\wills}} \sum_{i=0}^n \mathrm{e}^{(n-i)\theta} \cdot \bar{\intvol}_i = \frac{1}{\bar{\wills}} \int_{\mathbbm{R}^n} \bar{J}_{\theta}(\vct{x})^{-(n+1)} \idiff{\vct{x}}. \end{equation} For each $\theta \in \mathbbm{R}$, construct the concave probability measure $\bar{\mu}_{\theta}$ with the density $$ \frac{\diff\bar{\mu}_{\theta}(\vct{x})}{\diff{\vct{x}}} = \frac{1}{\bar{\wills}} \cdot \frac{1}{m_{\bar{I}}(\theta)} \cdot \bar{J}_{\theta}(\vct{x})^{-(n+1)} \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} $$ The identity~\eqref{eqn:rmvol-mgf} ensures that $\bar{\mu}_{\theta}$ is normalized. \begin{corollary}[Rigid Motion Volumes: Probabilistic Formulation] \label{cor:rmvol-lc} Instate the notation of Proposition~\ref{prop:rmvol-potent}. Draw a random vector $\vct{z} \sim \bar{\mu}_{\theta}$. Then $$ \operatorname{\mathbb{E}}\big[ h\big(1 - \bar{J}_{\theta}(\vct{z})^{-2} \big) \big] = \frac{1}{m_{\bar{I}}(\theta)} \sum_{i=0}^n \operatorname{\mathbb{E}}[ h(B_{n-i}) ] \, \mathrm{e}^{(n-i)\theta} \cdot \Prob{ \bar{I} = n - i }. $$ \end{corollary} \begin{proof} This is just a restatement of Proposition~\ref{prop:rmvol-potent}. \end{proof} \subsection{A Thermal Variance Identity} Using Corollary~\ref{cor:rmvol-lc}, we can evaluate the thermal mean $\xi_{\bar{I}}'$ and the thermal variance $\xi_{\bar{I}}''$ of the rigid motion volume random variable $\bar{I}$ in terms of integrals against the concave measure $\bar{\mu}_{\theta}$. \begin{lemma}[Rigid Motion Volumes: Thermal Variance Identity] \label{lem:rmvol-tmv} Draw a random vector $\vct{z} \sim \bar{\mu}_{\theta}$.
Then \begin{align} \xi_{\bar{I}}'(\theta) &= (n+1) \cdot \operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ]; \label{eqn:rmvol-tm} \\ \xi_{\bar{I}}''(\theta) &= (n+1)(n+3) \cdot \Var[ \bar{J}_{\theta}(\vct{z})^{-2} ] - \frac{2 \xi_{\bar{I}}'(\theta) \cdot [(n+1) - \xi_{\bar{I}}'(\theta)]}{n+1}. \label{eqn:rmvol-tv} \end{align} \end{lemma} \begin{proof} The beta random variable $B_{n-i}$ satisfies $$ (n+1) \, \operatorname{\mathbb{E}}[ B_{n-i} ] = n-i \quad\text{and}\quad (n+1) \operatorname{\mathbb{E}}[ (n+3) B_{n-i}^2 - 2 B_{n-i} ] = (n-i)^2. $$ Invoke Corollary~\ref{cor:rmvol-lc} with $h(s) = (n+1) s$ to obtain \begin{equation} \label{eqn:rmvol-tm-pf} \xi_{\bar{I}}'(\theta) \stackrel{\eqref{eqn:thermal-expect}}{=} \frac{1}{m_{\bar{I}}(\theta)} \sum_{i=0}^n (n-i) \,\mathrm{e}^{(n-i)\theta} \cdot \Prob{ \bar{I} = n-i } = (n+1) \cdot \operatorname{\mathbb{E}} [ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ]. \end{equation} Apply Corollary~\ref{cor:rmvol-lc} with $h(s) = (n+1)((n+3) s^2 - 2s )$ to obtain \begin{align*} \xi_{\bar{I}}''(\theta) &\stackrel{\eqref{eqn:thermal-var}}{=} \left[ \frac{1}{m_{\bar{I}}(\theta)} \sum_{i=0}^n (n-i)^2 \,\mathrm{e}^{(n-i)\theta} \cdot \Prob{ \bar{I} = n-i } \right] - (\xi_{\bar{I}}'(\theta))^2 \\ &\stackrel{\eqref{eqn:rmvol-tm-pf}}{=} (n+1) \operatorname{\mathbb{E}} \big[ (n+3) \big(1 - \bar{J}_{\theta}(\vct{z})^{-2} \big)^2 - 2 \big(1-\bar{J}_{\theta}(\vct{z})^{-2}\big) \big] - (n+1)^2 \big(\operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ] \big)^2 \\ &\stackrel{\phantom{\eqref{eqn:rmvol-tm-pf}}}{=} (n+1)(n+3) \Var[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ] - 2 (n+1)\left( \operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ] - (\operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ])^2 \right) \\ &\stackrel{\eqref{eqn:rmvol-tm-pf}}{=} (n+1)(n+3) \Var[ \bar{J}_{\theta}(\vct{z})^{-2} ] - \frac{2 \xi_{\bar{I}}'(\theta) \cdot[(n+1) - \xi_{\bar{I}}'(\theta)]}{n+1}. \end{align*} This is the required result. 
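For completeness, the two moment identities quoted at the beginning of the proof follow directly from the raw moments of the beta distribution with parameters $\alpha = (n-i)/2$ and $\beta = (i+1)/2$, for which $\alpha + \beta = (n+1)/2$:

```latex
% Raw moments of B ~ Beta(alpha, beta):
%   E[B]   = alpha / (alpha + beta),
%   E[B^2] = alpha (alpha + 1) / ((alpha + beta)(alpha + beta + 1)).
\begin{align*}
\operatorname{\mathbb{E}}[ B_{n-i} ]
  &= \frac{(n-i)/2}{(n+1)/2} = \frac{n-i}{n+1}; \\
\operatorname{\mathbb{E}}[ B_{n-i}^2 ]
  &= \frac{\frac{n-i}{2} \big( \frac{n-i}{2} + 1 \big)}{\frac{n+1}{2} \big( \frac{n+1}{2} + 1 \big)}
  = \frac{(n-i)(n-i+2)}{(n+1)(n+3)}; \\
(n+1) \operatorname{\mathbb{E}}\big[ (n+3) B_{n-i}^2 - 2 B_{n-i} \big]
  &= (n-i)(n-i+2) - 2(n-i) = (n-i)^2.
\end{align*}
```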
\end{proof} \subsection{A Variance Bound} To compute exponential moments of the rigid motion volume random variable $\bar{I}$, we can use variance inequalities for concave measures. We will establish the following estimate. \begin{lemma}[Rigid Motion Volumes: Variance Bound] \label{lem:rmvol-varbd} Instate the notation of Lemma~\ref{lem:rmvol-tmv}. For a random vector $\vct{z} \sim \bar{\mu}_{\theta}$, \begin{equation} \label{eqn:rmvol-varbd} \Var[ \bar{J}_{\theta}(\vct{z})^{-2} ] \leq \frac{4}{n+4} \operatorname{\mathbb{E}}[ \bar{J}_{\theta}(\vct{z})^{-2}] \cdot \operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ]. \end{equation} \end{lemma} To prove this result, we will use the following statement. \begin{fact}[Nguyen] \label{fact:nguyen} Let $J : \mathbbm{R}^n \to \mathbbm{R}_{++}$ be a twice differentiable and strongly convex potential. Consider the concave probability measure $\mu$ on $\mathbbm{R}^n$ whose density is proportional to $J^{-(n+1)}$. For any differentiable function $f : \mathbbm{R}^n \to \mathbbm{R}$, $$ \Var_{\vct{z} \sim \mu}[ f(\vct{z}) ] \leq \frac{1}{n} \int_{\mathbbm{R}^n} \ip{ (\operatorname{Hess} J(\vct{x}))^{-1} \grad f(\vct{x}) }{ \grad f(\vct{x}) } J(\vct{x}) \idiff{\mu}(\vct{x}). $$ \end{fact} Fact~\ref{fact:nguyen} extends the Brascamp--Lieb variance inequality to a special class of concave measures. Nguyen~\cite{Ngu14:Dimensional-Variance} proved this result using H{\"o}rmander's $L_2$ method. It improves on a bound that Bobkov \& Ledoux~\cite{BL09:Weighted-Poincare-Type} derived from the Borell--Brascamp--Lieb variance inequality~\cite{Bor75:Convex-Set,BL76:Extensions-Brunn-Minkowski}. \begin{proof}[Proof of Lemma~\ref{lem:rmvol-varbd}] Fix the parameter $\theta \in \mathbbm{R}$. To apply Nguyen's variance inequality, we need to perturb the potential $\bar{J}_{\theta}$ so that it is strongly convex.
For each $\varepsilon > 0$, set \begin{equation} \label{eqn:igJ-perturb} J^{\varepsilon}(\vct{x}) := \bar{J}_{\theta}(\vct{x}) + \varepsilon \normsq{\vct{x}} / 2, \quad\text{recalling that}\quad \bar{J}_{\theta}(\vct{x}) = \big[ 1 + \mathrm{e}^{-2\theta} \dist^2(\vct{x}) \big]^{1/2}. \end{equation} Next, define the function $f$ and compute its gradient: \begin{equation} \label{eqn:igf-grad} f(\vct{x}) = \bar{J}_{\theta}(\vct{x})^{-2} \quad\text{and}\quad \grad f(\vct{x}) = \frac{-2}{\bar{J}_{\theta}(\vct{x})^3} \grad \bar{J}_{\theta}(\vct{x}). \end{equation} Let us continue to the main argument. Our duty is to evaluate the quadratic form induced by the inverse Hessian of the perturbed potential $J^{\varepsilon}$ at the vector $\grad f$. First, use the definition~\eqref{eqn:rmvol-potent} of the potential $\bar{J}_{\theta}$ and Fact~\ref{fact:dist} to calculate that \begin{equation} \label{eqn:igJ-gradnorm} \norm{ \smash{\grad \bar{J}_{\theta}(\vct{x})} }^2 = \mathrm{e}^{-2\theta} \big( 1 - \bar{J}_{\theta}(\vct{x})^{-2} \big). \end{equation} Differentiate~\eqref{eqn:igJ-gradnorm} again: \begin{equation} \label{eqn:igJ-hessian} (\operatorname{Hess} \bar{J}_{\theta}(\vct{x})) \, \grad \bar{J}_{\theta}(\vct{x}) = \frac{1}{2} \grad \norm{ \smash{\grad \bar{J}_{\theta}(\vct{x})} }^2 = \frac{1}{2} \grad \big[ \mathrm{e}^{-2\theta} (1 - \bar{J}_{\theta}(\vct{x})^{-2}) \big] = \frac{-\mathrm{e}^{-2\theta}}{2} \grad f(\vct{x}). \end{equation} We have used~\eqref{eqn:igf-grad} in the last step. Consequently, the perturbed potential~\eqref{eqn:igJ-perturb} satisfies $$ (\operatorname{Hess} J^{\varepsilon}(\vct{x})) \, \grad \bar{J}_{\theta}(\vct{x}) = \frac{-\mathrm{e}^{-2\theta}}{2} \grad f(\vct{x}) + \varepsilon \grad \bar{J}_{\theta}(\vct{x}) = \frac{-1}{2} \big[ \mathrm{e}^{-2\theta} + \varepsilon \bar{J}_{\theta}(\vct{x})^3 \big] \grad f(\vct{x}). 
$$ Multiply through by the inverse Hessian and rearrange: $$ (\operatorname{Hess} J^{\varepsilon}(\vct{x}))^{-1} \grad f(\vct{x}) = \frac{-2}{\mathrm{e}^{-2\theta} + \varepsilon \bar{J}_{\theta}(\vct{x})^3} \grad \bar{J}_{\theta}(\vct{x}). $$ Take the inner product with $\grad f$ and multiply by $J^{\varepsilon}$: \begin{align*} &\ip{ (\operatorname{Hess} J^{\varepsilon}(\vct{x}))^{-1} \grad f(\vct{x}) }{ \grad f(\vct{x}) } J^{\varepsilon}(\vct{x}) \\ &\qquad= \frac{4}{\mathrm{e}^{-2\theta} + \varepsilon \bar{J}_{\theta}(\vct{x})^3} \bar{J}_{\theta}(\vct{x})^{-3} \norm{ \smash{ \grad\bar{J}_{\theta}(\vct{x})}}^2 J^{\varepsilon}(\vct{x}) \\ &\qquad= \frac{4 \mathrm{e}^{-2\theta}}{\mathrm{e}^{-2\theta} + \varepsilon \bar{J}_{\theta}(\vct{x})^3} \big[ \bar{J}_{\theta}(\vct{x})^{-2} \big(1 - \bar{J}_{\theta}(\vct{x})^{-2}\big) + \varepsilon \bar{J}_{\theta}(\vct{x})^{-3} \norm{\vct{x}}^2 \big(1 - \bar{J}_{\theta}(\vct{x})^{-2}\big) / 2 \big]. \end{align*} We have used~\eqref{eqn:igf-grad} to reach the second line. The third line requires~\eqref{eqn:igJ-gradnorm} and the definition~\eqref{eqn:igJ-perturb} of the perturbed potential $J^{\varepsilon}$. Taking the limit as $\varepsilon \downarrow 0$, $$ \ip{ (\operatorname{Hess} \bar{J}_{\theta}(\vct{x}))^{-1} \grad f(\vct{x}) }{ \grad f(\vct{x}) } \bar{J}_{\theta}(\vct{x}) = 4 \bar{J}_{\theta}(\vct{x})^{-2} \big(1 - \bar{J}_{\theta}(\vct{x})^{-2}\big). $$ This computation gives us the integrand in Nguyen's variance inequality. Fact~\ref{fact:nguyen} applies to the strongly convex potential $J^{\varepsilon}$ and the function $f$. By dominated convergence, the inequality also remains valid in the limit as $\varepsilon \downarrow 0$. Thus, for $\vct{z} \sim \bar{\mu}_{\theta}$, $$ \Var[ \bar{J}_{\theta}(\vct{z})^{-2} ] = \Var[ f(\vct{z}) ] \leq \frac{4}{n} \operatorname{\mathbb{E}}\big[ \bar{J}_{\theta}(\vct{z})^{-2} \big( 1 - \bar{J}_{\theta}(\vct{z})^{-2} \big) \big]. $$ The rest is simple algebra.
Rewrite the expectation on the right-hand side as $$ \operatorname{\mathbb{E}}\big[ \bar{J}_{\theta}(\vct{z})^{-2} \big( 1 - \bar{J}_{\theta}(\vct{z})^{-2} \big) \big] = \operatorname{\mathbb{E}}[ \bar{J}_{\theta}(\vct{z})^{-2} ] \cdot \operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ] - \Var[ \bar{J}_{\theta}(\vct{z})^{-2} ]. $$ Some manipulations yield the bound $$ \Var[ \bar{J}_{\theta}(\vct{z})^{-2} ] \leq \frac{4}{n+4} \operatorname{\mathbb{E}}[ \bar{J}_{\theta}(\vct{z})^{-2} ] \cdot \operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ]. $$ This is the advertised result. \end{proof} \subsection{Proof of Proposition~\ref{prop:rmvol-tvar}} We may now complete the proof of the thermal variance bound, Proposition~\ref{prop:rmvol-tvar}. First, Lemmas~\ref{lem:rmvol-tmv} and~\ref{lem:rmvol-varbd} imply that \begin{align} \label{eqn:rmvol-tvar-pf} (n+1)(n+3) \cdot \Var[\bar{J}_{\theta}(\vct{z})^{-2}] &\stackrel{\eqref{eqn:rmvol-varbd}}{\leq} 4(n+1) \cdot \operatorname{\mathbb{E}}[ \bar{J}_{\theta}(\vct{z})^{-2}] \cdot \operatorname{\mathbb{E}}[ 1 - \bar{J}_{\theta}(\vct{z})^{-2} ] \\ &\stackrel{\eqref{eqn:rmvol-tm}}{=} \frac{4 \xi_{\bar{I}}'(\theta) \cdot[(n+1) - \xi_{\bar{I}}'(\theta)]}{n+1}. \end{align} Applying Lemma~\ref{lem:rmvol-tmv} again, \begin{align*} \xi_{\bar{I}}''(\theta) &\stackrel{\eqref{eqn:rmvol-tv}}{=} (n+1)(n+3) \cdot \Var[\bar{J}_{\theta}(\vct{z})^{-2}] - \frac{2\xi_{\bar{I}}'(\theta)((n+1) - \xi_{\bar{I}}'(\theta))}{n+1} \\ &\stackrel{\eqref{eqn:rmvol-tvar-pf}}{\leq} \frac{2 \xi_{\bar{I}}'(\theta) \cdot [(n+1) - \xi_{\bar{I}}'(\theta)]}{n+1}. \end{align*} We have finished the argument. \section{Moving Flats: Phase Transitions} \label{sec:moving-flats} We are now prepared to demonstrate the existence of phase transitions in classic integral geometry problems. The first step in our program is to rewrite integral geometry formulas in terms of weighted intrinsic volumes (Appendix~\ref{sec:formulas}). 
Then we can apply concentration of the weighted intrinsic volumes to obtain new insights into integral geometry. This section treats two questions involving moving flats. There are strong formal parallels between these two results, reflecting the duality between projections and slices. In Section~\ref{sec:moving-bodies}, we turn to problems involving moving convex bodies. \subsection{Random Projections} \label{sec:randproj} First, we take up the question of projecting a convex body onto a random subspace; see Figure~\ref{fig:random-projection} for a schematic. \subsubsection{The Projection Formula} The projection formula describes the average rotation volumes of a projection of a convex body onto a random subspace of a given dimension. \begin{fact}[Projection Formula] \label{fact:projection-formula} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$. For each subspace dimension $m =0,1,2,\dots, n$, and each index $i = 0,1,2,\dots, m$, \begin{equation} \label{eqn:proj-pf} \int_{\mathrm{Gr}(m, n)} \mathring{\intvol}_i^m(\set{K}|\set{L}) \, {\nu}_m(\diff{\set{L}}) = \mathring{\intvol}_{n-m + i}^n(\set{K}). \end{equation} The superscripts indicate the ambient dimension in which the rotation volumes are computed, viz. $$ \mathring{\intvol}_i^m(\set{M}) := \frac{\omega_{m+1}}{\omega_{i+1}} \, \mathrm{V}_{m - i}(\set{M}), \quad\text{while}\quad \mathring{\intvol}_i^n(\set{M}) := \frac{\omega_{n+1}}{\omega_{i+1}} \, \mathrm{V}_{n - i}(\set{M}). $$ The Grassmannian $\mathrm{Gr}(m, n)$ is equipped with its invariant probability measure $\nu_m$; see Appendix~\ref{sec:invariant-grass}. \end{fact} Fact~\ref{fact:projection-formula} follows from Fact~\ref{fact:projection-intvol}, the formulation in terms of intrinsic volumes, and Lemma~\ref{lem:structure}. Roughly speaking, it shows that random projection acts as a shift operator on the sequence of rotation volumes. The case $i = 0$ corresponds to Kubota's formula. 
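Because every projection of the unit ball onto a subspace is again a unit ball, $\ball{n}|\set{L} = \ball{m}$ for each $\set{L} \in \mathrm{Gr}(m,n)$, the formula~\eqref{eqn:proj-pf} reduces for $\set{K} = \ball{n}$ to the identity $\mathring{\intvol}_i^m(\ball{m}) = \mathring{\intvol}_{n-m+i}^n(\ball{n})$. The following Python sketch, an illustrative aside rather than part of the formal development, confirms this identity numerically using the standard formulas $\kappa_j = \pi^{j/2}/\Gamma(j/2+1)$ and $\omega_j = 2\pi^{j/2}/\Gamma(j/2)$ and the ball intrinsic volumes from Example~\ref{ex:ball}.

```python
# Numerical check of the projection formula for the unit ball (illustrative only).
# kappa(j) = volume of the unit j-ball; omega(j) = surface area of the unit sphere in R^j.
from math import gamma, pi, comb, isclose

def kappa(j):
    return pi ** (j / 2) / gamma(j / 2 + 1)

def omega(j):
    return 2 * pi ** (j / 2) / gamma(j / 2)

def V(i, d):
    # Intrinsic volume V_i of the unit Euclidean ball in R^d.
    return comb(d, i) * kappa(d) / kappa(d - i)

def rotvol(i, d):
    # Rotation volume of the unit ball, computed in ambient dimension d.
    return (omega(d + 1) / omega(i + 1)) * V(d - i, d)

# Since B^n | L = B^m for every L in Gr(m, n), the projection formula reduces to
# rotvol(i, m) == rotvol(n - m + i, n) for the unit ball.
for n in range(1, 9):
    for m in range(0, n + 1):
        for i in range(0, m + 1):
            assert isclose(rotvol(i, m), rotvol(n - m + i, n), rel_tol=1e-12)
print("projection formula verified for the unit ball")
```

For instance, with $n = 2$, $m = 1$, $i = 0$, both sides equal $2\pi$.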
The case $i = 0$ and $m = n - 1$ is equivalent to Cauchy's surface area formula. We can give a probabilistic interpretation of the random projection formula in terms of the rotation volume random variable. Define \begin{equation} \label{eqn:proj-wills-pf} \mathrm{RandProj}_m(\set{K}) := \int_{\mathrm{Gr}(m, n)} \frac{\mathring{\wills}^m(\set{K}|\set{L})}{\mathring{\wills}^n(\set{K})} \, {\nu}_m(\diff{\set{L}}) = \sum_{i=0}^m \frac{\mathring{\intvol}_{n-i}^n(\set{K})}{\mathring{\wills}^n(\set{K})} = \Prob{ \mathring{I}_{\set{K}} \leq m }. \end{equation} The superscript on the total rotation volumes $\mathring{\wills}^m$ and $\mathring{\wills}^n$ reflects the ambient dimension. The second relation follows when we sum~\eqref{eqn:proj-pf} over indices $i = 0, 1, 2, \dots, m$ and then divide by the total rotation volume $\mathring{\wills}^n(\set{K})$. In the last step, we have identified a probability involving $\mathring{I}_{\set{K}}$, the rotation volume random variable~\eqref{eqn:wvol-rv-def}. The exact formula~\eqref{eqn:proj-wills-pf} also implies that $\mathrm{RandProj}_m(\set{K})$ is bounded between zero and one. \subsubsection{The Approximate Projection Formula} In this section, we complete the proof of Theorem~\ref{thm:randproj-intro}, the phase transition for random projections. To do so, we reparameterize the dimension $m$ of the random subspace as $m = \mathring{\delta}(\set{K}) + t$ for $t \in \mathbbm{R}$. The Bernstein inequality~\eqref{eqn:rotvol-bernstein} for the rotation volume random variable states that, for $t \neq 0$, $$ \Prob{ \frac{\mathring{I}_{\set{K}} - \mathring{\delta}(\set{K})}{t} \geq 1 } \leq \exp \left( \frac{-t^2/2}{\mathring{\delta}(\set{K}) + \abs{t}/3} \right) =: p(t). 
$$ For a proportion $\alpha \in (0, 1)$, invert the tail probability function to see that $$ p(t) \leq \alpha \quad\text{if and only if}\quad \abs{t} \geq \big[ 2 \mathring{\delta}(\set{K}) \log(1/\alpha) + \tfrac{1}{9} \log^2(1/\alpha) \big]^{1/2} + \tfrac{1}{3} \log(1/\alpha). $$ Using the subadditivity of the square root, we can relax this to a simpler bound. Define the transition width $$ t_{\star}(\alpha) := \big[ 2\mathring{\delta}(\set{K}) \log(1/\alpha)\big]^{1/2} + \tfrac{2}{3} \log(1/\alpha). $$ We have shown that $$ \abs{t} \geq t_{\star}(\alpha) \quad\text{implies}\quad \Prob{ \frac{\mathring{I}_{\set{K}} - \mathring{\delta}(\set{K})}{t} \geq 1 } \leq \alpha. $$ Use the relation $m = \mathring{\delta}(\set{K}) + t$ and rearrange to obtain the pair of implications \begin{align*} m \leq \mathring{\delta}(\set{K}) - t_{\star}(\alpha) \quad\text{implies}\quad \Prob{ \mathring{I}_{\set{K}} \leq m } &\leq \alpha; \quad\text{and} \\ m \geq \mathring{\delta}(\set{K}) + t_{\star}(\alpha) \quad\text{implies}\quad \Prob{ \mathring{I}_{\set{K}} \geq m } &\leq \alpha \quad\text{implies}\quad \Prob{ \mathring{I}_{\set{K}} \leq m } \geq 1 - \alpha. \end{align*} Last, invoke the formula~\eqref{eqn:proj-wills-pf} to replace the probability, $\Prob{ \mathring{I}_{\set{K}} \leq m }$, by the integral geometric quantity, $\mathrm{RandProj}_m(\set{K})$. This completes the proof of Theorem~\ref{thm:randproj-intro}. \subsection{Random Slices} \label{sec:crofton} Next, we consider the dual problem of slicing a convex body by a random affine space. An illustration appears in Figure~\ref{fig:crofton}. \subsubsection{The Slicing Formula} The slicing formula, which is attributed to Crofton, describes the total content of the slices of a convex body by affine spaces of a given dimension. \begin{fact}[Slicing Formula] \label{fact:slicing} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$. 
For each affine space dimension $m =0,1,2,\dots,n$ and each index $i = 0,1,2,\dots, m$, \begin{equation} \label{eqn:slicing-pf} \int_{\operatorname{Af}(m,n)} \bar{\intvol}_i(\set{K} \cap \set{E}) \, \mu_m(\diff{\set{E}}) = \bar{\intvol}_{n-m+i}(\set{K}). \end{equation} The rigid motion volumes are calculated with respect to the ambient dimension $n$. The affine Grassmannian $\operatorname{Af}(m,n)$ is equipped with its invariant measure $\mu_m$; see Appendix~\ref{sec:invariant}. \end{fact} Fact~\ref{fact:slicing} follows from Fact~\ref{fact:slicing-intvol} and Lemma~\ref{lem:structure}. Our formulation shows that random slicing acts as a shift operator on the sequence of rigid motion volumes; note the formal similarity between~\eqref{eqn:slicing-pf} and the analogous result~\eqref{eqn:proj-pf} for random projections. Of particular interest is the case $i = 0$, which expresses the measure of the set of affine spaces that hit $\set{K}$. As above, the formula~\eqref{eqn:slicing-pf} has a probabilistic interpretation. Define \begin{equation} \label{eqn:crofton-wills-pf} \mathrm{RandSlice}_m(\set{K}) := \int_{\operatorname{Af}(m,n)} \frac{\bar{\wills}(\set{K} \cap \set{E})}{\bar{\wills}(\set{K})} \, \mu_m(\diff{\set{E}}) = \sum_{i=0}^m \frac{\bar{\intvol}_{n-i}(\set{K})}{\bar{\wills}(\set{K})} = \Prob{ \bar{I}_{\set{K}} \leq m }. \end{equation} To reach the second relation, sum~\eqref{eqn:slicing-pf} over $i = 0,1,2,\dots, m$, and divide by the total rigid motion volume $\bar{\wills}(\set{K})$. We have recognized a probability involving $\bar{I}_{\set{K}}$, the rigid motion random variable~\eqref{eqn:wvol-rv-def}. In particular, the formula~\eqref{eqn:crofton-wills-pf} demonstrates that $\mathrm{RandSlice}_m(\set{K})$ is bounded between zero and one. 
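To make the slicing formula concrete, consider a numerical aside phrased with classical intrinsic volumes (as in Fact~\ref{fact:slicing-intvol}) rather than the weighted ones. Under the normalization~\eqref{eqn:Af-normalization}, a line in the plane is parameterized by a uniformly random direction and a real offset $x$, and the chord of the unit disc at offset $x$ has length $2\sqrt{1-x^2}$. In the case $n = 2$, $m = 1$, $i = 1$, the structure constant equals one, so the slicing formula asserts that the integral of chord length recovers the area $\mathrm{V}_2(\ball{2}) = \pi$. A quadrature sketch:

```python
# Quadrature check (illustrative): integrating chord length of the unit disc over
# the invariant measure on lines recovers the area pi. The direction average is
# trivial by rotational symmetry of the disc, so only the offset integral remains.
from math import sqrt, pi, isclose

def chord_integral(steps=200_000):
    # Midpoint rule over the offset x in [-1, 1].
    h = 2.0 / steps
    total = 0.0
    for k in range(steps):
        x = -1.0 + (k + 0.5) * h
        total += 2.0 * sqrt(1.0 - x * x)
    return total * h

approx = chord_integral()
assert isclose(approx, pi, rel_tol=1e-4)
print(f"integral of chord length ~ {approx:.6f}; area of unit disc = {pi:.6f}")
```

The same parameterization also confirms the normalization~\eqref{eqn:Af-normalization} for $n = 2$, $m = 1$: the offsets of lines that hit $\ball{2}$ fill the interval $[-1,1]$, whose length is $\kappa_1 = 2$.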
A striking feature of the slicing problem is that a much stronger condition holds: $$ 0 \leq \frac{\bar{\wills}(\set{K} \cap \set{E})}{\bar{\wills}(\set{K})} \leq 1 \quad\text{for all $\set{E} \in \operatorname{Af}(m,n)$.} $$ This relation is a straightforward consequence of the fact that intrinsic volumes are monotone increasing with respect to set inclusion. \subsubsection{The Approximate Slicing Formula} Let us continue with the proof of Theorem~\ref{thm:crofton-intro}. The argument is almost identical with the proof of Theorem~\ref{thm:randproj-intro}, so we will leave out some details. Rewrite the dimension $m$ of the affine space as $m = \bar{\delta}(\set{K}) + t$ for $t \in \mathbbm{R}$. The Bernstein inequality~\eqref{eqn:rmvol-bernstein} for the rigid motion volume random variable states that, for $t \neq 0$, $$ \Prob{ \frac{\bar{I}_{\set{K}} - \bar{\delta}(\set{K})}{t} \geq 1 } \leq \exp\left( \frac{-t^2/4}{\bar{\sigma}^2(\set{K}) + \abs{t}/3} \right) \quad\text{where}\quad \bar{\sigma}^2(\set{K}) := \bar{\delta}(\set{K}) \wedge \big((n+1) - \bar{\delta}(\set{K})\big). $$ For a proportion $\alpha \in (0,1)$, the transition width is $$ t_{\star}(\alpha) := \big[ 4 \bar{\sigma}^2(\set{K}) \log(1/\alpha) \big]^{1/2} + \tfrac{4}{3} \log(1/\alpha). $$ Repeating the arguments from Section~\ref{sec:randproj}, \begin{equation*} \begin{aligned} m &\leq \bar{\delta}(\set{K}) - t_{\star}(\alpha) &\quad\text{implies}\quad&& \mathrm{RandSlice}_m(\set{K}) &\leq \alpha; \\ m &\geq \bar{\delta}(\set{K}) + t_{\star}(\alpha) &\quad\text{implies}\quad&& \mathrm{RandSlice}_m(\set{K}) &\geq 1 - \alpha. \end{aligned} \end{equation*} Collect the results to obtain Theorem~\ref{thm:crofton-intro}. \section{Moving Bodies: Phase Transitions} \label{sec:moving-bodies} In this section, we consider problems involving two moving bodies. The approach is similar to the argument in Section~\ref{sec:moving-flats}. 
The main technical difference is that the integral geometry formulas involve weighted intrinsic volumes of both convex bodies. \subsection{Rotation Means} \label{sec:rotmean} First, we consider what happens when we add a randomly rotated convex body to a fixed convex body. See Figure~\ref{fig:rotation-mean} for a picture. \subsubsection{The Rotation Mean Formula} The rotation mean formula gives an exact account of the rotation volumes of the Minkowski sum of a fixed convex body and a randomly rotated convex body. \begin{fact}[Rotation Mean Value Formula] \label{fact:rotmean} Consider two nonempty convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$. Then \begin{equation} \label{eqn:rotmean-pf} \int_{\mathrm{SO}(n)} \mathring{\intvol}_i(\set{K} + \mtx{O} \set{M}) \, \nu(\diff{\mtx{O}}) = \sum_{j \geq i} \mathring{\intvol}_j(\set{K}) \, \mathring{\intvol}_{n + i-j}(\set{M}) \quad\text{for $i = 0,1,2,\dots, n$.} \end{equation} The rotation volumes are computed with respect to the ambient dimension $n$. The special orthogonal group $\mathrm{SO}(n)$ is equipped with its invariant probability measure $\nu$; see Appendix~\ref{sec:invariant-SO}. \end{fact} Fact~\ref{fact:rotmean} is attributed to Hadwiger. Our formulation follows from Fact~\ref{fact:rotmean-intvol} and Lemma~\ref{lem:structure}. It shows that a rotation mean corresponds with a convolution of the rotation volumes. The special case $\set{M} = \lambda \ball{n}$ is equivalent to the Steiner formula~\eqref{eqn:steiner-intro}. We can easily obtain a probabilistic interpretation of the rotation mean value formula. 
Define \begin{equation} \label{eqn:rotmean-wills-pf} \mathrm{RotMean}(\set{K}, \set{M}) := \int_{\mathrm{SO}(n)} \frac{\mathring{\wills}(\set{K} + \mtx{O} \set{M})}{\mathring{\wills}(\set{K}) \, \mathring{\wills}(\set{M})} \, \nu(\diff{\mtx{O}}) = \sum_{i + j \geq n} \frac{\mathring{\intvol}_i(\set{K}) \, \mathring{\intvol}_j(\set{M})}{\mathring{\wills}(\set{K}) \, \mathring{\wills}(\set{M})} = \Prob{ \mathring{I}_{\set{K}} + \mathring{I}_{\set{M}} \leq n }. \end{equation} To obtain the second relation, sum~\eqref{eqn:rotmean-pf} over $i = 0, 1,2, \dots, n$, and divide by the product $\mathring{\wills}(\set{K}) \, \mathring{\wills}(\set{M})$ of the total rotation volumes. In this expression, $\mathring{I}_{\set{K}}$ and $\mathring{I}_{\set{M}}$ are \emph{independent} rotation volume random variables~\eqref{eqn:wvol-rv-def} associated with the convex bodies $\set{K}$ and $\set{M}$. As a consequence of~\eqref{eqn:rotmean-wills-pf}, we immediately recognize that the integral $\mathrm{RotMean}(\set{K}, \set{M})$ is bounded between zero and one. \subsubsection{The Approximate Rotation Mean Formula} We may now establish Theorem~\ref{thm:rotmean-intro}. Define the total rotation volume $$ \mathring{\Delta}(\set{K}, \set{M}) := \mathring{\delta}(\set{K}) + \mathring{\delta}(\set{M}) \in [0, 2n]. $$ Parameterize the phase transition by writing $\mathring{\Delta}(\set{K}, \set{M}) = n + t$ for $t \in \mathbbm{R}$. The Bernstein inequality~\eqref{eqn:rotvol-sum} for the sum of rotation volume random variables states that $$ \Prob{ \frac{\mathring{I}_{\set{K}} + \mathring{I}_{\set{M}} - \mathring{\Delta}(\set{K},\set{M})}{t} \geq 1 } \leq \exp\left( \frac{-t^2/2}{\mathring{\Delta}(\set{K},\set{M}) + \abs{t} / 3} \right) \quad\text{for $t \neq 0$.} $$ For a proportion $\alpha \in (0,1)$, set the transition width $$ t_{\star}(\alpha) := \big[ 2 \mathring{\Delta}(\set{K},\set{M}) \log(1/\alpha) \big]^{1/2} + \tfrac{2}{3} \log(1/\alpha). 
$$ We find that $$ \abs{t} \geq t_{\star}(\alpha) \quad\text{implies}\quad \Prob{ \frac{\mathring{I}_{\set{K}} + \mathring{I}_{\set{M}} - \mathring{\Delta}(\set{K},\set{M})}{t} \geq 1 } \leq \alpha. $$ Rearrange this inequality to determine that \begin{equation*} \begin{aligned} \mathring{\Delta}(\set{K},\set{M}) &\geq n + t_{\star}(\alpha) &\quad\text{implies}\quad&& \mathrm{RotMean}(\set{K}, \set{M}) &\leq \alpha; \\ \mathring{\Delta}(\set{K},\set{M}) &\leq n - t_{\star}(\alpha) &\quad\text{implies}\quad&& \mathrm{RotMean}(\set{K}, \set{M}) &\geq 1 - \alpha. \end{aligned} \end{equation*} We have established Theorem~\ref{thm:rotmean-intro}. \subsection{The Kinematic Formula} \label{sec:kinematic} Last, we turn to the kinematic formula, which describes how two moving convex bodies intersect; see the illustration in Figure~\ref{fig:kinematic}. This setting is dual to the problem of rotation means. \subsubsection{Exact Kinematics} The kinematic formula expresses the rigid motion volumes of an intersection of a fixed convex body and a randomly transformed convex body. \begin{fact}[Kinematic Formula] \label{fact:kinematic} Consider two nonempty convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$. For each $i = 0,1,2,\dots,n$, \begin{equation} \label{eqn:kinematic-pf} \int_{\mathrm{SE}(n)} \bar{\intvol}_i(\set{K} \cap g \set{M}) \, \mu(\diff{g}) = \sum_{j \geq i} \bar{\intvol}_j(\set{K}) \, \bar{\intvol}_{n+i-j}(\set{M}). \end{equation} The rigid motion volumes are calculated with respect to the ambient dimension $n$. The group $\mathrm{SE}(n)$ of proper rigid motions on $\mathbbm{R}^n$ is equipped with its invariant measure $\mu$; see Section~\ref{sec:invariant}. \end{fact} Fact~\ref{fact:kinematic} goes back to work of Blaschke, Santal{\'o}, Chern and others. The statement here follows from Fact~\ref{fact:kinematic-intvol} and Lemma~\ref{lem:structure}. 
Our formulation states that random intersection acts as convolution on sequences of rigid motion volumes; note the formal parallel between~\eqref{eqn:kinematic-pf} and~\eqref{eqn:rotmean-pf}, the expression for rotation means. The special case $i = 0$ is called the \emph{principal kinematic formula}, and it describes the measure of the set of rigid motions that bring $\set{M}$ into contact with $\set{K}$. To obtain a probabilistic formulation, define \begin{equation} \label{eqn:kinematic-wills-pf} \mathrm{Kinematic}(\set{K}, \set{M}) := \int_{\mathrm{SE}(n)} \frac{\bar{\wills}(\set{K} \cap g \set{M})}{\bar{\wills}(\set{K}) \, \bar{\wills}(\set{M})} \, \mu(\diff{g}) = \sum_{i + j \geq n} \frac{\bar{\intvol}_i(\set{K}) \, \bar{\intvol}_j(\set{M})}{\bar{\wills}(\set{K}) \, \bar{\wills}(\set{M})} = \Prob{ \bar{I}_{\set{K}} + \bar{I}_{\set{M}} \leq n }. \end{equation} The second relation follows when we sum~\eqref{eqn:kinematic-pf} over $i = 0,1,2,\dots,n$ and divide by the product $\bar{\wills}(\set{K}) \, \bar{\wills}(\set{M})$ of the total rigid motion volumes. In this formula, $\bar{I}_{\set{K}}$ and $\bar{I}_{\set{M}}$ are \emph{independent} rigid motion volume random variables associated with the sets $\set{K}$ and $\set{M}$. The formula~\eqref{eqn:kinematic-wills-pf} also demonstrates that $\mathrm{Kinematic}(\set{K}, \set{M})$ is bounded between zero and one. \subsubsection{Approximate Kinematics} Finally, we establish Theorem~\ref{thm:kinematic-intro}. Introduce the total rigid motion volume $$ \bar{\Delta}(\set{K},\set{M}) := \bar{\delta}(\set{K}) + \bar{\delta}(\set{M}) \in [0, 2n]. $$ Define the variance proxy $$ \bar{v}(\set{K},\set{M}) := \big[ \bar{\delta}(\set{K}) \wedge ((n+1) - \bar{\delta}(\set{K})) \big] + \big[ \bar{\delta}(\set{M}) \wedge ((n+1) - \bar{\delta}(\set{M})) \big] \in [0, n+1]. $$ We parameterize the phase transition as $\bar{\Delta}(\set{K},\set{M}) = n + t$ for $t \in \mathbbm{R}$. 
The Bernstein inequality~\eqref{eqn:rmvol-sum} for the sum of rigid motion volume random variables ensures that $$ \Prob{ \frac{\bar{I}_{\set{K}} + \bar{I}_{\set{M}}-\bar{\Delta}(\set{K},\set{M})}{t} \geq 1 } \leq \exp\left( \frac{-t^2/4}{\bar{v}(\set{K},\set{M}) + \abs{t}/3} \right) \quad\text{for $t \neq 0$.} $$ For a proportion $\alpha \in (0,1)$, we define the transition width $$ t_{\star}(\alpha) := \big[ 4\bar{v}(\set{K},\set{M}) \log(1/\alpha) \big]^{1/2} + \tfrac{4}{3} \log(1/\alpha). $$ With this notation, $$ \abs{t} \geq t_{\star}(\alpha) \quad\text{implies}\quad \Prob{ \frac{\bar{I}_{\set{K}} + \bar{I}_{\set{M}}-\bar{\Delta}(\set{K},\set{M})}{t} \geq 1 } \leq \alpha. $$ This inequality implies that \begin{equation*} \begin{aligned} \bar{\Delta}(\set{K},\set{M}) &\geq n + t_{\star}(\alpha) &\quad\text{implies}\quad&& \mathrm{Kinematic}(\set{K}, \set{M}) &\leq \alpha; \\ \bar{\Delta}(\set{K},\set{M}) &\leq n - t_{\star}(\alpha) &\quad\text{implies}\quad&& \mathrm{Kinematic}(\set{K}, \set{M}) &\geq 1 - \alpha. \end{aligned} \end{equation*} This result is stronger than Theorem~\ref{thm:kinematic-intro}, which follows from the loose bound $\bar{v}(\set{K},\set{M}) \leq \bar{\Delta}(\set{K},\set{M})$. \subsection{Iteration} Both the rotation mean formula, Fact~\ref{fact:rotmean}, and the kinematic formula, Fact~\ref{fact:kinematic}, can be iterated to handle combinations of more than two convex bodies. For example, see~\cite[Thm.~5.1.5]{SW08:Stochastic-Integral}. It is straightforward to extend our methods to obtain phase transitions for combinations of many moving convex bodies. We omit further discussion. \section{Conclusions and Outlook} To recapitulate: We have discovered that several major problems in integral geometry exhibit sharp phase transitions. To explain these phenomena, we cast the integral geometry formulas in terms of weighted intrinsic volumes. 
These reformulations reveal that each result has a natural expression as a probability involving weighted intrinsic volume random variables. The phase transition arises as a consequence of the fact that these random variables concentrate around their expectations. The main technical contribution of the paper is to prove that weighted intrinsic volumes concentrate. This argument first rewrites the exponential moments of the sequence of weighted intrinsic volumes in terms of a distance integral. The distance integral can be interpreted as the variance of a function with respect to a continuous concave measure. We bound the distance integral using a variance inequality for concave measures. These variance inequalities emerged from research on functional extensions of the Brunn--Minkowski inequality, so geometry is central to our whole program. Our approach can be generalized in several different directions. On the probabilistic side, it is likely that weighted intrinsic volumes satisfy a central limit theorem, analogous to the one derived for conic intrinsic volumes in~\cite{GNP17:Gaussian-Phase}. On the geometric side, the intrinsic volume sequence is a special case of the sequence of mixed volumes of two convex bodies. We anticipate that our techniques can be used to prove that the sequence of mixed volumes also concentrates, and we plan to investigate consequences of this phenomenon in geometry, combinatorics, and algebra. \appendix \section{Invariant Measures} \label{sec:invariant} Integral geometry problems involve integration over spaces of geometric objects, equipped with invariant measures. To formulate these questions correctly, we must be quite explicit about the construction of the measures. Many mysteries in geometric probability, such as Bertrand's paradox~\cite[pp.~5--6]{Ber89:Calcul-Probabilites}, can be traced to a confusion about the random model. The material in this section is summarized from~\cite[Chaps.~5, 13]{SW08:Stochastic-Integral}. 
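As an aside, Bertrand's paradox is easy to reproduce numerically: three common models of a ``uniformly random chord'' of the unit circle assign the probabilities $1/3$, $1/2$, and $1/4$ to the event that the chord exceeds $\sqrt{3}$, the side length of the inscribed equilateral triangle. The following throwaway Monte Carlo simulation illustrates the discrepancy.

```python
# Monte Carlo illustration of Bertrand's paradox: three sampling models for a
# "random chord" of the unit circle give P{length > sqrt(3)} = 1/3, 1/2, 1/4.
import math, random

random.seed(0)
N = 200_000
SIDE = math.sqrt(3.0)  # side of the inscribed equilateral triangle

def chord_endpoints():
    # Model 1: two independent uniform points on the circle.
    a, b = random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi)
    return 2.0 * abs(math.sin((a - b) / 2.0))

def chord_radius():
    # Model 2: uniform offset along a random radius.
    r = random.uniform(0, 1)
    return 2.0 * math.sqrt(1.0 - r * r)

def chord_midpoint():
    # Model 3: chord midpoint uniform in the disc (rejection sampling).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1.0:
            return 2.0 * math.sqrt(1.0 - x * x - y * y)

results = {}
for sampler, exact in [(chord_endpoints, 1/3), (chord_radius, 1/2), (chord_midpoint, 1/4)]:
    p = sum(sampler() > SIDE for _ in range(N)) / N
    results[sampler.__name__] = p
    print(f"{sampler.__name__}: P ~ {p:.3f} (exact {exact:.3f})")
```

Each model is internally consistent; the "paradox" is only the failure to specify which invariant measure on chords is intended, which is exactly the ambiguity that the constructions below remove.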
\subsection{Notational Collision} We have chosen to stick with the standard notation from~\cite{SW08:Stochastic-Integral} for invariant measures. Although this notation collides with the symbols we previously used for concave measures, we do not use these constructions simultaneously, so there is no risk of confusion. \subsection{The Special Orthogonal Group} \label{sec:invariant-SO} Recall that the special orthogonal group $\mathrm{SO}(n)$ consists of all rotations acting on $\mathbbm{R}^n$. We identify the group with its representation as the family of all $n \times n$ orthogonal matrices with determinant one, acting on itself by matrix multiplication. Since the special orthogonal group is compact (in the relative topology), it admits a unique invariant probability measure, denoted by $\nu$. That is, $$ \nu( \mtx{O} \set{S} ) = \nu( \set{S} ) \quad\text{for all $\mtx{O} \in \mathrm{SO}(n)$ and Borel $\set{S} \subseteq \mathrm{SO}(n)$.} $$ Furthermore, the measure is scaled so that $\nu(\mathrm{SO}(n)) = 1$. \subsection{The Grassmannian} \label{sec:invariant-grass} The (real) Grassmannian $\mathrm{Gr}(m, n)$ is the collection of all $m$-dimensional subspaces in the Euclidean space $\mathbbm{R}^n$. The special orthogonal group $\mathrm{SO}(n)$ acts on $\mathrm{Gr}(m,n)$ by rotation. Fix a reference subspace $\set{L}_{\star} \in \mathrm{Gr}(m,n)$; the particular choice has no downstream effect. We can construct a unique rotation-invariant probability measure $\nu_m$ on $\mathrm{Gr}(m, n)$ as the push-forward of the measure $\nu$ on $\mathrm{SO}(n)$ via the map $\mtx{O} \mapsto \mtx{O} \set{L}_{\star}$. The measure satisfies $$ \nu_m( \mtx{O} \set{S} ) = \nu_m( \set{S} ) \quad\text{for all $\mtx{O} \in \mathrm{SO}(n)$ and Borel $\set{S} \subseteq \mathrm{Gr}(m,n)$.} $$ The total measure of the Grassmannian satisfies $\nu_m(\mathrm{Gr}(m,n)) = 1$. 
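In computations, one can draw from $\nu$ by orthogonalizing a Gaussian matrix, and a $\nu_m$-distributed subspace is then obtained as $\mtx{O} \set{L}_{\star}$, exactly as in the push-forward construction above. A minimal numpy sketch of this standard recipe, offered as an illustration rather than part of the formal development:

```python
# Sample from the invariant probability measure on SO(n) by QR-factorizing a
# Gaussian matrix, correcting column signs, and fixing the determinant.
import numpy as np

def haar_rotation(n, rng):
    G = rng.standard_normal((n, n))
    Q, R = np.linalg.qr(G)
    # Multiply each column by the sign of the corresponding diagonal entry of R,
    # so that the distribution of Q is exactly Haar on the orthogonal group O(n).
    Q = Q * np.sign(np.diag(R))
    # Flip one column if necessary to land in SO(n); right translation by a fixed
    # group element preserves the Haar measure, so the result is Haar on SO(n).
    if np.linalg.det(Q) < 0:
        Q[:, 0] = -Q[:, 0]
    return Q

rng = np.random.default_rng(1)
O = haar_rotation(5, rng)
assert np.allclose(O @ O.T, np.eye(5), atol=1e-10)
assert np.isclose(np.linalg.det(O), 1.0)
# Push forward to the Grassmannian Gr(2, 5): the rotated reference subspace is
# spanned by the first two columns of O, and its columns remain orthonormal.
L = O[:, :2]
assert np.allclose(L.T @ L, np.eye(2), atol=1e-10)
```

The sign correction is needed because the raw QR factorization of a Gaussian matrix is not sign-invariant; without it, the sampled matrices are not Haar distributed.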
In fact, the invariance property extends to the full orthogonal group because negation acts as the identity map on the Grassmannian. \subsection{The Group of Proper Rigid Motions} \label{sec:invariant-RM} The special Euclidean group $\mathrm{SE}(n)$ consists of proper rigid motions on $\mathbbm{R}^n$. More precisely, a proper rigid motion $g$ acts on $\mathbbm{R}^n$ first by applying a rotation $\mtx{O}$ and then applying a translation $\tau_{\vct{x}}$ by a vector $\vct{x}$. The group product is composition. Introduce the notation $\mathrm{Leb}_m$ for the Lebesgue measure on an $m$-dimensional subspace of $\mathbbm{R}^n$. We construct an invariant measure $\mu$ on $\mathrm{SE}(n)$ as the push-forward of the measure $\nu \times \mathrm{Leb}_n$ on the product space $\mathrm{SO}(n) \times \mathbbm{R}^n$ under the map $(\mtx{O}, \vct{x}) \mapsto \tau_{\vct{x}} \circ \mtx{O}$. This measure is invariant under the group action: $$ \mu( g \set{S} ) = \mu( \set{S} ) \quad\text{for all $g \in \mathrm{SE}(n)$ and Borel $\set{S} \subseteq \mathrm{SE}(n)$.} $$ Unraveling the definitions, we learn that the measure $\mu$ satisfies the normalization \begin{equation} \label{eqn:RM-normalization} \mu \{ g \in \mathrm{SE}(n) : g \vct{0}_n \in \ball{n} \} = \kappa_n. \end{equation} In other words, the measure of the set of proper rigid motions that keep the origin $\vct{0}_n$ within the Euclidean unit ball equals the Lebesgue measure of the ball. In~\eqref{eqn:RM-normalization}, we can replace the origin by any other point without changing the construction. Subject to the normalization~\eqref{eqn:RM-normalization}, the measure $\mu$ is the unique rigid-motion-invariant measure on $\mathrm{SE}(n)$. 
Fix a reference affine space $\set{E}_{\star} \in \operatorname{Af}(m,n)$. We can construct a unique measure $\mu_m$ on $\operatorname{Af}(m, n)$ that is invariant under proper rigid motions as the push-forward of the measure $\mu$ on $\mathrm{SE}(n)$ via the map $g \mapsto g \set{E}_\star$. The invariance property of the measure $\mu_m$ means that $$ \mu_m( g \set{S} ) = \mu_m( \set{S} ) \quad\text{for all $g \in \mathrm{SE}(n)$ and Borel $\set{S} \subseteq \operatorname{Af}(m,n)$.} $$ In fact, the measure is invariant over the full group of rigid motions, $\mathrm{E}(n)$, because negation is an involution on $\operatorname{Af}(m,n)$. The normalization is \begin{equation} \label{eqn:Af-normalization} \mu_m \{\set{E} \in \operatorname{Af}(m,n) : \ball{n} \cap \set{E} \neq \emptyset\} = \kappa_{n-m}. \end{equation} In other words, the measure of the set of $m$-dimensional affine spaces that hit the Euclidean unit ball equals the volume of the $(n-m)$-dimensional ball. \section{Elements of Integral Geometry} \label{sec:elements} This section summarizes the key properties of the intrinsic volumes. Then we outline Hadwiger's approach to integral geometry, which highlights why intrinsic volumes are so significant. As a concrete example of this methodology, we derive a particular case of Crofton's formula. \subsection{Properties of (Weighted) Intrinsic Volumes} \label{sec:intvol-properties} We begin with an overview of the properties of the intrinsic volume functionals. Let $\set{K}, \set{M} \subset \mathbbm{R}^n$ be convex bodies. For each index $i = 0,1,2, \dots, n$, the intrinsic volume $\mathrm{V}_i$ is: \begin{enumerate} \item \textbf{Nonnegative:} $\mathrm{V}_i(\set{K}) \geq 0$. \item \textbf{Monotone:} $\set{M} \subseteq \set{K}$ implies $\mathrm{V}_i(\set{M}) \leq \mathrm{V}_i(\set{K})$. \item \textbf{Homogeneous:}\label{eqn:homogeneous} $\mathrm{V}_i(\lambda \set{K}) = \lambda^i \, \mathrm{V}_i(\set{K})$ for each $\lambda \geq 0$. 
\item \textbf{Invariant:} $\mathrm{V}_i(g \set{K}) = \mathrm{V}_i(\set{K})$ for each \emph{rigid motion} $g$. That is, $g$ acts by rotation, reflection, and translation. \item \textbf{Intrinsic:} $\mathrm{V}_i(\set{K}) = \mathrm{V}_i(\set{K} \times \{ \vct{0}_r \})$ for each natural number $r$. \item \textbf{A Valuation:} $\mathrm{V}_i(\emptyset) = 0$. If $\set{M} \cup \set{K}$ is also a convex body, then $$ \mathrm{V}_i( \set{M} \cap \set{K} ) + \mathrm{V}_i( \set{M} \cup \set{K} ) = \mathrm{V}_i(\set{M}) + \mathrm{V}_i(\set{K}). $$ This is a restricted version of the additivity property satisfied by a measure. \item \textbf{Continuous:} If $\set{K}_m \to \set{K}$ in the Hausdorff metric, then $\mathrm{V}_i(\set{K}_m) \to \mathrm{V}_i(\set{K})$. \end{enumerate} It is clear from the definitions~\eqref{eqn:rotvol-def-intro} and~\eqref{eqn:rmvol-def-intro} that the rotation volumes, $\mathring{\intvol}_i$, and the rigid motion volumes, $\bar{\intvol}_i$, are not intrinsic, but they enjoy the other six properties on the list. Note that, because of the indexing, the rotation volume $\mathring{\intvol}_i$ on $\mathbbm{R}^n$ is homogeneous of degree $n - i$. Similarly, the total rotation volume, $\mathring{\wills}$, and the total rigid motion volume, $\bar{\wills}$, are neither homogeneous nor intrinsic, but they share the other five properties of the intrinsic volumes. \subsection{Hadwiger's Approach to Integral Geometry} There is a remarkable and illuminating path to the main formulas of integral geometry that proceeds via a deep theorem of Hadwiger~\cite{Had51:Funktionalsatzes,Had52:Additive-Funktionale,Had57:Vorlesungen}. \begin{fact}[Hadwiger's Functional Theorem] \label{fact:hadwiger} Suppose that $F$ is a rigid-motion-invariant, continuous valuation on the space of convex bodies in $\mathbbm{R}^n$. Then $F$ is a linear combination of the intrinsic volumes $\mathrm{V}_0, \mathrm{V}_1, \dots, \mathrm{V}_n$. 
\end{fact} \noindent See~\cite[Thm.~14.4.6]{SW08:Stochastic-Integral} for a proof of this result. The critical step is to verify that the $n$th intrinsic volume, $\mathrm{V}_n$, is the only rigid-motion-invariant, continuous valuation on $\mathbbm{R}^n$ that assigns the value one to the cube $\set{Q}^n$ and zero to every convex body with dimension strictly less than $n$. This is analogous to the fact that the Lebesgue measure, $\mathrm{Leb}_n$, is the unique translation-invariant Borel measure on $\mathbbm{R}^n$ that assigns unit volume to the cube $\set{Q}^n$. The technical impediment to proving Fact~\ref{fact:hadwiger} is that convex bodies form a smaller class than Borel sets, so they provide fewer constraints on the valuation. A canonical method for obtaining an invariant, continuous valuation $F$ is to integrate a continuous valuation $\varphi$ over a space of geometric objects, such as the group $\mathrm{E}(n)$ of all rigid motions. Then Fact~\ref{fact:hadwiger} demonstrates that $$ F(\set{K}) := \int_{\mathrm{E}(n)} \varphi(g \set{K}) \, \mu(\diff{g}) = \sum_{i=0}^n a_i \, \mathrm{V}_i(\set{K}) \quad\text{for $a_i \in \mathbbm{R}$.} $$ To determine the coefficients $a_i$, we typically evaluate the functional $F$ on simple convex bodies, such as Euclidean balls. This methodology can be used to obtain many results in integral geometry. Our approach to integral geometry replaces the intrinsic volumes with either the rotation volumes or the rigid motion volumes, depending on the transformation group in the problem. In the context of Hadwiger's functional theorem, this simply amounts to a change of basis for the linear space of invariant, continuous valuations. \subsection{Example: Crofton's Formula} To illustrate how Hadwiger's functional theorem is deployed, let us derive Crofton's slicing formula. For a convex body $\set{K} \subset \mathbbm{R}^n$ and an affine space $\set{E} \in \operatorname{Af}(m, n)$, introduce the functional $\varphi(\set{K}; \set{E}) := \mathrm{V}_0(\set{K} \cap \set{E})$. 
We easily verify that $\varphi(\cdot; \set{E})$ is a continuous valuation for each choice of $\set{E}$. Set $$ F(\set{K}) := \int_{\operatorname{Af}(m,n)} \varphi(\set{K}; \set{E}) \, \mu_m(\diff{\set{E}}) = \int_{\operatorname{Af}(m,n)} \mathrm{V}_0(\set{K} \cap \set{E}) \, \mu_m(\diff{\set{E}}). $$ The functional $F$ inherits the continuous valuation property from $\varphi$. The functional $F$ is also invariant under rigid motions. Indeed, for each $g \in \mathrm{E}(n)$, \begin{align*} F(g \set{K}) &= \int_{\operatorname{Af}(m,n)} \mathrm{V}_0( (g \set{K}) \cap \set{E}) \, \mu_m(\diff{\set{E}}) \\ &= \int_{\operatorname{Af}(m,n)} \mathrm{V}_0( \set{K} \cap (g^{-1} \set{E}) ) \, \mu_m(\diff{\set{E}}) = \int_{\operatorname{Af}(m,n)} \mathrm{V}_0( \set{K} \cap \set{E}) \, \mu_m(\diff{\set{E}}) = F(\set{K}). \end{align*} This point follows from the rigid motion invariance of the Euler characteristic $\mathrm{V}_0$ and the measure $\mu_m$. An application of Fact~\ref{fact:hadwiger} yields $$ F(\set{K}) = \int_{\operatorname{Af}(m,n)} \mathrm{V}_0( \set{K} \cap \set{E}) \, \mu_m(\diff{\set{E}}) = \sum_{i=0}^n a_i \, \mathrm{V}_i(\set{K}). $$ To calculate the coefficients $a_i$, select $\set{K} = \lambda \ball{n}$ for a scalar parameter $\lambda \geq 0$. $$ F(\lambda \ball{n}) = \int_{\operatorname{Af}(m,n)} \mathrm{V}_0((\lambda \ball{n}) \cap \set{E}) \, \mu_m(\diff{\set{E}}) = \kappa_{n-m} \cdot \lambda^{n-m}. $$ We have used the formula~\eqref{eqn:Af-normalization} to evaluate the integral. On the other hand, using Example~\ref{ex:ball}, $$ F(\lambda \ball{n}) = \sum_{i=0}^n a_i \mathrm{V}_i(\lambda \ball{n}) = \sum_{i=0}^n a_i \cdot \binom{n}{i} \frac{\kappa_{n}}{\kappa_{n-i}} \cdot \lambda^i. $$ Matching terms in the polynomials in the last two displays, we can identify the value of $a_{n-m}$. The remaining coefficients vanish: $a_i = 0$ for $i \neq n - m$. 
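As an independent numerical check on this coefficient matching (an illustrative aside), take $n = 2$ and $m = 1$. The matching gives $a_1 = \kappa_1^2 / \big( \binom{2}{1} \kappa_2 \big) = 2/\pi$, while the measure of lines that hit a planar convex body equals its average width over a uniformly random direction, by the parameterization of lines in Appendix~\ref{sec:invariant-Af}. A Python sketch for a square of side $s$, for which $\mathrm{V}_1$ is half the perimeter:

```python
# Numerical check of the Crofton coefficient for n = 2, m = 1 (illustrative).
# F(K) = mu_1{lines hitting K} equals the average width of K over a uniform
# direction; the coefficient matching predicts F(K) = (2/pi) * V_1(K).
from math import pi, sin, cos, isclose

s = 1.7        # side length of an axis-aligned square (arbitrary test value)
V1 = 2 * s     # V_1 of a square = half its perimeter

def average_width(steps=100_000):
    # Width of the square in direction theta is s * (|cos theta| + |sin theta|);
    # average it over theta uniform on [0, 2*pi) by the midpoint rule.
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * (2 * pi / steps)
        total += s * (abs(cos(t)) + abs(sin(t)))
    return total / steps

F = average_width()
a1 = 2 / pi    # = kappa_1 * kappa_1 / (binom(2,1) * kappa_2)
assert isclose(F, a1 * V1, rel_tol=1e-6)
print(f"measure of hitting lines ~ {F:.6f}; predicted a_1 * V_1 = {a1 * V1:.6f}")
```

Both quantities equal $4s/\pi$, confirming the identified coefficient in this planar instance.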
In summary,
$$ F(\set{K}) = \int_{\operatorname{Af}(m,n)} \mathrm{V}_0(\set{K} \cap \set{E}) \, \mu_m(\diff{\set{E}}) = \frac{\kappa_m \kappa_{n-m}}{\binom{n}{m} \kappa_n} \cdot \mathrm{V}_{n-m}(\set{K}). $$
This is Crofton's famous result.

\section{Reweighting Integral Geometry Formulas} \label{sec:formulas}

Our work involves nonstandard presentations of classic formulas from integral geometry. This appendix describes the simple tools we need to translate the familiar statements into our language.

\subsection{Structure Coefficients}

Schneider \& Weil~\cite{SW08:Stochastic-Integral} phrase integral geometry results in terms of the standard intrinsic volumes. These formulations involve a family of structure coefficients.

\begin{definition}[Structure Coefficients] \label{def:structure} For nonnegative integers $i$ and $j$, introduce the \emph{structure coefficient}
$$ c_j^i := \frac{i! \kappa_i}{j! \kappa_j}. $$
This definition is extended to families $(i_1, \dots, i_r)$ and $(j_1, \dots, j_r)$ of nonnegative integers:
$$ c_{j_1,\dots,j_r}^{i_1,\dots,i_r} := \prod_{s=1}^r c_{j_s}^{i_s} = \prod_{s=1}^r \frac{i_s! \kappa_{i_s}}{j_s! \kappa_{j_s}}. $$
\end{definition}

The following simple result allows us to manipulate the structure coefficients. It is the source of the weights that appear in the rigid motion volumes and the rotation volumes.

\begin{lemma}[Structure Coefficients] \label{lem:structure} Consider four nonnegative integers $i_1,i_2,j_1,j_2$ that satisfy the relation $i_1 + i_2 = j_1 + j_2$. Then the structure coefficient can be written as
$$ c_{j_1, j_2}^{i_1, i_2} = \frac{\omega_{j_1+1}}{\omega_{i_1+1}} \cdot \frac{\omega_{j_2+1}}{\omega_{i_2+1}}. $$
\end{lemma}

\begin{proof} This statement follows instantly from Definition~\ref{def:structure}, the formulas~\eqref{eqn:ball-sphere} for the volume and surface area of the Euclidean ball, and the Legendre duplication formula.
\end{proof} \subsection{Integral Geometry with Intrinsic Volumes} For completeness, we give the usual statements of the integral geometry formulas that we have studied in this paper. These formulations provide additional intuition and may be easier to interpret. Using Lemma~\ref{lem:structure}, these results lead to the corresponding statements in terms of the rotation volumes or the rigid motion volumes. \subsubsection{Cartesian Products} First, we note that the intrinsic volumes of a direct product are given by the convolution of the intrinsic volumes. \begin{fact}[Cartesian Products] \label{fact:product-intvol} Consider nonempty convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$. For each index $i = 0,1,2,\dots,2n$, the intrinsic volumes of the product satisfy $$ \mathrm{V}_i(\set{K} \times \set{M}) = \sum_{j \leq i} \mathrm{V}_j(\set{K}) \, \mathrm{V}_{i-j}(\set{M}). $$ \end{fact} There is a simple proof of Fact~\ref{fact:product-intvol}, due to Hadwiger~\cite{Had75:Willssche}, based on an integral representation (Corollary~\ref{cor:intvol-metric}) of the total intrinsic volume. See~\cite[Lem.~14.2.1]{SW08:Stochastic-Integral} or~\cite[Cor.~5.4]{LMNPT20:Concentration-Euclidean}. \subsubsection{Projection Formula} The projection formula appears as~\cite[Thm.~6.2.2]{SW08:Stochastic-Integral}; the simplest forms date back to work of Cauchy and Kubota. \begin{fact}[Projection Formula: Intrinsic Volumes] \label{fact:projection-intvol} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$. For each subspace dimension $m = 0, 1,2, \dots, n$ and each index $i = 0,1,2,\dots, m$, $$ \int_{\mathrm{Gr}(m,n)} \mathrm{V}_i(\set{K}|\set{L}) \, {\nu}_m(\diff{\set{L}}) = c^{m,n-i}_{n,m-i} \cdot \mathrm{V}_i(\set{K}). $$ The Grassmannian $\mathrm{Gr}(m, n)$ is equipped with its invariant probability measure $\nu_m$; see Appendix~\ref{sec:invariant-grass}. 
\end{fact} The case $m = i$ is called \emph{Kubota's formula}; it shows that the $i$th intrinsic volume of $\set{K}$ is proportional to the average $i$-dimensional volume of projections of $\set{K}$ onto $i$-dimensional subspaces. The case $i = n - 1$ is called \emph{Cauchy's formula}; it expresses the surface area of $\set{K}$ in terms of the average $(n-1)$-dimensional volume of the projections of $\set{K}$ onto hyperplanes through the origin. \subsubsection{Slicing Formula} The slicing formula is due to Crofton, and it appears as~\cite[Thm.~5.1.1]{SW08:Stochastic-Integral}. \begin{fact}[Slicing Formula: Intrinsic Volumes] \label{fact:slicing-intvol} Consider a nonempty convex body $\set{K} \subset \mathbbm{R}^n$. For each affine space dimension $m = 0, 1, 2, \dots, n$ and each index $i = 0,1,2,\dots, m$, $$ \int_{\operatorname{Af}(m,n)} \mathrm{V}_i(\set{K} \cap \set{E}) \, \mu_m(\diff{\set{E}}) = c^{m,n-m+i}_{i,n} \cdot \mathrm{V}_{n-m+i}(\set{K}). $$ The affine Grassmannian $\operatorname{Af}(m,n)$ is equipped with its invariant measure $\mu_m$; see Appendix~\ref{sec:invariant-Af}. \end{fact} Of particular interest is the case $i = 0$. Indeed, the Euler characteristic $\mathrm{V}_0(\set{K} \cap \set{E})$ registers whether the affine space $\set{E}$ intersects the set $\set{K}$. Thus, the slicing formula shows that the intrinsic volume $\mathrm{V}_{n-m}(\set{K})$ is proportional to the measure of the set of $m$-dimensional affine spaces that hit $\set{K}$. \subsubsection{Rotation Mean Formula} The rotation mean formula is attributed to Hadwiger; see~\cite[Thm.~6.1.1]{SW08:Stochastic-Integral}. \begin{fact}[Rotation Mean Value Formula: Intrinsic Volumes] \label{fact:rotmean-intvol} Consider two nonempty convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$. Then $$ \int_{\mathrm{SO}(n)} \mathrm{V}_i(\set{K} + \mtx{O} \set{M}) \, \nu(\diff{\mtx{O}}) = \sum_{j=0}^i c_{n-i,n}^{n+j-i,n-j} \cdot \mathrm{V}_j(\set{K}) \, \mathrm{V}_{i-j}(\set{M}). 
$$
The special orthogonal group $\mathrm{SO}(n)$ is equipped with its invariant probability measure $\nu$; see Appendix~\ref{sec:invariant-SO}. \end{fact}

The special case $\set{M} = \lambda \ball{n}$ is equivalent to the Steiner formula~\eqref{eqn:steiner-intro}.

\subsubsection{Kinematic Formula}

The kinematic formula goes back to work of Blaschke, Santal{\'o}, Chern and others. We have drawn the statement from~\cite[Thm.~5.1.3]{SW08:Stochastic-Integral}.

\begin{fact}[Kinematic Formula] \label{fact:kinematic-intvol} Consider two nonempty convex bodies $\set{K}, \set{M} \subset \mathbbm{R}^n$. For each $i = 0,1,2,\dots,n$,
$$ \int_{\mathrm{SE}(n)} \mathrm{V}_i(\set{K} \cap g \set{M}) \, \mu(\diff{g}) = \sum_{j=i}^n c_{i,n}^{j,n+i-j} \cdot \mathrm{V}_j(\set{K}) \, \mathrm{V}_{n+i-j}(\set{M}). $$
The group $\mathrm{SE}(n)$ of proper rigid motions on $\mathbbm{R}^n$ is equipped with its invariant measure $\mu$; see Appendix~\ref{sec:invariant-RM}. \end{fact}

The special case $i = 0$ is called the \emph{principal kinematic formula}, and it describes the measure of the set of rigid motions that bring $\set{M}$ into contact with $\set{K}$.

\subsection{History: Renormalization}

The idea of renormalizing the intrinsic volumes to simplify integral geometry formulas is due to Nijenhuis~\cite{nijenhuis1974chern}; see also~\cite[pp.~176--177]{SW08:Stochastic-Integral}. In addition to rescaling the intrinsic volumes, Nijenhuis also rescales the invariant measures. In contrast, we work with the standard scaling of the invariant measures and only modify the normalization of the intrinsic volumes. Although our reformulations are quite trivial, we have not been able to locate a reference for this approach in the literature. Therefore, we have given an appropriate amount of detail.
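As a concrete, machine-checkable sanity test of the formulas recorded above (ours, not drawn from~\cite{SW08:Stochastic-Integral}), Fact~\ref{fact:product-intvol} can be verified exactly on boxes, using the standard fact that the intrinsic volumes of an axis-aligned box are the elementary symmetric polynomials of its side lengths:

```python
import math
from itertools import combinations

def intrinsic_volumes_box(sides):
    # Standard fact: for a box with the given side lengths, the i-th
    # intrinsic volume is the i-th elementary symmetric polynomial of
    # the side lengths (so V_0 = 1 and V_n is the ordinary volume).
    n = len(sides)
    return [sum(math.prod(c) for c in combinations(sides, i))
            for i in range(n + 1)]

# K = [0,2] x [0,3] and M = [0,5]; the product K x M is a box in R^3.
VK = intrinsic_volumes_box([2.0, 3.0])      # V_0..V_2 = 1, 5, 6
VM = intrinsic_volumes_box([5.0])           # V_0..V_1 = 1, 5
VP = intrinsic_volumes_box([2.0, 3.0, 5.0])

# Fact (Cartesian Products): V_i(K x M) = sum_j V_j(K) * V_{i-j}(M).
for i, v in enumerate(VP):
    conv = sum(VK[j] * VM[i - j]
               for j in range(len(VK)) if 0 <= i - j < len(VM))
    assert abs(v - conv) < 1e-12
```

For boxes the convolution identity is an exact polynomial identity, since the elementary symmetric polynomials of a concatenated list of side lengths are the convolution of those of the two factors.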
\section{Intrinsic Volumes: Refined Concentration} \label{app:intvol} Last, we state and prove a refined concentration inequality for the intrinsic volume random variable of a convex body, introduced in Section~\ref{sec:intvol-def}. \subsection{Variance and Cgf} Our main result for intrinsic volumes gives detailed bounds for the variance of the intrinsic volume random variable and its cgf. By standard manipulations, these statements lead to concentration inequalities. \begin{theorem}[Intrinsic Volumes: Variance and Cgf] \label{thm:intvol-conc} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body with intrinsic volume random variable $I_{\set{K}}$, as in Definition~\ref{def:wintvol-rv}. Define the central intrinsic volume and its complement: $$ \delta(\set{K}) := \operatorname{\mathbb{E}}[ I_{\set{K}} ] \quad\text{and}\quad \delta_{\circ}(\set{K}) := n - \delta(\set{K}). $$ The variance of $I_{\set{K}}$ satisfies $$ \Var[ I_{\set{K}} ] \leq \frac{2 \delta(\set{K}) \delta_{\circ}(\set{K})}{n + \delta(\set{K})} \leq \frac{2 \delta(\set{K}) \delta_{\circ}(\set{K})}{n} =: \sigma^2(\set{K}). $$ The cgf of $I_{\set{K}}$ satisfies \begin{align*} \xi_{I_{\set{K}}}(\theta) &\leq \sigma^2(\set{K}) \cdot \frac{\mathrm{e}^{\beta_{\circ} \theta} - \beta_{\circ} \theta - 1}{\beta_{\circ}^2} \quad\text{for $\theta \geq 0$} &\text{where}&& \beta_{\circ} &:= \frac{2\delta_{\circ}(\set{K})}{n} \leq 2; \\ \xi_{I_{\set{K}}}(\theta) &\leq \sigma^2(\set{K}) \cdot \frac{\mathrm{e}^{\beta \theta} - \beta \theta - 1}{\beta^2} \quad\text{for $\theta \leq 0$} &\text{where}&& \beta &:= \frac{2\delta(\set{K})}{n} \leq 2. \end{align*}\end{theorem} \noindent The proof of Theorem~\ref{thm:intvol-conc} occupies the rest of this Appendix. In combination with Fact~\ref{fact:poisson}, this result yields Poisson-type tail bounds for the intrinsic volume random variable. We can also derive weaker Bernstein inequalities, as in~\eqref{eqn:bernstein}. 
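To build intuition for Theorem~\ref{thm:intvol-conc}, here is a small numerical experiment (ours, purely illustrative). For the cube $[0,\lambda]^n$, the intrinsic volumes are $\mathrm{V}_i = \binom{n}{i} \lambda^i$, so the distribution of the intrinsic volume random variable and the variance bound can be evaluated exactly:

```python
import math

def cube_variance_check(n, lam):
    # Intrinsic volumes of the cube [0, lam]^n: V_i = C(n, i) * lam**i.
    V = [math.comb(n, i) * lam**i for i in range(n + 1)]
    W = sum(V)
    # The intrinsic volume random variable takes the value n - i
    # with probability V_i / W.
    mean = sum((n - i) * V[i] for i in range(n + 1)) / W        # delta(K)
    second = sum((n - i) ** 2 * V[i] for i in range(n + 1)) / W
    var = second - mean ** 2
    bound = 2.0 * mean * (n - mean) / (n + mean)                # theorem bound
    assert var <= bound + 1e-12
    return var, bound

for n in (1, 2, 5, 20):
    for lam in (0.1, 1.0, 10.0):
        cube_variance_check(n, lam)
```

In fact, for the cube the index $n - I_{\set{K}}$ follows the $\textsc{binomial}(n, p)$ distribution with $p = \lambda/(1+\lambda)$, so the check reduces to the elementary inequality $np(1-p) \leq 2np(1-p)/(2-p)$.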
Using the cgf bounds and~\eqref{eqn:cgf-indep}, we can develop probability inequalities for sums of independent intrinsic volume random variables; these results provide concentration for the intrinsic volumes of a Cartesian product of convex bodies via Fact~\ref{fact:product-intvol}. We omit detailed statements.

\subsection{Comparison with Prior Work} \label{sec:intvol-ulc}

Theorem~\ref{thm:intvol-conc} improves substantially over the concentration results for intrinsic volumes reported in our previous work~\cite{LMNPT20:Concentration-Euclidean}. For example, the earlier paper only achieves the variance bound $\Var[ I_{\set{K}} ] \leq 4n$, while our new approach shows that the variance does not exceed $2n$, and may be far smaller.

The Alexandrov--Fenchel inequalities~\cite[Sec.~7.3]{Sch14:Convex-Bodies} imply that the intrinsic volumes of a convex body form an ultra-log-concave (ULC) sequence~\cite{Che76:Processus-Gaussiens,McM91:Inequalities-Intrinsic}. Owing to~\cite[Thm.~1.3 and Lem.~5.1]{Joh17:Discrete-Log-Sobolev}, this fact ensures that the intrinsic volume sequence of $\set{K}$ has Poisson-type concentration with variance proportional to $\mathrm{V}_1(\set{K})$. In contrast, Theorem~\ref{thm:intvol-conc} shows that the intrinsic volumes exhibit Poisson concentration with variance $\sigma^2(\set{K}) \leq 2 \delta_{\circ}(\set{K})$. Up to the factor $2$, our new result is always sharper than the bound that follows from the ULC property because $\mathrm{V}_1(\set{K}) \geq \delta_{\circ}(\set{K})$. This surprising observation can be extracted from~\cite[Lem.~5.1 and Lem.~5.3]{Joh17:Discrete-Log-Sobolev}.

After this paper was written, Aravinda, Marsiglietti, and Melbourne~\cite[Cor.~1.1]{AMM21:Concentration-Inequalities} proved that all ULC sequences exhibit Poisson-type concentration. Their argument follows by a direct comparison between the ULC sequence and the Poisson distribution with the same mean.
The concentration inequality they obtain, however, remains slightly weaker than the one that follows from Theorem~\ref{thm:intvol-conc}. \subsection{Reduction to Thermal Variance Bound} Theorem~\ref{thm:intvol-conc} follows from a bound on the thermal variance of the intrinsic volume random variable, which we establish in the remaining part of this Appendix. \begin{proposition}[Intrinsic Volumes: Thermal Variance] \label{prop:intvol-tvar} Let $\set{K} \subset \mathbbm{R}^n$ be a nonempty convex body. The thermal variance of the associated intrinsic volume random variable $I_{\set{K}}$ satisfies $$ \xi_{I_{\set{K}}}''(\theta) \leq \frac{2 \xi_{I_{\set{K}}}'(\theta) \cdot \big[ n - \xi_{I_{\set{K}}}'(\theta) \big]}{n + \xi_{I_{\set{K}}}'(\theta)} \leq \frac{2}{n} \cdot \xi_{I_{\set{K}}}'(\theta) \cdot \big[n - \xi_{I_{\set{K}}}'(\theta)\big] \quad\text{for all $\theta \in \mathbbm{R}$.} $$ \end{proposition} \begin{proof}[Proof of Theorem~\ref{thm:intvol-conc} from Proposition~\ref{prop:intvol-tvar}] The argument is essentially the same as the proof of Theorem~\ref{thm:rmvol-conc} from Proposition~\ref{prop:rmvol-tvar}. We use the stronger inequality from Proposition~\ref{prop:intvol-tvar} to prove the variance bound, and we back off to the weaker inequality to establish cgf bounds. \end{proof} \subsection{Setup} Let us commence with the proof of Proposition~\ref{prop:intvol-tvar}, the thermal variance bound for the intrinsic volumes. This argument follows the same pattern as the others. Fix a nonempty convex body $\set{K} \subset \mathbbm{R}^n$, which we suppress. For $i = 0, 1, 2, \dots, n$, let $\mathrm{V}_i$ be the $i$th intrinsic volume of $\set{K}$, and let $\mathrm{W}$ be the total intrinsic volume. 
The intrinsic volume random variable $I$ follows the distribution
\begin{equation} \label{eqn:intvol-rv-pf} \Prob{ I = n - i } = \mathrm{V}_i / \mathrm{W} \quad\text{for $i = 0, 1, 2, \dots, n$.} \end{equation}
Write $\delta = \operatorname{\mathbb{E}} I $ for the expectation of the intrinsic volume random variable.

\subsection{The Distance Integral}

The first step is to express the exponential moments of the sequence of intrinsic volumes as a distance integral.

\begin{proposition}[Intrinsic Volumes: Distance Integral] \label{prop:intvol-distint} For each $\theta \in \mathbbm{R}$, define the convex potential
\begin{equation} \label{eqn:intvol-potent} J_{\theta}(\vct{x}) := \pi \mathrm{e}^{-2\theta} \dist^2(\vct{x}) \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} \end{equation}
For any function $h : \mathbbm{R}_+ \to \mathbbm{R}$ where the expectations on the right-hand side are finite,
\begin{equation} \label{eqn:intvol-distint} \int_{\mathbbm{R}^n} h(2 J_{\theta}(\vct{x})) \, \mathrm{e}^{-J_{\theta}(\vct{x})} \idiff{\vct{x}} =\sum_{i=0}^n \operatorname{\mathbb{E}}[ h(X_{n-i})] \, \mathrm{e}^{(n-i)\theta} \cdot \mathrm{V}_i. \end{equation}
The random variable $X_i \sim \textsc{chisq}(i)$ follows the chi-squared distribution with $i$ degrees of freedom. By convention, $X_0 = 0$. \end{proposition}

\begin{proof} The potential $J_{\theta}$ defined in~\eqref{eqn:intvol-potent} is convex because of Fact~\ref{fact:dist}\eqref{dist:1}, which states that the squared distance is convex. Next, we instantiate Fact~\ref{fact:distance-integral} with the function $f(r) = h(2\pi \mathrm{e}^{-2\theta} r^2) \, \mathrm{e}^{-\pi \mathrm{e}^{-2\theta} r^2}$ to arrive at
$$ \int_{\mathbbm{R}^n} h(2 J_{\theta}(\vct{x})) \, \mathrm{e}^{-J_{\theta}(\vct{x})} \idiff{\vct{x}} = h(0) \cdot \mathrm{V}_n + \sum_{i=0}^{n-1} \left( \int_0^\infty h(2\pi \mathrm{e}^{-2\theta} r^2) \cdot \mathrm{e}^{-\pi \mathrm{e}^{-2\theta} r^2} r^{n-i-1} \idiff{r} \right) \omega_{n-i} \cdot \mathrm{V}_i.
$$
In the integral, making the change of variables $2\pi\mathrm{e}^{-2\theta} r^2 \mapsto s$, we recognize an expectation with respect to the chi-squared distribution:
\begin{multline*} \omega_{n-i} \left( \int_0^\infty h(2\pi \mathrm{e}^{-2\theta} r^2) \cdot \mathrm{e}^{-\pi \mathrm{e}^{-2\theta} r^2} r^{n-i-1} \idiff{r} \right) \\ = \frac{2 \pi^{(n-i)/2}}{\Gamma((n-i)/2)} \cdot \frac{1}{2 (2\pi \mathrm{e}^{-2\theta})^{(n-i)/2}} \int_0^\infty h(s) \cdot \mathrm{e}^{-s/2} s^{(n-i)/2 - 1} \idiff{s} = \operatorname{\mathbb{E}}[ h(X_{n-i}) ] \, \mathrm{e}^{(n-i) \theta}. \end{multline*}
We have also used the formula~\eqref{eqn:ball-sphere} for the surface area $\omega_{n-i}$ of the sphere. Combine the displays to complete the proof. \end{proof}

This result yields alternative formulations for the total intrinsic volume and the central intrinsic volume. The result for $\mathrm{W}$ is due to Hadwiger~\cite{Had75:Willssche}.

\begin{corollary}[Intrinsic Volumes: Metric Formulations] \label{cor:intvol-metric} The total intrinsic volume $\mathrm{W}$ and the central intrinsic volume $\delta$ admit the expressions
\begin{align*} \mathrm{W} &= \int_{\mathbbm{R}^n} \mathrm{e}^{-\pi \dist^2(\vct{x})} \idiff{\vct{x}} &\text{and}&& \delta &= \frac{1}{\mathrm{W}} \int_{\mathbbm{R}^n} 2\pi \dist^2(\vct{x}) \, \mathrm{e}^{-\pi \dist^2(\vct{x})} \idiff{\vct{x}}. \end{align*}
\end{corollary}

\begin{proof} Apply Proposition~\ref{prop:intvol-distint} at $\theta = 0$ with $h(s) = 1$ and then with $h(s) = s$. \end{proof}

\subsection{A Family of Log-Concave Measures}

Proposition~\ref{prop:intvol-distint} leads to an expression for the mgf $m_{I}$ of the intrinsic volume random variable.
Choose $h(s) = 1$ to find that
\begin{equation} \label{eqn:intvol-mgf} m_I(\theta) \stackrel{\eqref{eqn:mgf}}{=} \operatorname{\mathbb{E}} \mathrm{e}^{\theta I} \stackrel{\eqref{eqn:intvol-rv-pf}}{=} \frac{1}{\mathrm{W}} \sum_{i=0}^n \mathrm{e}^{(n-i)\theta} \cdot \mathrm{V}_i = \frac{1}{\mathrm{W}} \int_{\mathbbm{R}^n} \mathrm{e}^{-J_{\theta}(\vct{x})} \idiff{\vct{x}}. \end{equation}
For each $\theta \in \mathbbm{R}$, we introduce a log-concave probability measure $\mu_{\theta}$ with density
$$ \frac{\diff \mu_{\theta}(\vct{x})}{\diff{\vct{x}}} = \frac{1}{\mathrm{W}} \cdot \frac{1}{m_{I}(\theta)} \cdot \mathrm{e}^{-J_{\theta}(\vct{x})} \quad\text{for $\vct{x} \in \mathbbm{R}^n$.} $$
Indeed, the measure $\mu_{\theta}$ has total mass one because of~\eqref{eqn:intvol-mgf}. With this notation, we can write Proposition~\ref{prop:intvol-distint} in a more probabilistic fashion.

\begin{corollary}[Intrinsic Volumes: Probabilistic Formulation] \label{cor:intvol-lc} Instate the notation from Proposition~\ref{prop:intvol-distint}. Draw a random vector $\vct{z} \sim \mu_{\theta}$. Then
$$ \operatorname{\mathbb{E}}[ h(2J_{\theta}(\vct{z})) ] = \frac{1}{m_I(\theta)} \sum_{i=0}^n \operatorname{\mathbb{E}}[ h(X_{n-i}) ] \, \mathrm{e}^{(n-i)\theta} \cdot \Prob{ I = n - i }. $$
\end{corollary}

\begin{proof} This formulation is just a reinterpretation of Proposition~\ref{prop:intvol-distint}. \end{proof}

\subsection{A Thermal Variance Identity}

With Corollary~\ref{cor:intvol-lc} at hand, we quickly obtain identities for the thermal mean $\xi_I'$ and the thermal variance $\xi_I''$ of the intrinsic volume random variable $I$. These results are framed in terms of the mean and variance of the log-concave measure $\mu_{\theta}$.

\begin{lemma}[Intrinsic Volumes: Thermal Variance Identity] \label{lem:intvol-tmv} Draw a random vector $\vct{z} \sim \mu_{\theta}$.
Then \begin{equation} \label{eqn:intvol-tmv} \xi_I'(\theta) = \operatorname{\mathbb{E}}[ 2 J_{\theta}(\vct{z}) ] \quad\text{and}\quad \xi_{I}''(\theta) = 4 \Var[ J_{\theta}(\vct{z}) ] - 2 \xi_{I}'(\theta). \end{equation} \end{lemma} \begin{proof} The chi-squared random variable $X_{n-i}$ satisfies the identities $$ \operatorname{\mathbb{E}}[ X_{n-i} ] = n - i \quad\text{and}\quad \operatorname{\mathbb{E}}[ X_{n-i}^2 - 2X_{n-i} ] = (n-i)^2. $$ Therefore, Corollary~\ref{cor:intvol-lc} with $h(s) = s$ implies that \begin{equation} \label{eqn:intvol-tm-pf} \xi_I'(\theta) \overset{\eqref{eqn:thermal-mean}}{=} \frac{1}{m_I(\theta)} \sum_{i=0}^n (n-i) \, \mathrm{e}^{(n-i)\theta} \cdot \Prob{ I = n - i } = \operatorname{\mathbb{E}}[ 2J_{\theta}(\vct{z}) ]. \end{equation} Next, apply Corollary~\ref{cor:intvol-lc} with $h(s) = s^2 - 2s$ to obtain \begin{align*} \xi_{I}''(\theta) &\stackrel{\eqref{eqn:thermal-var}}{=} \left[ \frac{1}{m_I(\theta)} \sum_{i=0}^n (n-i)^2 \, \mathrm{e}^{(n-i)\theta} \cdot \Prob{ I = n - i } \right] - (\xi_I'(\theta))^2 \\ &\stackrel{\eqref{eqn:intvol-tm-pf}}{=} \operatorname{\mathbb{E}}[ (2 J_{\theta}(\vct{z}))^2 - 2 (2J_{\theta}(\vct{z})) ] - (\operatorname{\mathbb{E}}[ 2 J_{\theta}(\vct{z})])^2 \stackrel{\eqref{eqn:intvol-tm-pf}}{=} \Var[ 2 J_{\theta}(\vct{z}) ] - 2 \xi_I'(\theta). \end{align*}Since the variance is $2$-homogeneous, this result is equivalent to the statement. \end{proof} \subsection{A Variance Bound} We have now converted the problem of bounding the thermal variance of the intrinsic volume random variable into a problem about the variance of a function with respect to a log-concave measure. Here is the required estimate. \begin{lemma}[Intrinsic Volumes: Variance Bound] \label{lem:intvol-varbd} Instate the notation of Lemma~\ref{lem:intvol-tmv}. 
For a random variable $\vct{z} \sim \mu_{\theta}$, \begin{equation} \label{eqn:intvol-varbd} \Var[ J_{\theta}(\vct{z}) ] \leq \frac{n \operatorname{\mathbb{E}}[ 2J_{\theta}(\vct{z}) ]}{n + \operatorname{\mathbb{E}}[ 2 J_{\theta}(\vct{z}) ]}. \end{equation} \end{lemma} To establish Lemma~\ref{lem:intvol-varbd}, we must invoke a dimensional variance inequality for log-concave measures. \begin{fact}[Dimensional Brascamp--Lieb] \label{fact:dim-brascamp-lieb} Let $J : \mathbbm{R}^n \to \mathbbm{R}_{++}$ be a twice differentiable and strongly convex potential. Consider the log-concave probability measure $\mu$ on $\mathbbm{R}^n$ whose density is proportional to $\mathrm{e}^{-J}$. Draw a random variable $\vct{z} \sim \mu$. For any differentiable function $f : \mathbbm{R}^n \to \mathbbm{R}$, $$ \Var[ f(\vct{z}) ] \leq \int \ip{(\operatorname{Hess} J)^{-1} \grad f}{\grad f} \diff{\mu} - \frac{\left[ \int Jf \idiff{\mu} - \int J \idiff{\mu} \int f \idiff{\mu} \right]^2}{n - \Var[ J(\vct{z}) ] }. $$ We have suppressed the integration variable for legibility. \end{fact} Fact~\ref{fact:dim-brascamp-lieb} was established in~\cite[Prop.~4.1]{BGG18:Dimensional-Improvements} by means of an optimal transport argument. The second term sharpens the classic Brascamp--Lieb variance inequality~\cite[Thm.~4.1]{BL76:Extensions-Brunn-Minkowski} for a log-concave measure by taking into account the ambient dimension. We can obtain a weaker version of Lemma~\ref{lem:intvol-varbd} by a direct application of Brascamp--Lieb. \begin{proof}[Proof of Lemma~\ref{lem:intvol-varbd}] Fix the parameter $\theta \in \mathbbm{R}$. We cannot apply the dimensional Brascamp--Lieb inequality directly because the potential $J_{\theta}$ is not strongly convex. 
Instead, for each $\varepsilon > 0$, we construct the strongly convex potential
$$ J^{\varepsilon}(\vct{x}) := J_{\theta}(\vct{x}) + \half\varepsilon \norm{\vct{x}}^2, \quad\text{recalling that}\quad J_{\theta}(\vct{x}) = \pi \mathrm{e}^{-2\theta} \dist^2(\vct{x}). $$
We will consider the function $f = J_{\theta}$, without any perturbation. Our goal is to evaluate the quadratic form induced by the inverse Hessian of $J^\varepsilon$ at the vector $\grad f$.

To that end, note that the original potential $J_{\theta}$ satisfies
\begin{equation} \label{eqn:intvolJ-grad} \grad J_{\theta}( \vct{x} ) = \pi \mathrm{e}^{-2 \theta} \grad \dist^2(\vct{x}) = 2\pi \mathrm{e}^{-2\theta} \dist(\vct{x}) \, \vct{n}(\vct{x}). \end{equation}
We have used Fact~\ref{fact:dist}\eqref{dist:2}. Owing to Fact~\ref{fact:dist}\eqref{dist:3}, the Hessian of the perturbed potential $J^{\varepsilon}$ satisfies
$$ ( \operatorname{Hess} J^{\varepsilon}(\vct{x}) ) \, \grad J_{\theta}(\vct{x}) = (2 \pi \mathrm{e}^{-2\theta} + \varepsilon) \grad J_{\theta}(\vct{x}). $$
Multiply both sides by the inverse Hessian, and take the inner product with $\grad J_{\theta}$ to arrive at
$$ \ip{( \operatorname{Hess} J^{\varepsilon}(\vct{x}) )^{-1} \, \grad J_{\theta}(\vct{x})}{ \grad J_{\theta}(\vct{x}) } = \frac{(2\pi \mathrm{e}^{-2\theta})^2}{2\pi\mathrm{e}^{-2\theta} + \varepsilon} \dist^2(\vct{x}) = \frac{2\pi \mathrm{e}^{-2\theta}}{2\pi\mathrm{e}^{-2\theta} + \varepsilon} \cdot 2 J_{\theta}(\vct{x}). $$
We have used~\eqref{eqn:intvolJ-grad} and the property that the normal vector $\vct{n}(\vct{x})$ has unit length when $\dist(\vct{x}) > 0$. Taking the limit as $\varepsilon \downarrow 0$,
$$ \ip{( \operatorname{Hess} J_{\theta}(\vct{x}) )^{-1} \, \grad J_{\theta}(\vct{x})}{ \grad J_{\theta}(\vct{x}) } = 2J_{\theta}(\vct{x}). $$
This computation delivers the integrand in the dimensional Brascamp--Lieb inequality.
Fact~\ref{fact:dim-brascamp-lieb} applies to the strongly convex potential $J^{\varepsilon}$ and the function $f = J_{\theta}$. By dominated convergence, the inequality remains valid with the limiting potential $J_{\theta}$. Thus, for $\vct{z} \sim \mu_{\theta}$, $$ \Var[ J_{\theta}(\vct{z}) ] \leq \int 2J_{\theta} \idiff \mu - \frac{\Var[ J_{\theta}(\vct{z}) ]^2}{n - \Var[ J_{\theta}(\vct{z}) ]}. $$ The remaining integral coincides with $\operatorname{\mathbb{E}}[ 2J_{\theta}(\vct{z})]$. Solve for the variance to complete the proof. \end{proof} \subsection{Proof of Proposition~\ref{prop:intvol-tvar}} We are now prepared to bound the thermal variance $\xi_{I}''(\theta)$ of the intrinsic volume random variable. Together, Lemmas~\ref{lem:intvol-tmv} and~\ref{lem:intvol-varbd} imply that $$ \xi_{I}''(\theta) \stackrel{\eqref{eqn:intvol-tmv}}{=} 4 \Var[ J_{\theta}(\vct{z}) ] - 2 \xi_I'(\theta) \stackrel{\eqref{eqn:intvol-varbd}}{\leq} \frac{4n \operatorname{\mathbb{E}}[2J_{\theta}]}{n + \operatorname{\mathbb{E}}[2J_{\theta}]} - 2 \xi_I'(\theta) \stackrel{\eqref{eqn:intvol-tmv}}{=} \frac{2 \xi_I'(\theta) (n - \xi_I'(\theta))}{n + \xi_I'(\theta)}. $$ This completes the proof of Proposition~\ref{prop:intvol-tvar} and, with it, the proof of Theorem~\ref{thm:intvol-conc}. \section*{Acknowledgments} This paper realizes a vision that was put forth by our colleague Michael McCoy in 2013. The project turned out to be more challenging than anticipated. The authors would like to thank Jiajie Chen and De Huang for their insights on the concentration of information inequality. Franck Barthe, Arnaud Marsiglietti, Michael McCoy, James Melbourne, Ivan Nourdin, Giovanni Peccati, Rolf Schneider, and Ramon Van Handel provided valuable feedback on the first draft of this paper. 
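Before closing, we record a numerical confirmation (ours, for illustration only) of the metric formulation that drives this argument. With $h \equiv 1$ and $\theta = 0$, Proposition~\ref{prop:intvol-distint} gives Hadwiger's identity $\mathrm{W} = \int_{\mathbbm{R}^n} \mathrm{e}^{-\pi \dist^2(\vct{x})} \idiff{\vct{x}}$ from Corollary~\ref{cor:intvol-metric}. For the disk $r \ball{2}$, the total intrinsic volume is $1 + \pi r + \pi r^2$, and a one-dimensional radial quadrature reproduces this value:

```python
import math

def wills_disk(r, grid=200_000, cutoff=10.0):
    # int_{R^2} exp(-pi * dist^2(x, r B^2)) dx, computed by a radial
    # midpoint rule: dist = 0 inside the disk, and dist = s at radius r + s.
    total = math.pi * r * r                   # contribution of the interior
    ds = cutoff / grid
    for k in range(grid):
        s = (k + 0.5) * ds                    # distance outside the disk
        total += math.exp(-math.pi * s * s) * 2.0 * math.pi * (r + s) * ds
    return total

r = 1.7
exact = 1.0 + math.pi * r + math.pi * r * r   # V_0 + V_1 + V_2 of the disk
assert abs(wills_disk(r) - exact) < 1e-3
```

The outer integral evaluates in closed form to $1 + \pi r$, which explains the agreement; the quadrature is included only to show that the identity is easy to test mechanically.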
MAL would like to thank the Isaac Newton Institute for Mathematical Sciences for support and hospitality during the programme ``Approximation, Sampling and Compression in Data Science'', when work on this paper was undertaken. This work was supported by EPSRC grant number EP/R014604/1. JAT gratefully acknowledges funding from ONR awards N00014-11-1002, N00014-17-12146, and N00014-18-12363. He would like to thank his family for support during these trying times. \newcommand{\etalchar}[1]{$^{#1}$} \end{document}
\begin{document} \newtheorem{thm}{Theorem}[section] \newtheorem{corollary}[thm]{Corollary} \newtheorem{prop}[thm]{Proposition} \newtheorem{lemma}[thm]{Lemma} \theoremstyle{remark} \newtheorem{remark}[thm]{Remark} \numberwithin{equation}{section} \def\alpha{\alpha} \def\beta{\beta} \def\gamma{\gamma} \def\delta{\delta} \def\Delta{\Delta} \def\varepsilon{\varepsilon} \def\zeta{\zeta} \def\theta{\theta} \def\kappa{\kappa} \def\lambda{\lambda} \def\sigma{\sigma} \def\varphi{\varphi} \def\omega{\omega} \def\Omega{\Omega} \def\mathbb{R}{\mathbb{R}} \def\mathbb{N}{\mathbb{N}} \def\mathbb{Z}{\mathbb{Z}} \def\widetilde{\widetilde} \def\widehat{\widehat} \def\overline{\overline} \def\partial{\partial} \def\Rightarrow{\Rightarrow} \def\noindent{\bf Proof.} {\noindent{\bf Proof.} } \def $\Box${ $\Box$} \providecommand{\flr}[1]{\lfloor#1\rfloor} \title{Fluctuations of the empirical quantiles of independent Brownian motions} \begin{abstract} We consider iid Brownian motions, $B_j(t)$, where $B_j(0)$ has a rapidly decreasing, smooth density function $f$. The empirical quantiles, or pointwise order statistics, are denoted by $B_{j:n}(t)$, and we consider a sequence $Q_n(t) = B_{j(n):n}(t)$, where $j(n)/n\to\alpha\in(0,1)$. This sequence converges in probability to $q(t)$, the $\alpha$-quantile of the law of $B_j(t)$. We first show convergence in law in $C[0,\infty)$ of $F_n=n^{1/2}(Q_n-q)$. We then investigate properties of the limit process $F$, including its local covariance structure, and H\"older-continuity and variations of its sample paths. In particular, we find that $F$ has the same local properties as fBm with Hurst parameter $H=1/4$. 
\noindent{\bf AMS subject classifications:} Primary 60F05; secondary 60F17, 60G15, 60G17, 60G18, 60J65 \noindent{\bf Keywords and phrases:} quantile process, order statistics, fluctuations, weak convergence, fractional Brownian motion, quartic variation \end{abstract} \section{Introduction}\label{S:intro} The classical example of a Brownian motion occurring in nature is the pollen grain on the surface of the water. Bombarded by the much smaller water molecules that surround it, the pollen grain embarks on a random walk with numerous, tiny steps. Applying a law of large numbers type scaling to its path leaves it sitting right where it is, since its trajectory has mean zero. But applying a central limit theorem type scaling, which exposes its fluctuations, reveals the Brownian path that we observe in nature. If we observe a large collection of pollen grains, and approximate their density with a continuous function, then we might expect this density function to evolve according to the diffusion equation. In this setting, the diffusion equation can be derived by assuming that each individual pollen grain moves as an independent Brownian motion. However, if the pollen grains are close to one another, as they should be if we are approximating their density with a continuous function, then their motions are certainly not independent. They are interacting with one another through collisions. Moreover, the collisions would evidently provide each individual pollen grain with a drift, pushing it toward regions where the pollen density is lower. In other words, the individual pollen grains would not be performing Brownian motions. The question then arises, what do the trajectories of these colliding pollen grains look like on the mesoscopic scale (that is, under central limit theorem type scaling)? In this paper, we consider the simpler, one-dimensional question of the scaling limit of colliding Brownian motions on the real line. 
To motivate our model of the collision process, first consider two physical particles with equal mass and velocities $v_1$ and $v_2$. If these particles have an elastic collision, then they will exchange velocities. The effect of this will be that they exchange trajectories at the moment of collision. If $x_1(t)$ and $x_2(t)$ describe their trajectories without any interaction, then their trajectories in the presence of collisions will be $\min\{x_1(t),x_2(t)\}$ and $\max\{x_1(t),x_2(t)\}$. If we extend this reasoning to $n$ particles, then we are led naturally to consider the empirical quantiles, or order statistics, of the non-interacting trajectories, $x_1(t),\ldots,x_n(t)$.

Two concepts, therefore, which are central to our approach, are the notions of an $\alpha$-quantile of a probability measure, and the order statistics of a family of random variables. Let $\nu$ be a probability measure on $\mathbb{R}$ with distribution function $\Phi_\nu (x)=\nu((-\infty,x])$. Given $\alpha\in(0,1)$, an $\alpha$-quantile of $\nu$ is any number $q$ such that $\Phi_\nu(q-)\le\alpha\le\Phi_\nu(q)$. It is easy to see that $\nu$ has at least one $\alpha$-quantile, given by $q=\inf\{x:\alpha\le \Phi_\nu(x)\}$. It is also clear that if $\Phi_\nu$ is continuous, then $\Phi_\nu(q)=\alpha$ for any $\alpha$-quantile, $q$.

Given random variables, $X_1,\ldots,X_n$, let $\sigma$ be a (random) permutation of the set $\{1,\ldots,n\}$ such that $X_{\sigma(1)} \le\cdots\le X_{\sigma(n)}$ a.s. For $1\le j\le n$, we define the $j$-th order statistic of $X=(X_1, \ldots, X_n)$ to be $X_{\sigma(j)}$, and denote it by $X_{j:n}$. Note that $-X_{j:n} =(-X)_{(n-j+1):n}$.

Now fix $\alpha\in(0,1)$ and let $B$ be a one-dimensional Brownian motion with a random initial position. For simplicity in the calculations that are to come, we shall assume that $B(0)$ has a density function $f$ that is a Schwartz function.
That is, $f\in C^\infty(\mathbb{R})$ and \begin{equation}\label{Schwartz} \sup_{x\in\mathbb{R}}(1 + |x|^n)|f^{(m)}(x)| < \infty, \end{equation} for all nonnegative integers $n$ and $m$. We will also assume that $f(x)\,dx$ has a unique $\alpha$-quantile, $q(0)$, such that $f(q(0))>0$. Let $\{B_j\}$ be an iid sequence of copies of $B$. For fixed $n$, the trajectories $B_1,\ldots,B_n$ denote the paths of a system of $n$ particles with no interactions. When these particles are allowed to interact through collisions, their trajectories are given by the pointwise order statistics, or empirical quantile processes, $B_{1:n}, \ldots, B_{n:n}$. More specifically, $B_{j:n}$ is the process such that, for each $t\ge0$, $B_{j:n}(t)$ is the $j$-th order statistic of $(B_1(t), \ldots,B_n(t))$. Note that $B_{j:n}$ is a continuous process. We shall fix a sequence of integers $\{j(n)\}_{n=1}^\infty$ such that $1\le j(n)\le n$ and $j(n)/n=\alpha+o(n^{-1/2})$. We then consider the sequence of empirical quantile processes $\{Q_n\}$ given by $Q_n=B_{j(n):n}$. Our first result concerns the convergence in $C[0,\infty)$ of the sequence $\{Q_n\}$. To this end, we begin by defining the (deterministic) limit process, and show that it is continuous. Let $u(x,t)$ denote the density of $B(t)$. It is well-known that our assumptions on $f$ are sufficient to ensure that $u\in C^\infty (\mathbb{R}\times(0,\infty))$, $\partial_x^j u\in C(\mathbb{R}\times[0,\infty))$, and $\partial_x^j u(x,0) =f^{(j)}(x)$ for all $j\ge0$. Moreover, $\partial_x^j u(x,t)=E^x[f^{(j)}(B(t))]$ and $u$ satisfies the diffusion equation, $\partial_t u = (1/2)\partial_x^2 u$. For each $t>0$, the function $u(\cdot,t)$ is strictly positive, which implies that $u(x,t)\,dx$ has a unique $\alpha$-quantile, $q(t)$. The following lemma is easily derived by differentiating the defining equation for $q(t)$; its proof is given in the appendix. 
\begin{lemma}\label{L:quant_ODE} The function $q$ is in $C[0,\infty)\cap C^\infty (0, \infty)$ and satisfies \begin{equation}\label{quant_ODE} q'(t) = -\frac{\partial_x u(q(t),t)}{2u(q(t),t)} \end{equation} for all $t>0$. \end{lemma} \begin{remark} Let $C^k[0,\infty)$ denote the space of functions $g:[0,\infty)\to \mathbb{R}$ such that $g^{(j)}$ has a continuous extension to $[0,\infty)$, for all $j\le k$. Also, let $C^\infty[0,\infty)=\cap_{k\ge0}C^k[0, \infty)$. It is easy to see from Lemma \ref{L:quant_ODE} that since $u(q(t),t)>0$ for all $t\ge0$, and $\partial_x^j u\in C (\mathbb{R}\times[0, \infty))$, it follows that $q\in C^\infty[0,\infty)$. That is, $q^{(k)}$ has a continuous extension to $[0,\infty)$ for all $k$. \end{remark} \begin{remark} The diffusive flux at time $t$, in the positive spatial direction, across an arbitrary moving boundary $s(t)$ is given by $-(1/2)\partial_x u(s(t),t)-u(s(t),t)s'(t)$. The first term corresponds to Fick's first law of diffusion, and the second term comes from the motion of the boundary. By \eqref{quant_ODE}, it follows that $q(t)$ is the unique trajectory starting at $q(0)$, across which there is no diffusive flux. The vanishing of the flux can also be seen by noting that \[ \int_{-\infty}^{q(t)}u(x,t)\,dx = \alpha, \] which follows from the definition of $q(t)$. \end{remark} It will follow from later results that $Q_n\to q$ in probability in $C[0,\infty)$. Our primary interest, however, is with the fluctuations, $F_n=n^{1/2}(Q_n-q)$. Our objective in this paper is twofold: (i) to establish the convergence in law in the space $C[0,\infty)$ of the sequence $F_n$ to a limit process $F$, and (ii) to investigate properties of the limit process $F$, including its local covariance structure, and the H\"older continuity and variations of its sample paths. Regarding (i), we establish the following result, whose proof can be found in Section \ref{S:param_est}. (The notation $X_n\Rightarrow X$ means $X_n\to X$ in law.) 
\begin{thm}\label{T:main} There exists a continuous, centered Gaussian process $F$ with covariance function \begin{equation}\label{rho_def} \rho(s,t) = \frac{P(B(s) \le q(s), B(t) \le q(t)) - \alpha^2} {u(q(s),s)u(q(t),t)}, \end{equation} such that $F_n\Rightarrow F$ in $C[0,\infty)$. \end{thm} It is worth pointing out here that the limiting process $F$ is not deterministic at time $t=0$. In fact, $E|F(0)|^2 = (\alpha - \alpha^2) f(q(0))^{-2}$. Regarding (ii), we derive, in Section \ref{S:limit_props}, several key properties of the limit process $F$. We will show that, on compact time intervals, $E|F(t)-F(s)|^2$ is bounded above and below by constant multiples of $|t-s|^{1/2}$. In particular, the paths of $F$ are almost surely locally H\"older continuous with exponent $\gamma$ for all $\gamma<1/4$. (See Corollaries \ref{C:var_order_0} and \ref{C:Holder}.) We will show that $F$ is locally anti-persistent. More specifically, nearby small increments of size $\Delta t$ have a negative covariance whose order of magnitude is $-|t-s|^{-3/2}\Delta t^2$. (See Corollaries \ref{C:IV.1} and \ref{C:IV.2}.) We will also show that $F$ has a nontrivial, deterministic quartic variation, given by $(6/\pi)\int_0^t |u(q(s),s)|^{-2}\,ds$. (See Theorem \ref{T:quart_var}.) All of these are local properties that $F$ shares with $B^{1/4}$, the fractional Brownian motion (fBm) with Hurst parameter $H=1/4$. (Recall that fBm, $B^H$, is a centered Gaussian process with $B^H(0)=0$ and $E|B^H(t) -B^H(s)|^2=|t-s|^{2H}$, where $H\in(0,1)$.) These local properties are also shared with the solution to the one-dimensional stochastic heat equation, which was recently studied in \cite{Sw07.1} and \cite{BS}. On the other hand, the global properties of $F$ can be quite different from $B^{1/4}$, which we will illustrate at the end of Section \ref{S:limit_props}, via the special case where $f(x)\,dx$ is a standard Gaussian distribution. A model similar to this was studied in \cite{Ha}. 
That model consists of a countably infinite collection of real-valued stochastic processes $x_i(t)$, $i\in\mathbb{Z}$, $t\ge0$. The points $\{x_i(0)\}_{i\in\mathbb{Z}}$ form a unit Poisson process on the real line, conditioned to have a point at 0, and labeled so that \[ \cdots < x_{-2}(0) < x_{-1}(0) < x_0(0) = 0 < x_1(0) < x_2(0) < \cdots \] The processes $\{x_i(\cdot)-x_i(0)\}$ are independent, standard Brownian motions. This family of processes represents the motion of the particles without collisions. By counting upcrossings and downcrossings, the motion of a ``tagged particle'' in the collision system can be defined. Informally, the tagged particle $y(t)$ is defined as follows. It begins with $y(0)=x_0(0)$, and then continues as $y(t)=x_0(t)$ until the first time the path of $x_0$ intersects one of its neighbors, $x_1$ or $x_{-1}$. At this point, $y$ adopts the trajectory of the neighboring particle, and follows this trajectory until the first time it meets one of its two neighbors, and so on. Of course, when two Brownian particles meet, they intersect infinitely often immediately, which makes it difficult to carry out the above informal construction. This is why upcrossings and downcrossings are used instead. (See also \cite{DGL85}.) Note that, with only finitely many particles, the empirical quantiles serve as tagged particles, without any need for counting crossings. By the scaling property of Brownian motion, the process $n^{-1}y(n^2t)$ has the same law as a tagged particle in a system initially distributed according to a Poisson process with a density of $n$ particles per unit length. This is analogous to our centered empirical quantile process $Q_n(t)-q(t)$. Multiplying by $n^{1/2}$ shows that our process $F_n$ is analogous to $y_A(t)=y(At)/A^{1/4}$, where $A=n^2$. In \cite{Ha}, it is shown that $y_A(1)=y(A)/A^{1/4}\Rightarrow(2/\pi)^{1/4}N$ as $A\to \infty$, where $N$ is a standard normal random variable. 
This implies that for fixed $t>0$, \[ y_A(t) = t^{1/4}y_{At}(1) \Rightarrow t^{1/4}(2/\pi)^{1/4}N \overset{\mathcal{L}}{=} (2/\pi)^{1/4}B^{1/4}(t), \] where $\overset{\mathcal{L}}{=}$ denotes equality in law. In \cite{DGL85}, a substantially stronger result was proven, one case of which shows that $y_A \Rightarrow(2/\pi)^{1/4}B^{1/4}$ in $C[0,\infty)$. The proof of tightness in \cite{DGL85} relies heavily on the special properties of the initial Poisson distribution, under which the resulting particle system has a stationary distribution. Similar results hold for the simple symmetric exclusion process (SSEP) on the integer lattice. It has been known since the work of Arratia \cite{A} and Rost and Vares \cite{RVar} in the 1980s that the fluctuations of a tagged particle in a one-dimensional SSEP started in equilibrium converge, in the sense of finite-dimensional distributions (fdd), to fBm $B^{1/4}$. The proof of tightness, however, was established only recently by Peligrad and Sethuraman \cite{PS} in 2008. As with the proof in \cite{DGL85}, the proof of tightness for the tagged particle in \cite{PS} also relies on the initial distribution of the particles and does not extend to the non-equilibrium case. The motion of a tagged particle in a non-equilibrium SSEP was studied in Jara and Landim \cite{JL}. Theorem 2.6 in \cite{JL} establishes a result on the fluctuations of a tagged particle in SSEP which is analogous to our Theorem \ref{T:main}, except that in \cite{JL} convergence is established only in the sense of fdds. It should be noted that the proof methods in \cite{DGL85} do generalize to the non-equilibrium case when one considers the current process, rather than the tagged particle. In 2005, Sepp\"al\"ainen \cite{Se} studied the current process in a system of independent random walks in a non-equilibrium case. 
Tightness for this current process (for a certain restricted class of initial profiles) was proved by Kumar \cite{Kum} in 2008 by extending the proof of Proposition 5.7 in \cite{DGL85}. In \cite{Sw07.2}, we considered what is very nearly a special case of the present model. Suppose $B(0)=0$ a.s. and $j(n)=\flr{{(n+1)}/2}$, so that $Q_n$ is the median process, $\alpha=1/2$, and $q=0$. In that case, it was shown in \cite{Sw07.2} that $F_n\Rightarrow X$ in $C[0,\infty)$, where $X$ is a centered Gaussian process with covariance function \begin{equation}\label{med_covar} E[X(s)X(t)] = \sqrt{st}\sin^{-1}\left(\frac{s\wedge t}{\sqrt{st}}\right). \end{equation} The main difficulty in establishing this result was the proof of tightness. Unlike \cite{DGL85}, we did not have the benefit of the Poisson initial distribution. Instead, we proved tightness by making direct estimates on $P(|Q_n(t)-Q_n(s)|\ge n^{-1/2}\varepsilon)$. The method of estimation was essentially a four-step procedure. First, we established a connection between this probability and a certain random walk. Second, we derived estimates for probabilities associated with this random walk -- estimates which depend on the parameters of the walk. Third, we estimated those parameters in terms of the dynamics of the original model. And fourth, we put all of this together to get separate estimates on $P(|Q_n(t)-Q_n(s)|\ge n^{-1/2}\varepsilon)$, depending on whether $n^{-1/2} \varepsilon\ll|t-s|^{1/2}$, $n^{-1/2} \varepsilon\approx|t-s|^{1/2}$, or $n^{-1/2} \varepsilon\gg |t-s|^{1/2}$. Regarding the convergence portion of the present paper, our proof of tightness is motivated by this rough outline. It is our hope that the new ideas and techniques developed here in relation to tightness, as well as the methods employed to study the properties of the limit process, will be applicable to more general colliding particle models. 
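As a quick numerical sanity check on the covariance \eqref{med_covar}, note that at $s=t=1$ it gives $E|X(1)|^2=\sin^{-1}(1)=\pi/2$. The following Python sketch (purely illustrative, and not part of the paper's argument; the particle count $n=201$, the trial count, and the seed are arbitrary choices) simulates the median of $n$ standard Brownian motions at time $1$ and compares the sample variance of $F_n(1)=\sqrt{n}\,Q_n(1)$ with $\pi/2$.

```python
import math
import random
import statistics

# Monte Carlo check of E|X(1)|^2 = arcsin(1) = pi/2 in the median case
# (B(0) = 0, alpha = 1/2, q = 0).  The values of n and trials are
# illustrative choices, not taken from the paper.
rng = random.Random(0)
n, trials = 201, 3000          # n odd, so the median is the middle order statistic

samples = []
for _ in range(trials):
    # At the single time t = 1, each non-interacting path has B_j(1) ~ N(0, 1),
    # and the empirical quantile process reduces to an order statistic.
    values = [rng.gauss(0.0, 1.0) for _ in range(n)]
    q_n = statistics.median(values)      # Q_n(1) with j(n) = (n + 1)/2
    samples.append(math.sqrt(n) * q_n)   # F_n(1) = n^{1/2} Q_n(1)

var_est = statistics.pvariance(samples)
print(var_est, math.pi / 2)  # the two numbers should be close
```

As $n$ and the trial count grow, the estimate converges to $\pi/2$; the residual discrepancy is Monte Carlo error together with an $O(1/n)$ finite-sample bias in the variance of the sample median.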
\section{Finite-dimensional distributions}\label{S:fdd} We summarize here the definitions and assumptions from Section \ref{S:intro}. Let $B$ be a Brownian motion such that $B(0)$ has a density function $f\in C^\infty$ satisfying \eqref{Schwartz} for all nonnegative integers $n$ and $m$. Let $\{j(n)\}_{n=1}^\infty$ be a sequence of integers satisfying $1\le j(n)\le n$ and $j(n)/n=\alpha+o(n^{-1/2})$, where $\alpha\in(0,1)$. Assume the probability measure $f(x)\,dx$ has a unique $\alpha$-quantile, $q(0)$, satisfying $f(q(0))>0$. Let $\{B_j\}$ be a sequence of iid copies of $B$. We define $F_n=n^{1/2} (Q_n-q)$, where $Q_n=B_{j(n):n}$ and $q(t)$ is the unique $\alpha$-quantile of the law of $B(t)$. The fact that $F_n\in C[0,\infty)$ follows from Lemma \ref{L:quant_ODE}. In this section, we begin the proof of Theorem \ref{T:main} by noting that the convergence of the fdds follows as an immediate corollary of a multi-dimensional quantile central limit theorem, which is stated below as Theorem \ref{T:quant_CLT}. The proof of the quantile CLT is a straightforward exercise in the application of the Lindeberg-Feller theorem, and is given in the appendix. To state the quantile CLT, we first need some preliminaries. Let $X$ be an $\mathbb{R}^d$-valued random variable. It will be convenient to denote vector components with function notation, $X=(X(1),\ldots,X(d))$. Let $\Phi_j(x) =P(X(j)\le x)$ and $G_{ij}(x,y)=P(X(i)\le x,X(j)\le y)$. Fix $\alpha\in(0,1)^d$ and assume there exists $q\in\mathbb{R}^d$ such that $\Phi_j(q(j))=\alpha(j)$ for all $j$. Also assume that, for all $i$ and $j$, $\Phi_j'(q(j))$ exists and is strictly positive, and that $G_{ij}$ is continuous at $(q(i),q(j))$. Let $\{X_n\}$ be a sequence of iid copies of $X$. If $\kappa\in\{1,\ldots,n\}^d$, then $X_{\kappa:n}\in\mathbb{R}^d$ shall denote the componentwise order statistics of $X_1,\ldots,X_n$. That is, $X_{\kappa:n}(j)$ is the $\kappa(j)$-th order statistic of $(X_1(j),\ldots,X_n(j))$. 
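To make the componentwise notation concrete, here is a toy computation in Python (the sample vectors are arbitrary, and the helper function is ours, introduced only for illustration): with $d=2$, $n=3$, and $\kappa=(2,3)$, the vector $X_{\kappa:n}$ collects the second order statistic of the first coordinates and the third order statistic of the second coordinates.

```python
# Componentwise order statistics: X_{kappa:n}(j) is the kappa(j)-th smallest
# value among X_1(j), ..., X_n(j).  The data below are arbitrary.
X = [(0.5, -0.2), (-1.0, 0.3), (1.7, 2.0)]   # X_1, X_2, X_3 in R^2
kappa = (2, 3)

def componentwise_order_stat(X, kappa):
    d = len(kappa)
    return tuple(sorted(x[j] for x in X)[kappa[j] - 1] for j in range(d))

print(componentwise_order_stat(X, kappa))  # -> (0.5, 2.0)
```

Note that the result $(0.5, 2.0)$ takes its first coordinate from $X_1$ and its second from $X_3$; in general $X_{\kappa:n}$ need not coincide with any single sample vector.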
\begin{thm}\label{T:quant_CLT} With the above notation and assumptions, define the matrix $\sigma\in\mathbb{R}^{d\times d}$ by \begin{equation}\label{quant_CLT} \sigma_{ij} = \frac{G_{ij}(q(i),q(j)) - \alpha(i)\alpha(j)} {\Phi_i'(q(i))\Phi_j'(q(j))}. \end{equation} If $\kappa(n)=(\kappa(n,1),\ldots,\kappa(n,d))\in\{1,\ldots,n\}^d$ satisfies $\kappa(n)/n=\alpha+o(n^{-1/2})$, then $n^{1/2}(X_{\kappa(n):n}-q)\Rightarrow N$, where $N$ is multinormal with mean zero and covariance matrix $\sigma$. \end{thm} \begin{corollary}\label{C:quant_CLT} There exists a centered Gaussian process $F$ with covariance function $\rho$, given by \eqref{rho_def}, such that $F_n\to F$ in the sense of finite-dimensional distributions. \end{corollary} \noindent{\bf Proof.} Given $0\le t_1<\cdots<t_d$, apply Theorem \ref{T:quant_CLT} with $X =(B(t_1),\ldots,B(t_d))$ and $\kappa(n)=(j(n),j(n),\ldots,j(n))$. $\Box$ \begin{remark} Note that one of the assumptions in the above theorem is that $\Phi_j'(q(j))$ exists and is strictly positive. In particular, $\Phi_j$ must be continuous at $q(j)$. That is, the theorem does not apply when the distribution of $X(j)$ has a jump at the quantile point $q(j)$. This, however, is never the case for the random variables considered in this paper, since for any initial distribution, the solution of the heat equation is continuous for any positive time. \end{remark} \begin{remark} To give some limited intuitive sense to \eqref{quant_CLT}, let us consider what it implies for stochastic processes. Let $\{X_n(t): t\in T\}_{n=1}^\infty$ be a sequence of iid, measurable, real-valued copies of a stochastic process $X(t)$, parameterized by some general index set $T$. Let $\Phi(x,t) =P(X(t)\le x)$. Suppose that $\Phi(\cdot,t)$ is continuous and strictly increasing, and, for $\alpha\in(0,1)$, define $q(\alpha,t)$ so that $\Phi(q(\alpha,t),t)=\alpha$. 
Let $\kappa:(0,1)\times\mathbb{N}\to\mathbb{N}$ satisfy $\kappa(\alpha,n) \in\{1,\ldots,n\}$ and $\kappa(\alpha,n)=\alpha n+o(n^{1/2})$ for each fixed $\alpha$. Let $X_{\kappa:n}(t)$ denote the $\kappa$-th order statistic of $X_1(t), \ldots,X_n(t)$, and define $Q_n(\alpha,t) =X_{\kappa(\alpha,n):n} (t)$. Let $F_n(\alpha,t)=n^{1/2} (Q_n(\alpha,t)-q(\alpha,t))$. From Theorem \ref{T:quant_CLT}, we find that if we make the additional assumptions that $\partial_x\Phi(x,t)>0$ for all $x$ such that $0<\Phi(x,t)<1$, and that the functions $(x,y)\mapsto P(X(s)\le x,X(t)\le y)$ are continuous for each fixed $s$ and $t$, then the fdds of the two-parameter processes $F_n$ converge to those of a centered Gaussian process $F$ on $(0,1)\times T$ with covariance function \begin{equation}\label{cov_spec} E[F(\alpha,s)F(\beta,t)] = \frac{P(X(s)\le q(\alpha,s), X(t)\le q(\beta,t)) - \alpha\beta} {\partial_x\Phi(q(\alpha,s),s)\partial_x\Phi(q(\beta,t),t)}. \end{equation} In this paper, we are considering not only the special case in which the processes are Brownian motions, but also the special case in which $\alpha$ is fixed. If we fix $t$, however, then a familiar process emerges. Namely, for fixed $t$, the process $\alpha\mapsto F(\alpha,t)$ has covariance function \begin{equation}\label{bbcov} E[F(\alpha,t)F(\beta,t)] = \frac{\alpha \wedge \beta - \alpha\beta} {\partial_x\Phi(q(\alpha,t),t)\partial_x\Phi(q(\beta,t),t)}. \end{equation} In other words, $\partial_x\Phi(q(\cdot,t),t)F(\cdot,t)$ is a Brownian bridge. This phenomenon stems from the relationship between the quantiles and the empirical processes. That is, let $\Phi^n(x,t) =\frac1n\sum_{j=1}^n1_{\{X_j(t)\le x\}}$. Then \[ n^{1/2}(\Phi^n(Q_n(\alpha,t),t) - \Phi(q(\alpha,t),t)) = n^{1/2}(n^{-1}\kappa(\alpha,n) - \alpha) = o(1). 
\] Hence, under suitable assumptions on $\Phi(\cdot,t)$ and its derivatives, we may use a Taylor polynomial for $\Phi(\cdot,t)$ about $q(\alpha,t)$, obtaining \begin{multline}\label{qerel} n^{1/2}(\Phi^n(Q_n(\alpha,t),t) - \Phi(Q_n(\alpha,t),t)) = -n^{1/2}(\Phi(Q_n(\alpha,t),t) - \Phi(q(\alpha,t),t)) + o(1)\\ = -\partial_x\Phi(q(\alpha,t),t)F_n(\alpha,t) + n^{-1/2}|F_n(\alpha,t)|^2 O(1) + o(1). \end{multline} For fixed $t$, it is well-known that $n^{1/2} (\Phi^n(\cdot,t) - \Phi(\cdot,t))$ converges to $B^\circ(\Phi(\cdot,t))$, where $B^\circ$ is a Brownian bridge (see Billingsley \cite{Bi}, for example). Hence, the left-hand side of \eqref{qerel} converges to $B^\circ(\Phi(q(\alpha,t),t)) = B^\circ (\alpha)$, which explains the covariance function \eqref{bbcov}. The connection to the empirical processes given by \eqref{qerel} is still valid, of course, even when $t$ is allowed to vary. In this case, \eqref{qerel} and \eqref{cov_spec} show that the two-parameter fluctuation processes $n^{1/2}(\Phi^n-\Phi)\to V$ in the fdd sense, where $V$ is centered Gaussian with covariance \begin{equation}\label{Vcov} E[V(x,s)V(y,t)] = P(X(s)\le x,X(t)\le y) - P(X(s)\le x)P(X(t)\le y). \end{equation} At this point, it is interesting to compare this with the results in Martin-L{\"o}f \cite{Mar}. When $X$ is a Markov process, \cite{Mar} considers the convergence, in the fdd sense, of fluctuations such as these, although for a slightly different model in which we have $N\sim\text{Poisson}(n)$ particles instead of $n$ particles. This subtle difference produces slightly different fdds in the limit. (For example, in the fixed $t$ case, the fluctuations of the empirical distributions of a Poisson number of variables converge to a Brownian motion rather than a Brownian bridge, which can be seen either from the heuristic demonstration below, or as a special case of Corollary 2 in \cite{Mar}.) More specifically, let $\widetilde\Phi^n(x,t)=\frac1n\sum_{j=1}^N1_{\{X_j(t)\le x\}}$. 
Note that \[ n^{1/2}(\widetilde\Phi^n - \Phi) = (N/n)^{1/2}N^{1/2}(\Phi^N - \Phi) + n^{1/2}(N/n - 1)\Phi. \] Hence, $n^{1/2}(\widetilde\Phi^n-\Phi)\to\widetilde V$ in the fdd sense, where $\widetilde V = V + Z\Phi$, and $Z$ is a standard normal random variable, independent of $V$. Using \eqref{Vcov}, it follows that $\widetilde V$ is centered Gaussian with covariance \[ E[\widetilde V(x,s)\widetilde V(y,t)] = P(X(s)\le x,X(t)\le y). \] This, of course, is precisely the result we obtain from Corollary 2 in \cite{Mar}. \end{remark} \section{Properties of the limit process}\label{S:limit_props} Before proceeding to the proof of tightness, we first establish some of the properties of the limit process. In what follows, $C$ shall denote a positive, finite constant that may change value from line to line. \begin{thm}\label{T:IV} Recall $\rho$ given by \eqref{rho_def}. For each $T>0$, there exist positive constants $\delta,C_1,C_2$ such that whenever $0<s<t\le T$ and $|t-s|<\delta$, we have \begin{enumerate}[(i)] \item $C_1|t - s|^{-1/2} \le \partial_s\rho(s,t) \le C_2|t - s|^{-1/2}$, \item $-C_2|t - s|^{-1/2} \le \partial_t\rho(s,t) \le -C_1|t - s|^{-1/2}$, and \item $-C_2|t - s|^{-3/2} \le \partial^2_{st}\rho(s,t) \le -C_1|t - s|^{-3/2}$. \end{enumerate} \end{thm} \noindent{\bf Proof.} Define $\widetilde F(t)=u(q(t),t)F(t)$ and note that $\widetilde F$ is a centered Gaussian process with covariance function \begin{equation}\label{rho_tilde} \widetilde\rho(s,t) = P(B(s) \le q(s), B(t) \le q(t)) - \alpha^2. \end{equation} We will first prove the theorem for $\widetilde\rho$ instead of $\rho$. For later purposes, it will be convenient at this time to introduce the auxiliary function \begin{equation}\label{zeta_def} \zeta(s,t,w,z) = \int_{-\infty}^{q(s)+w}\int_{-\infty}^{q(t)+z} p(t-s,x,y)\partial_x^iu(x,s)\,dy\,dx, \end{equation} where $i\ge0$ and $p(t,x,y)=(2\pi t)^{-1/2}e^{-(x-y)^2/2t}$. In this proof, we use only the case $i=0$. 
Later, in Section \ref{S:param_est}, we will make use of the general case for arbitrary $i$. We now compute that \begin{equation}\label{newIV.1} \begin{split} \partial_t\zeta &= q'(t)\int_{-\infty}^{q(s)+w} p(t-s,x,q(t)+z)\partial_x^i u(x,s)\,dx\\ &\qquad + \int_{-\infty}^{q(s)+w}\int_{-\infty}^{q(t)+z} \partial_t p(t-s,x,y)\partial_x^i u(x,s)\,dy\,dx\\ &= q'(t)\int_{-\infty}^{q(s)+w} p(t-s,x,q(t)+z)\partial_x^i u(x,s)\,dx\\ &\qquad + \frac12\int_{-\infty}^{q(t)+z}\int_{-\infty}^{q(s)+w} \partial_x^2 p(t-s,x,y)\partial_x^i u(x,s)\,dx\,dy. \end{split} \end{equation} Using integration by parts, \begin{multline*} \int_{-\infty}^{q(s)+w}\partial_x^2 p(t-s,x,y)\partial_x^i u(x,s)\,dx\\ = \partial_x p(t-s,q(s)+w,y)\partial_x^i u(q(s)+w,s) - p(t-s,q(s)+w,y)\partial_x^{i+1} u(q(s)+w,s)\\ + \int_{-\infty}^{q(s)+w}p(t-s,x,y)\partial_x^{i+2} u(x,s)\,dx. \end{multline*} Substituting this into \eqref{newIV.1} gives \begin{multline*} \partial_t\zeta = q'(t)\int_{-\infty}^{q(s)+w} p(t-s,x,q(t)+z)\partial_x^i u(x,s)\,dx\\ + \frac12\int_{-\infty}^{q(t)+z} \partial_x p(t-s,q(s)+w,y)\partial_x^i u(q(s)+w,s)\,dy\\ - \frac12\int_{-\infty}^{q(t)+z} p(t-s,q(s)+w,y)\partial_x^{i+1} u(q(s)+w,s)\,dy\\ + \frac12\int_{-\infty}^{q(t)+z}\int_{-\infty}^{q(s)+w} p(t-s,x,y)\partial_x^{i+2} u(x,s)\,dx\,dy. \end{multline*} We now use the fact that $\partial_xp(t,x,y)=-\partial_yp(t,x,y)$ to rewrite this as \begin{multline}\label{newIV.2} \partial_t\zeta = q'(t)\int_{-\infty}^{q(s)+w} p(t-s,x,q(t)+z)\partial_x^i u(x,s)\,dx\\ - \frac12 p(t-s,q(s)+w,q(t)+z)\partial_x^i u(q(s)+w,s)\\ - \frac12\partial_x^{i+1} u(q(s)+w,s) \int_{-\infty}^{q(t)+z}p(t-s,q(s)+w,y)\,dy\\ + \frac12\int_{-\infty}^{q(t)+z}\int_{-\infty}^{q(s)+w} p(t-s,x,y)\partial_x^{i+2} u(x,s)\,dx\,dy. 
\end{multline} Similarly, for the other partial derivative, we have \begin{multline*} \partial_s\zeta = q'(s)\int_{-\infty}^{q(t)+z} p(t-s,q(s)+w,y)\partial_x^i u(q(s)+w,s)\,dy\\ - \int_{-\infty}^{q(s)+w}\int_{-\infty}^{q(t)+z} \partial_t p(t-s,x,y)\partial_x^i u(x,s)\,dy\,dx\\ + \int_{-\infty}^{q(s)+w}\int_{-\infty}^{q(t)+z} p(t-s,x,y)\partial_t\partial_x^i u(x,s)\,dy\,dx. \end{multline*} By \eqref{newIV.1}, this becomes \begin{multline}\label{newIV.5} \partial_s\zeta = q'(s)\int_{-\infty}^{q(t)+z} p(t-s,q(s)+w,y)\partial_x^i u(q(s)+w,s)\,dy\\ - \partial_t\zeta + q'(t)\int_{-\infty}^{q(s)+w} p(t-s,x,q(t)+z)\partial_x^i u(x,s)\,dx\\ + \int_{-\infty}^{q(s)+w}\int_{-\infty}^{q(t)+z} p(t-s,x,y)\partial_t\partial_x^i u(x,s)\,dy\,dx. \end{multline} Substituting \eqref{newIV.2} into the above, and using $\partial_tu=(1/2)\partial_x^2u$ gives \begin{multline}\label{newIV.3} \partial_s\zeta = \frac12 p(t-s,q(s)+w,q(t)+z)\partial_x^i u(q(s)+w,s)\\ + \left({q'(s)\partial_x^i u(q(s)+w,s) + \frac12\partial_x^{i+1} u(q(s)+w,s)}\right) \int_{-\infty}^{q(t)+z} p(t-s,q(s)+w,y)\,dy. \end{multline} Now observe that, taking $i=0$, $\widetilde\rho(s,t) = \zeta(s,t,0,0) - \alpha^2$. Hence, by \eqref{newIV.2} and \eqref{quant_ODE}, \begin{multline*} \partial_t\widetilde\rho(s,t) = q'(t)\int_{-\infty}^{q(s)} p(t-s,x,q(t))u(x,s)\,dx - \frac12 p(t-s,q(s),q(t))u(q(s),s)\\ + u(q(s),s)q'(s)\int_{-\infty}^{q(t)}p(t-s,q(s),y)\,dy + \frac12\int_{-\infty}^{q(t)}\int_{-\infty}^{q(s)} p(t-s,x,y)\partial_x^2 u(x,s)\,dx\,dy, \end{multline*} and by \eqref{newIV.3} and \eqref{quant_ODE}, \begin{equation}\label{IV.3} \partial_s\widetilde\rho(s,t) = \frac12 p(t-s,q(s),q(t))u(q(s),s). \end{equation} Differentiating \eqref{IV.3} with respect to $t$ gives \begin{equation}\label{IV.4} \partial_{st}^2\widetilde\rho(s,t) = \frac12\partial_t p(t-s,q(s),q(t))u(q(s),s) + \frac12\partial_y p(t-s,q(s),q(t))u(q(s),s)q'(t). 
\end{equation} Part (i) now follows easily from \eqref{IV.3}, using the fact that $u(q(s),s)$ is continuous and strictly positive on $[0,\infty)$, and $|q(t)-q(s)|\le C|t-s|$ for all $s,t\in[0,T]$. Part (ii) will follow from (i), once we show that \begin{equation}\label{IV.9} |\partial_s\widetilde\rho(s,t) + \partial_t\widetilde\rho(s,t)| \le C. \end{equation} By \eqref{newIV.5}, \begin{multline}\label{IV.5} \partial_s\widetilde\rho(s,t) + \partial_t\widetilde\rho(s,t) = q'(s)\int_{-\infty}^{q(t)} p(t-s,q(s),y)u(q(s),s)\,dy\\ + q'(t)\int_{-\infty}^{q(s)} p(t-s,x,q(t))u(x,s)\,dx + \int_{-\infty}^{q(s)}\int_{-\infty}^{q(t)} p(t-s,x,y)\partial_t u(x,s)\,dy\,dx. \end{multline} Note that \begin{equation}\label{IV.6} \left|{q'(s)\int_{-\infty}^{q(t)} p(t-s,q(s),y)u(q(s),s)\,dy}\right| = |q'(s)|u(q(s),s)P^{q(s)}(B(t-s) \le q(t)) \le C. \end{equation} Next, by the Markov property, $u(y,t) = \int_\mathbb{R} p(t-s,x,y)u(x,s)\,dx$. Hence, \begin{equation}\label{IV.7} \left|{q'(t)\int_{-\infty}^{q(s)}p(t-s,x,q(t))u(x,s)\,dx}\right| \le |q'(t)|u(q(t),t) \le C. \end{equation} Finally, note that $\partial_t u(x,t)=(1/2)\partial_x^2u(x,t)=(1/2)E^x[f''(B(t))]$. Therefore, \begin{multline*} \int_{-\infty}^{q(s)}\int_{-\infty}^{q(t)} p(t-s,x,y)\partial_t u(x,s)\,dy\,dx\\ = \frac12\int_{-\infty}^{q(s)}\int_{-\infty}^{q(t)}\int_\mathbb{R} p(t-s,x,y)p(s,x,z)f''(z)\,dz\,dy\,dx\\ = \frac12\int_\mathbb{R} P^z(B(s)\le q(s),B(t)\le q(t))f''(z)\,dz, \end{multline*} which implies \begin{equation}\label{IV.8} \left|{\int_{-\infty}^{q(s)}\int_{-\infty}^{q(t)} p(t-s,x,y)\partial_t u(x,s)\,dy\,dx}\right| \le \frac12\|f''\|_1. \end{equation} Applying \eqref{IV.6}, \eqref{IV.7}, and \eqref{IV.8} to \eqref{IV.5} establishes \eqref{IV.9}, and completes the proof of (ii). 
Part (iii) will follow from \eqref{IV.4}, once we establish that for $|t-s|$ sufficiently small, \begin{align} -C_2|t - s|^{-3/2} \le \partial_t p(t-s,q(s),q(t)) &\le -C_1|t - s|^{-3/2},\text{ and}\label{IV.10}\\ |\partial_y p(t-s,q(s),q(t))| &\le C_2|t - s|^{-1/2}.\label{IV.11} \end{align} First, note that \[ \partial_t p(t,x,y) = -\frac1{2\sqrt{2\pi}}e^{-(x-y)^2/2t} \left({1 - \frac{(x-y)^2}t}\right)\frac1{t^{3/2}}. \] Since $|q(t)-q(s)|\le C|t-s|$, it follows that for $|t-s|$ sufficiently small, $\partial_t p(t-s,q(s),q(t))<0$ with $|\partial_t p(t-s,q(s),q(t))|\le C|t-s|^{-3/2}$, which establishes the lower bound in \eqref{IV.10}. For the upper bound, note that $e^{-(q(t)-q(s))^2/2(t-s)}\ge e^{-C|t-s|}\ge e^{-CT}$. Hence, $|\partial_t p(t-s,q(s),q(t))| \ge C|t-s|^{-3/2}$ when $|t-s|$ is small enough. Finally, we observe that \[ |\partial_y p(t,x,y)| = \frac1{\sqrt{2\pi}}\frac{|x-y|}{t^{3/2}} e^{-(x-y)^2/2t} \le |x - y|t^{-3/2}, \] so that \eqref{IV.11} follows as above from $|q(t)-q(s)|\le C|t-s|$. This completes the proof of the theorem when $\rho$ is everywhere replaced by $\widetilde\rho$. We now prove the theorem as stated. By \eqref{rho_def}, $\rho(s,t)=\theta(s)\theta(t)\widetilde\rho(s,t)$, where $\theta(t) = (u(q(t),t))^{-1}$. Note that $\theta\in C^1[0,\infty)$ and \begin{align*} \partial_s\rho(s,t) &= \theta'(s)\theta(t)\widetilde\rho(s,t) + \theta(s)\theta(t)\partial_s\widetilde\rho(s,t),\\ \partial_t\rho(s,t) &= \theta(s)\theta'(t)\widetilde\rho(s,t) + \theta(s)\theta(t)\partial_t\widetilde\rho(s,t),\\ \partial_{st}^2\rho(s,t) &= \theta'(s)\theta'(t)\widetilde\rho(s,t) + \theta'(s)\theta(t)\partial_t\widetilde\rho(s,t) + \theta(s)\theta'(t)\partial_s\widetilde\rho(s,t) + \theta(s)\theta(t)\partial_{st}^2\widetilde\rho(s,t), \end{align*} from which (i), (ii), and (iii) follow. $\Box$ \begin{corollary}\label{C:var_order_0} Fix $T>0$. 
There exist positive constants $C_1,C_2$ such that \begin{equation}\label{var_order_0} C_1|t - s|^{1/2} \le E|F(t) - F(s)|^2 \le C_2|t - s|^{1/2} \end{equation} for all $s,t\in[0,T]$. \end{corollary} \noindent{\bf Proof.} Since $(s,t)\mapsto E|F(t)-F(s)|^2$ is continuous and strictly positive on $\{s\ne t\}$, it will suffice to show there exists $\delta_0>0$ such that the corollary holds whenever $|t-s|\le\delta_0$. Also, in the notation of the proof of Theorem \ref{T:IV}, since $F(t)=\theta(t) \widetilde F(t)$, where $\theta\in C^1 [0,\infty)$ is strictly positive, it will suffice to prove the corollary for $\widetilde F$. For this, observe that \begin{equation}\label{tilde_increment} E|\widetilde F(t) - \widetilde F(s)|^2 = \widetilde\rho(t,t) + \widetilde\rho(s,s) - 2\widetilde\rho(s,t) = 2(\widetilde\rho(t,t) - \widetilde\rho(s,t)), \end{equation} where $\widetilde\rho$ is given by \eqref{rho_tilde}. Hence, \[ E|\widetilde F(t) - \widetilde F(s)|^2 = 2\int_s^t \partial_s\widetilde\rho(u,t)\,du. \] Applying Theorem \ref{T:IV}(i) and the fact that $\int_s^t |t-u|^{-1/2} \,du=2|t-s|^{1/2}$ finishes the proof. $\Box$ \begin{corollary}\label{C:Holder} The process $F$ has a modification which is locally H\"older continuous on $[0,\infty)$ with exponent $\gamma$ for any $\gamma<1/4$. \end{corollary} \noindent{\bf Proof.} By the Kolmogorov-\v Centsov theorem (see, for example, \cite{KSh}, Thm 2.2.8), if, for each $T>0$, there exist positive constants $\beta,r,C$ such that \[ E|F(t) - F(s)|^\beta \le C|t - s|^{1+r} \] for all $s,t\in[0,T]$, then $F$ has a continuous modification which is locally H\"older continuous with exponent $\gamma$ for all $\gamma<r/\beta$. Since $F$ is Gaussian, \eqref{var_order_0} implies $E|F(t)-F(s)|^p\le C_p|t-s|^{p/4}$. We may therefore take $\beta=p$ and $r = p/4-1$. Letting $p\to\infty$ completes the proof. 
$\Box$ \begin{corollary}\label{C:IV.1} For each $T>0$, there exist positive constants $\delta,C_1,C_2$ such that \[ -C_2|t - s|^{-1/2}\Delta t \le E[F(s)(F(t+\Delta t) - F(t))] \le -C_1|t + \Delta t - s|^{-1/2}\Delta t, \] for all $0\le s<t\le T$ and $\Delta t>0$ such that $|t-s|<\delta$ and $\Delta t<\delta$. \end{corollary} \noindent{\bf Proof.} We write \[ E[F(s)(F(t+\Delta t) - F(t))] = \rho(s,t+\Delta t) - \rho(s,t) = \int_t^{t+\Delta t} \partial_t\rho(s,u)\,du. \] Applying Theorem \ref{T:IV}(ii) and the fact that \[ |t + \Delta t - s|^{-1/2}\Delta t \le \int_t^{t+\Delta t} |u - s|^{-1/2}\,du \le |t - s|^{-1/2}\Delta t, \] finishes the proof. $\Box$ \begin{corollary}\label{C:IV.2} For each $T>0$, there exist positive constants $\delta,C_1,C_2$ such that \begin{multline*} -C_2|t - s|^{-3/2}\Delta s\Delta t \le E[(F(s) - F(s-\Delta s))(F(t+\Delta t) - F(t))]\\ \le -C_1|t + \Delta t - (s - \Delta s)|^{-3/2}\Delta s\Delta t, \end{multline*} for all $0\le s<t\le T$, $\Delta s>0$, and $\Delta t>0$ such that $|t-s|<\delta$, $\Delta s<\delta$, and $\Delta t<\delta$. \end{corollary} \noindent{\bf Proof.} We write \[ E[(F(s) - F(s-\Delta s))(F(t+\Delta t) - F(t))] = \int_{s-\Delta s}^s\int_t^{t+\Delta t} \partial^2_{st}\rho(u,v)\,dv\,du. \] Applying Theorem \ref{T:IV}(iii) and the fact that \[ |t + \Delta t - (s - \Delta s)|^{-3/2}\Delta s\Delta t \le \int_{s-\Delta s}^s\int_t^{t+\Delta t} |v - u|^{-3/2}\,dv\,du \le |t - s|^{-3/2}\Delta s\Delta t, \] finishes the proof. $\Box$ \begin{remark}\label{R:Ftilde} Note that in Corollaries \ref{C:IV.1} and \ref{C:IV.2}, since $\partial_t \rho(s,t)$ and $\partial_{st}^2\rho(s,t)$ are continuous away from $\{s=t\}$, we have \begin{align*} |E[F(s)(F(t+\Delta t) - F(t))]| &\le C|t - s|^{-1/2}\Delta t,\text{ and}\\ |E[(F(s) - F(s-\Delta s))(F(t+\Delta t) - F(t))]| &\le C|t - s|^{-3/2}\Delta s\Delta t, \end{align*} for all $0\le s<t\le T$, $\Delta s>0$, and $\Delta t>0$. 
Also note that Theorem \ref{T:IV} and Corollaries \ref{C:var_order_0}-\ref{C:IV.2} all apply to $\widetilde F(t)=u(q(t),t)F(t)$ as well. \end{remark} As an application of these estimates on the local covariance structure of the increments of $F$, we now show that $F$ has a finite, nonzero, deterministic quartic variation. \begin{thm}\label{T:quart_var} If \[ V_\Pi(t) = \sum_{0<t_j\le t} |F(t_j) - F(t_{j-1})|^4, \] where $\Pi=\{0=t_0<t_1<t_2<\cdots\}$ is a partition of $[0,\infty)$ with $t_j\uparrow\infty$, then for all $T>0$, \[ \lim_{|\Pi|\to0} E\bigg[\sup_{0\le t\le T} \bigg|V_\Pi(t) - \frac6\pi\int_0^t|u(q(s),s)|^{-2}\,ds\bigg|^2\bigg] = 0, \] where $|\Pi|=\sup_j|t_j-t_{j-1}|$. \end{thm} \noindent{\bf Proof.} We again adopt the notation of the proof of Theorem \ref{T:IV}. Let $\theta(t)=(u(q(t),t))^{-1}$ and recall that $\theta\in C^1[0,\infty)$ is strictly positive, and that $F(t)=\theta(t)\widetilde F(t)$. Since $V_\Pi$ is monotone, it suffices by Dini's theorem to show that \[ V_\Pi(t) \to \frac6\pi\int_0^t\theta(s)^2\,ds, \] in $L^2$ for each fixed $t>0$. For any stochastic process $X$, we adopt the notation $\Delta X_j=X(t_j)-X(t_{j-1})$. We will also write $\Delta t_j=t_j-t_{j-1}$. Hence, \[ \Delta F_j = \theta(t_j)\Delta\widetilde F_j + \widetilde F(t_{j-1})\Delta\theta_j, \] which implies $\Delta F_j^4 = \theta(t_j)^4\Delta\widetilde F_j^4 + R_j$, where \[ |R_j| \le C\sum_{i=1}^4 |\widetilde F(t_{j-1})|^i\Delta t_j^i\Delta\widetilde F_j^{4-i}. \] Thus, \[ E\bigg|\sum_{0<t_j\le t} R_j\bigg|^2 \le C\sum_{i=1}^4 E\bigg|\sum_{0<t_j\le t} |\widetilde F(t_{j-1})|^i\Delta t_j^i\Delta\widetilde F_j^{4-i}\bigg|^2. \] Note that Corollary \ref{C:var_order_0} also holds for $\widetilde F$. 
Thus, using H\"older's inequality, we have \begin{align*} E\bigg|\sum_{0<t_j\le t} R_j\bigg|^2 &\le C\sum_{i=1}^4 E\bigg[\bigg(\sum_{0<t_j\le t}\Delta t_j^2\bigg) \bigg(\sum_{0<t_j\le t} |\widetilde F(t_{j-1})|^{2i}\Delta t_j^{2(i-1)}\Delta\widetilde F_j^{2(4-i)} \bigg)\bigg]\\ &= C\sum_{i=1}^4 \bigg(\sum_{0<t_j\le t}\Delta t_j^2\bigg) \bigg(\sum_{0<t_j\le t} \Delta t_j^{2(i-1)}E[|\widetilde F(t_{j-1})|^{2i}\Delta\widetilde F_j^{2(4-i)}] \bigg)\\ &\le C\sum_{i=1}^4 \bigg(\sum_{0<t_j\le t}\Delta t_j^2\bigg) \bigg(\sum_{0<t_j\le t} \Delta t_j^{2(i-1)}(E|\widetilde F(t_{j-1})|^{4i})^{1/2} (E\Delta\widetilde F_j^{4(4-i)})^{1/2} \bigg)\\ &\le C\sum_{i=1}^4 \bigg(\sum_{0<t_j\le t}\Delta t_j^2\bigg) \bigg(\sum_{0<t_j\le t} \Delta t_j^{2i-2}(E\Delta\widetilde F_j^{4(4-i)})^{1/2} \bigg). \end{align*} As in the proof of Corollary \ref{C:Holder}, since $\widetilde F$ is Gaussian, \eqref{var_order_0} implies $E\Delta\widetilde F_j^{4(4-i)} \le C_i\Delta t_j^{4-i}$. Thus, \[ E\bigg|\sum_{0<t_j\le t} R_j\bigg|^2 \le C\sum_{i=1}^4 \bigg(\sum_{0<t_j\le t} \Delta t_j^2\bigg) \bigg(\sum_{0<t_j\le t} \Delta t_j^{3i/2}\bigg) \to 0, \] as $|\Pi|\to0$. It therefore suffices to show that \[ \sum_{0<t_j\le t} \theta(t_j)^4\Delta\widetilde F_j^4 \to \frac6\pi\int_0^t\theta(s)^2\,ds, \] in $L^2$ as $|\Pi|\to0$. By \eqref{tilde_increment}, \begin{equation}\label{quart_var.1} E\Delta\widetilde F_j^2 = 2(\widetilde\rho(t_j,t_j) - \widetilde\rho(t_{j-1},t_j)) = 2\int_0^{\Delta t_j}\partial_s\widetilde\rho(t_j-\varepsilon,t_j)\,d\varepsilon. \end{equation} By \eqref{IV.3}, \begin{multline*} \partial_s\widetilde\rho(t_j-\varepsilon,t_j) = \frac12 p(\varepsilon,q(t_j-\varepsilon),q(t_j)) u(q(t_j-\varepsilon),t_j-\varepsilon)\\ = \frac1{2\sqrt{2\pi}}\,\varepsilon^{-1/2}e^{-(q(t_j)-q(t_j-\varepsilon))^2/2\varepsilon} u(q(t_j-\varepsilon),t_j-\varepsilon).
\end{multline*} Using $|1-e^{-x}|\le x$ for all $x\ge0$, and $|q(t)-q(s)|\le C|t-s|$ for all $0\le s\le t\le T$, and $u(q(\cdot),\cdot)\in C^1[0,\infty)$, this gives \[ \left|\partial_s\widetilde\rho(t_j-\varepsilon,t_j) - \frac1{2\sqrt{2\pi}}\,\varepsilon^{-1/2} u(q(t_j),t_j)\right| \le C\varepsilon^{1/2}. \] Substituting this into \eqref{quart_var.1} gives \[ \bigg|E\Delta\widetilde F_j^2 - \sqrt{\frac2\pi}\Delta t_j^{1/2}u(q(t_j),t_j)\bigg| \le C\Delta t_j^{3/2}. \] Since $E\Delta\widetilde F_j^4=3(E\Delta\widetilde F_j^2)^2$ and $\theta(t)=(u(q(t),t))^{-1}$, this implies \begin{multline*} \bigg|\theta(t_j)^4 E\Delta\widetilde F_j^4 - \frac6\pi\Delta t_j \theta(t_j)^2\bigg|\\ = 3\theta(t_j)^4\bigg|E\Delta\widetilde F_j^2 - \sqrt{\frac2\pi}\Delta t_j^{1/2} \theta(t_j)^{-1}\bigg| \bigg|E\Delta\widetilde F_j^2 + \sqrt{\frac2\pi}\Delta t_j^{1/2} \theta(t_j)^{-1}\bigg| \le C\Delta t_j^2. \end{multline*} Note that \[ \sum_{0<t_j\le t} \frac6\pi\Delta t_j \theta(t_j)^2 \to \frac6\pi\int_0^t \theta(s)^2\,ds. \] It will therefore suffice to show that \[ E\bigg|\sum_{0<t_j\le t} (\theta(t_j)^4\Delta\widetilde F_j^4 - \theta(t_j)^4 E\Delta\widetilde F_j^4)\bigg|^2 \to 0, \] as $|\Pi|\to0$. For this, we write \[ E\bigg|\sum_{0<t_j\le t} (\theta(t_j)^4\Delta\widetilde F_j^4 - \theta(t_j)^4 E\Delta\widetilde F_j^4)\bigg|^2 = \sum_{i,j} \theta(t_i)^4\theta(t_j)^4(E[\Delta\widetilde F_i^4\Delta\widetilde F_j^4] - (E\Delta\widetilde F_i^4)(E\Delta\widetilde F_j^4)). \] If $X$ and $Y$ are jointly normal with mean zero and variances $\sigma_X^2$ and $\sigma_Y^2$, then \[ E[X^4Y^4] = \sigma_X^4\sigma_Y^4(9 + 72\rho_{X,Y}^2 + 24\rho_{X,Y}^4), \] where $\rho_{X,Y} = (\sigma_X\sigma_Y)^{-1}E[XY]$. Hence, \[ |E[X^4Y^4] - (EX^4)(EY^4)| \le C\sigma_X^2\sigma_Y^2|E[XY]|^2. 
\] Therefore, \[ E\bigg|\sum_{0<t_j\le t} (\theta(t_j)^4\Delta\widetilde F_j^4 - \theta(t_j)^4 E\Delta\widetilde F_j^4)\bigg|^2 \le C\sum_{i,j}\Delta t_i^{1/2}\Delta t_j^{1/2}|E\Delta\widetilde F_i\Delta\widetilde F_j|^2. \] By H\"older's inequality, $|E\Delta\widetilde F_i\Delta\widetilde F_j|^2\le C\Delta t_i^{1/2} \Delta t_j^{1/2}$, so the terms with $t_j - t_i \le 2|\Pi|$ contribute at most $C\sum_{t_j-t_i\le2|\Pi|}\Delta t_i\Delta t_j \le C|\Pi|t \to 0$. Thus, it will suffice to show that \[ \sum_{t_j-t_i>2|\Pi|} \Delta t_i^{1/2}\Delta t_j^{1/2}|E\Delta\widetilde F_i\Delta\widetilde F_j|^2 \to 0, \] as $|\Pi|\to0$. If $t_j-t_i>2|\Pi|$, then $t_{j-1}>t_i$. Hence, by Corollary \ref{C:IV.2} and Remark \ref{R:Ftilde}, \[ |E\Delta\widetilde F_i\Delta\widetilde F_j|^2 \le C\Delta t_i^2\Delta t_j^2|t_{j-1} - t_i|^{-3}, \] where $C$ depends only on $T$. Since $|t_{j-1}-t_i|>|\Pi|\ge\Delta t_k$ for all $k$, this implies \[ |E\Delta\widetilde F_i\Delta\widetilde F_j|^2 \le C\Delta t_i^{1/2}\Delta t_j^{5/4}|t_{j-1} - t_i|^{-3/4} \le C|\Pi|^{3/4}\Delta t_i^{1/2}\Delta t_j^{1/2}|t_{j-1} - t_i|^{-3/4}. \] Hence, \[ \sum_{t_j-t_i>2|\Pi|} \Delta t_i^{1/2}\Delta t_j^{1/2}|E\Delta\widetilde F_i\Delta\widetilde F_j|^2 \le C|\Pi|^{3/4}\sum_{t_j-t_i>2|\Pi|} \Delta t_i\Delta t_j|t_{j-1} - t_i|^{-3/4} \to 0, \] since $\int_0^t\int_0^t|u-v|^{-3/4}\,du\,dv<\infty$. $\Box$ Although the local properties of $F$ are very similar to $B^{1/4}$, the global properties need not be. One simple observation along these lines is that $E|B^H(t)|^2=t^{2H}$, whereas $E|F(t)|^2\ge Ct$, for some constant $C$ that depends on the initial distribution $f$. Indeed, by \eqref{rho_def}, we have $E|F(t)|^2 =(\alpha-\alpha^2)|u(q(t),t)|^{-2}$, and \[ u(x,t) = \frac1{\sqrt{2\pi t}}\int f(y)e^{-(x-y)^2/2t}\,dy \le \frac1{\sqrt{2\pi t}}\|f\|_1. \] In other words, for large $t$, the variance of $F(t)$ grows at least linearly, as it does for Brownian motion, rather than like $t^{1/2}$, as it would for fBm with $H=1/4$. We will illustrate some other global properties of $F$ with a particular example of an initial distribution.
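For a concrete sense of this growth, suppose the initial distribution is standard Gaussian and $Q_n$ is the median ($\alpha=1/2$, $q\equiv0$, the example taken up below). Then $u(\cdot,t)$ is the $N(0,1+t)$ density, and the formula above gives $E|F(t)|^2 = \frac14\cdot2\pi(1+t) = \pi(1+t)/2$. The following sketch (a numerical illustration only, not part of the argument) recovers this by trapezoidal quadrature of the heat-kernel convolution:

```python
import math

def u(x, t, h=0.001, cutoff=20.0):
    # u(x,t) = int f(y) p(t,x,y) dy, with f the standard Gaussian density
    f = lambda y: math.exp(-y * y / 2) / math.sqrt(2 * math.pi)
    p = lambda y: math.exp(-(x - y) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)
    n = int(2 * cutoff / h)
    vals = [f(-cutoff + i * h) * p(-cutoff + i * h) for i in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))  # trapezoidal rule

t = 4.0
var = 0.25 / u(0.0, t) ** 2  # (alpha - alpha^2)|u(q(t),t)|^{-2} with alpha = 1/2, q = 0
print(var, math.pi * (1 + t) / 2)  # both approximately 7.854
```

The variance grows linearly in $t$, in contrast with $E|B^{1/4}(t)|^2 = t^{1/2}$.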
As noted in Section \ref{S:intro}, if $B(0)\sim N(0,1)$ and $j(n) = \flr{{(n+1)}/2}$, so that $Q_n$ is the median, $\alpha=1/2$, and $q=0$, then the result in \cite{Sw07.2} implies that $F_n(\cdot)\Rightarrow X(\cdot+1)$ in $C[0, \infty)$, where the covariance function of $X$ is given by \eqref{med_covar}. We will illustrate the global properties of $F$ in this case by deriving some of the global properties of $X$. First, note that fBm has a self-similarity scaling property. Namely, $c^{-H}B^H(ct)$ and $B^H(t)$ have the same law, as processes. Based on this, we might expect $X$ to have a similar scaling property with exponent $1/4$. However, $X$ obeys the Brownian scaling law with exponent $1/2$. This can be seen immediately by direct inspection of \eqref{med_covar}. Second, note that fBm is a long-memory process. In particular, \[ r_H(n) = E[B^H(1)(B^H(n+1) - B^H(n))] \] decays only polynomially. It is well-known that $r_H(n)\sim H(2H-1)n^{2H-2}$, where $a_n\sim b_n$ means $a_n/b_n\to1$ as $n\to\infty$. Based on this, we might expect \begin{equation}\label{long_mem} r(n) = E[X(1)(X(n+1) - X(n))] \end{equation} to satisfy $r(n)\sim Cn^{-3/2}$ for some constant $C$. However, this is not the case. \begin{prop}\label{P:incr_decay} If $r(n)$ is given by \eqref{long_mem}, then $r(n)\sim -(1/6)n^{-2}$. \end{prop} \noindent{\bf Proof.} The proposition follows easily by observing that \[ r(n) = \sqrt{n+1}\,\sin^{-1}\left(\frac1{\sqrt{n+1}}\right) - \sqrt n\,\sin^{-1}\left(\frac1{\sqrt n}\right), \] and using the Taylor expansion $\sin^{-1}x = x + x^3/6 + 3x^5/40 + O(x^7)$. $\Box$ We see then that $X$ is a process which behaves locally like fBm with $H=1/4$, but whose increments are more weakly correlated than those of fBm. Another example of such a process is sub-fBm (see Bojdecki et al.\ \cite{BGT}, for example). For sub-fBm with $H=1/4$, the increments decay at the rate $n^{-5/2}$.
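The closed form for $r(n)$ in the proof of Proposition \ref{P:incr_decay} makes the $n^{-2}$ decay easy to check numerically; a minimal sketch (illustration only):

```python
import math

def r(n):
    # r(n) = sqrt(n+1) asin(1/sqrt(n+1)) - sqrt(n) asin(1/sqrt(n))
    g = lambda m: math.sqrt(m) * math.asin(1 / math.sqrt(m))
    return g(n + 1) - g(n)

# n^2 r(n) approaches -1/6; compare the slower fBm rate n^{-3/2}
for n in (10, 100, 1000):
    print(n, n * n * r(n))
```

The ratio $r(n)/r(2n)$ tends to $4$, as it should for a sequence of order $n^{-2}$.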
The decay rate of $n^{-2}$ in Proposition \ref{P:incr_decay} has only been established here for the case when the initial distribution of the particles is a standard Gaussian, and an investigation of the relationship between the initial distribution and this exponent is beyond the scope of the current paper. However, we may presently note that $F$ is not a sub-fBm for any initial distribution $f$, since the variance of a sub-fBm is always proportional to $t^{2H}$, whereas $E|F(t)|^2\ge Ct$, as we observed above. \section{Outline of proof of tightness} Our primary tool for proving tightness is the following version of the Kolmogorov-Prohorov tightness criterion. \begin{thm}\label{T:moment} If $\{Z_n\}$ is a sequence of continuous stochastic processes such that \begin{enumerate}[(i)] \item $\sup_{n\ge n_0} P(|Z_n(0)| > \lambda) \to 0$ as $\lambda\to\infty$, and \item $\sup_{n\ge n_0} P(|Z_n(t) - Z_n(s)| \ge \varepsilon) \le C_T\varepsilon^{-\alpha}|t - s|^{1+\beta}$ whenever $0 < \varepsilon < 1$, $T > 0$, and $0 \le s<t \le T$, \end{enumerate} for some positive constants $n_0$, $\alpha$, $\beta$, and $C_T$ (depending on $T$), then $\{Z_n\}$ is relatively compact in $C[0,\infty)$. \end{thm} When we apply the above theorem to our processes $\{F_n\}$, Condition (i) will automatically be satisfied due to the already established convergence of the fdds. Our main theorem will therefore be proved once we establish that $\{F_n\}$ satisfies Condition (ii) of Theorem \ref{T:moment}. The remainder of the paper will be devoted to proving this. For this, we will need to study the distribution of the difference of two quantiles. Unfortunately, the difference of the quantiles is not the quantile of the differences. The stark non-linearity of the quantile mapping creates substantial technical difficulties. In the next section, we begin by using conditioning to connect the probability being estimated to a certain random walk. 
With an eye to future applications, we construct this connection in abstract generality. In Section \ref{S:param_est}, we will return to the specific sequence $\{F_n\}$. Before moving on to the details of the proof of tightness, we pause here to present a general overview of the key steps to be taken in the remainder of the paper. Let $\overline B_j = B_j - q$, $\overline B = \overline B_1$, and $\overline Q_n = Q_n - q = \overline B_{j(n):n}$. To verify Condition (ii) of Theorem \ref{T:moment}, we will aim to show that \begin{equation}\label{heur_star2} P(|F_n(t) - F_n(s)| \ge \varepsilon) \le C_p\varepsilon^{-p}|t - s|^{p/4}, \end{equation} for any $p>2$. We begin by writing \[ P(|F_n(t) - F_n(s)| \ge \varepsilon) = P(|\overline Q_n(t) - \overline Q_n(s)| \ge n^{-1/2}\varepsilon). \] Heuristically, the order of magnitude of $|\overline Q_n(t) - \overline Q_n(s)|$ should be less than that of $|\overline B(t) - \overline B(s)|$, which is $|t - s|^{1/2}$. Consequently, it is easy to estimate this probability when $n^{-1/2}\varepsilon\ge|t-s|^{1/2-\Delta}$ for some $\Delta>0$. (This is what we call the ``large gap regime''.) This routine estimate is carried out in the appendix, and \eqref{heur_star2} is easily verified in the large gap regime. We next turn our attention to the ``small gap regime'', when $n^{-1/2}\varepsilon\le|t-s|^{1/2+\Delta}$ for some $\Delta>0$. We begin with the fact that \begin{equation}\label{heur_star} P(\overline Q_n(t) - \overline Q_n(s) \le -n^{-1/2}\varepsilon) = E[P(\overline Q_n(t) - \overline Q_n(s) \le -n^{-1/2}\varepsilon \mid \overline Q_n(s))]. \end{equation} (The estimates for $P(\overline Q_n(t) - \overline Q_n(s) \ge n^{-1/2}\varepsilon)$ are nearly identical.) To deal with the right-hand side of \eqref{heur_star}, we apply the results from Section \ref{S:RWrep}, wherein we study, in generality, the conditional distribution of the difference of two quantiles.
There we find that this conditional probability can be well-approximated by certain probabilities connected to sums of iid random variables, and that sufficient estimates on these probabilities can be derived. When we apply Theorem \ref{T:RWrep} to the problem at hand, we find that if \[ \begin{split} q_1(x,y) &= P(B(t) > q(t) + x + y \mid B(s) < q(s) + x),\\ q_2(x,y) &= P(B(t) < q(t) + x + y \mid B(s) > q(s) + x), \end{split} \] and \[ \varphi_{j:n}^\le(x,y) = P\bigg(\sum_{i=1}^{j-1} 1_{\{U_i\le q_1\}} \le \sum_{i=j+1}^n 1_{\{U_i\le q_2\}}\bigg), \] where $\{U_i\}$ are iid and uniform on $(0,1)$, then \[ P(\overline Q_n(t) - \overline Q_n(s) \le -n^{-1/2}\varepsilon \mid \overline Q_n(s)) \le \varphi_{j(n):n}^\le(\overline Q_n(s),-n^{-1/2}\varepsilon). \] By \eqref{heur_star} and the fact that $\varphi_{j:n}^\le(x,y)\le1$, this gives, for any $K>0$, \[ P(\overline Q_n(t) - \overline Q_n(s) \le -n^{-1/2}\varepsilon) \le \sup_{|x|\le K}\varphi_{j(n):n}^\le(x,-n^{-1/2}\varepsilon) + P(|\overline Q_n(s)|>K). \] Since $F_n$ has well-behaved tail probabilities (see Proposition \ref{P:quant_tails} and Remark \ref{R:quant_tails}), we may choose $K$ so that $P(|\overline Q_n(s)|>K)\le C_p\varepsilon^{-p}|t - s|^{p/4}$. (In fact, the correct choice is $K=n^{-1/2}\varepsilon|t-s|^{-1/4}$.) The problem thus reduces to showing that \begin{equation}\label{heur_star3} \sup_{|x|\le K}\varphi_{j(n):n}^\le(x,-n^{-1/2}\varepsilon) \le C_p\varepsilon^{-p}|t - s|^{p/4}. \end{equation} For this, we again appeal to the general results in Section \ref{S:RWrep}. This time applying Theorem \ref{T:RWest}, we find that \[ \varphi_{j(n):n}^\le(x,-n^{-1/2}\varepsilon) \le C\left({\frac{\sigma(x,-n^{-1/2}\varepsilon)} {n\mu(x,-n^{-1/2}\varepsilon)^2} }\right)^{p/2}, \] where \begin{align*} \sigma(x,y) &= \alpha q_1(x,y) + (1 - \alpha)q_2(x,y),\\ \mu(x,y) &= \alpha q_1(x,y) - (1 - \alpha)q_2(x,y). 
\end{align*} To show that this leads to \eqref{heur_star3}, we use Taylor expansions for $\sigma$ and $\mu$ which are developed in Proposition \ref{P:Psi_Taylor} and Corollary \ref{C:Psi_Taylor}, and which rely on some of the computations in the proof of Theorem \ref{T:IV}. When all of this analysis is finally complete, we will have established \eqref{heur_star3}, and therefore \eqref{heur_star2}, in the small gap regime. Lastly, we must deal with the ``medium gap regime'', in which $|t-s|^{1/2+\Delta} \le n^{-1/2}\varepsilon\le |t-s|^{1/2-\Delta}$ for some $\Delta>0$. This case is treated by making minor modifications to the proof for the small gap regime. In making these modifications, however, the result is weakened, and we are only able to prove that, for parameter values in the medium gap regime, \begin{equation}\label{heur_star4} P(|F_n(t) - F_n(s)| > \varepsilon) \le C(\varepsilon^{-1}|t - s|^{1/4-2\Delta})^p. \end{equation} Although we conjecture that the sharper bound \eqref{heur_star2} does in fact hold in all regimes, the weaker bound \eqref{heur_star4} which we are able to prove is still sufficient to establish tightness. \section{A random walk representation and estimate}\label{S:RWrep} Let $\{U_n\}_{n=1}^\infty$ be an iid sequence of random variables, uniformly distributed on $(0,1)$, and for $r_1,r_2\in(0,1)$, define \begin{align*} \psi_{j:n}^\le(r_1,r_2) = P\bigg(\sum_{i=1}^{j-1} 1_{\{U_i\le r_1\}} \le \sum_{i=j+1}^n 1_{\{U_i\le r_2\}}\bigg),\\ \psi_{j:n}^<(r_1,r_2) = P\bigg(\sum_{i=1}^{j-1} 1_{\{U_i\le r_1\}} < \sum_{i=j+1}^n 1_{\{U_i\le r_2\}}\bigg). \end{align*} Also, let $\psi_{j:n}^>=1-\psi_{j:n}^\le$ and $\psi_{j:n}^\ge=1-\psi_{j:n}^<$. Let $X$ and $Y$ be real-valued random variables such that $(x,y) \mapsto P(X\le x,Y\le y)$ is continuous.
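Since the two sums in the definition of $\psi_{j:n}^\le$ are independent binomial counts, $\psi_{j:n}^\le(r_1,r_2)$ is just $P(\mathrm{Bin}(j-1,r_1)\le\mathrm{Bin}(n-j,r_2))$, and can be computed exactly by summing binomial probabilities, or approximated by simulation. A small sketch (illustration only; the parameters $n=21$, $j=11$, $r_1=0.55$, $r_2=0.45$ are arbitrary):

```python
import math, random

def psi_le_exact(j, n, r1, r2):
    # P(Bin(j-1, r1) <= Bin(n-j, r2)), the two counts independent
    pmf = lambda k, m, p: math.comb(m, k) * p**k * (1 - p) ** (m - k)
    return sum(pmf(k, j - 1, r1) * pmf(m, n - j, r2)
               for k in range(j) for m in range(n - j + 1) if k <= m)

def psi_le_mc(j, n, r1, r2, reps=100_000, seed=0):
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        lower = sum(rng.random() <= r1 for _ in range(j - 1))  # the first j-1 marks
        upper = sum(rng.random() <= r2 for _ in range(n - j))  # the last n-j marks
        hits += lower <= upper
    return hits / reps

exact = psi_le_exact(11, 21, 0.55, 0.45)   # j = 11 is the median of n = 21
approx = psi_le_mc(11, 21, 0.55, 0.45)
```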
Define \[ \begin{split} q_1(x,y) &= P(Y > x + y \mid X < x),\\ q_2(x,y) &= P(Y < x + y \mid X > x), \end{split} \] and let $\varphi_{j:n}^\star(x,y)=\psi_{j:n}^\star(q_1(x,y),q_2(x,y))$, where $\star$ may be any one of the symbols $\le$, $<$, $>$, or $\ge$. Note that $\varphi_{j:n}^>(x,y) = \psi^<_{(n-j+1):n}(q_2,q_1)$ and $\varphi^\ge_{j:n} (x,y) = \psi^\le_{(n-j+1):n}(q_2,q_1)$. \begin{thm}\label{T:RWrep} If $\{(X_n,Y_n)\}$ is a sequence of iid copies of $(X,Y)$, then for all $y\in\mathbb{R}$, \begin{equation}\label{RWrep.1} \varphi_{j:n}^<(X_{j:n},y) \le P(Y_{j:n} - X_{j:n} < y \mid X_{j:n}) \le \varphi_{j:n}^\le(X_{j:n},y), \end{equation} almost surely. Consequently, \begin{equation}\label{RWrep.2} \varphi_{j:n}^>(X_{j:n},y) \le P(Y_{j:n} - X_{j:n} > y \mid X_{j:n}) \le \varphi^\ge_{j:n}(X_{j:n},y), \end{equation} almost surely. \end{thm} Theorem \ref{T:RWrep} establishes a connection between the conditional distribution of the difference of two quantiles and certain probabilities involving sums of iid random variables. To give some intuitive sense to this connection, let us consider the following heuristic derivation of \eqref{RWrep.1}. We are interested in estimating $P(Y_{j:n} - X_{j:n} < y \mid X_{j:n} = x) = P(Y_{j:n} < x + y \mid X_{j:n} = x)$. Let us consider $X_1,\ldots, X_n$ as representing the locations on the real line of some particles at time $s$, and $Y_1,\ldots,Y_n$ as representing the locations of those same particles at some later time $t>s$. We are given that the $j$-th leftmost particle at time $s$ is located at position $x$. Conditioned on this information, we know the following. At time $s$, there is one particle located at $x$, there are $j-1$ iid particles located in $(-\infty,x)$, and there are $n-j$ iid particles located in $(x,\infty)$. The event $\{Y_{j:n} < x + y\}$ will occur if and only if at least $j$ particles end up in $(-\infty,x+y)$ at time $t$.
Each particle which is in $(-\infty,x)$ at time $s$ has probability \[ p_1 = 1 - q_1 = P(Y < x + y \mid X < x) \] of ending up in the target interval $(-\infty,x+y)$ at time $t$. Therefore, we may represent the number of particles from $(-\infty, x)$ which end up in the target interval by $\sum_{i=1}^{j-1} 1_{\{ U_i\le p_1\}}$. Similarly, the number of particles from $(x,\infty)$ which end up in the target interval is represented by $\sum_{i=j+1}^n 1_{\{U_i\le q_2\}}$. We are therefore interested in computing the probability that \begin{align*} j &\le \sum_{i=1}^{j-1} 1_{\{ U_i\le p_1\}} + \sum_{i=j+1}^n 1_{\{U_i\le q_2\}}\\ &\overset{d}{=} \sum_{i=1}^{j-1} (1 - 1_{\{ U_i\le q_1\}}) + \sum_{i=j+1}^n 1_{\{U_i\le q_2\}}\\ &= j - 1 - \sum_{i=1}^{j-1} 1_{\{ U_i\le q_1\}} + \sum_{i=j+1}^n 1_{\{U_i\le q_2\}}, \end{align*} which happens if and only if $\sum_{i=1}^{j-1} 1_{\{ U_i\le q_1\}} < \sum_{i=j+1}^n 1_{\{U_i\le q_2\}}$. This probability is exactly $\psi_{j:n}^<(q_1,q_2)=\varphi_{j:n}^<(x,y)$. In fact, the true probability is even larger than this, since the particle at $x$ may itself end up in the target interval $(-\infty,x+y)$ at time $t$. If that happens, then we only need $\sum_{i=1}^{j-1}1_{\{U_i\le q_1\}}\le\sum_{i=j+1}^n 1_{\{U_i\le q_2\}}$, and the probability of this is $\varphi_{j:n}^\le(x,y)$. Hence, the conditional probability of interest is sandwiched between $\varphi_{j:n}^<$ and $\varphi_{j:n}^\le$. \noindent{\bf Proof of Theorem \ref{T:RWrep}.} By taking complements, \eqref{RWrep.1} and \eqref{RWrep.2} are equivalent. We will prove \eqref{RWrep.2}. For this, we will first establish that for any $a,b,y\in\mathbb{R}$ with $a<b$, \begin{multline}\label{RWrep.3} E[\varphi_{j:n}^>(X_{j:n},y)1_{\{X_{j:n} \in (a,b)\}}] \le P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in (a,b))\\ \le E[\varphi_{j:n}^\ge(X_{j:n},y)1_{\{X_{j:n} \in (a,b)\}}]. \end{multline} Let us begin with the upper bound in \eqref{RWrep.3}. 
First, by taking $a_i\downarrow a$ and $b_i\uparrow b$ if necessary, we may assume that $P(X<a) >0$ and $P(X>b)>0$, so that $q_i$, $\varphi_{j:n}^\ge$, and $\Phi$, where $\Phi(x)=P(X<x)$, are all well-defined, continuous, and bounded on $\{(x,y)\in[a,b]\times\mathbb{R}\}$. Now fix $\varepsilon>0$ and $x\in[a,b-\varepsilon]$ and let $N=\#\{i:X_i\in(x,x+\varepsilon)\}$. Then \begin{multline*} P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in (x,x+\varepsilon)) \le P(N \ge 2)\\ + P(Y_{j:n} > x + y, \#\{i: X_i < x\} = j - 1, \#\{i: X_i > x + \varepsilon\} = n - j, N = 1). \end{multline*} Thus, \begin{multline}\label{RWrep.4} P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in (x,x+\varepsilon))\\ \le \binom{n}{2}P(X \in (x,x+\varepsilon))^2 + j\binom{n}{j}P(Y_{j:n} > x + y, A), \end{multline} where \[ A = \{X_i < x\text{ for all }i<j\} \cap \{X_i > x + \varepsilon\text{ for all }i>j\} \cap \{X_j \in (x,x+\varepsilon)\}. \] Note that \begin{align*} P(Y_{j:n} > x + y,\, &A)\\ &= P(\#\{i: Y_i < x + y\} < j, A)\\ &\le P(\#\{i \ne j: Y_i < x + y\} < j, A)\\ &= \sum_{\ell=0}^{j-1}\sum_{m=\ell}^{j-1} P(\#\{i < j: Y_i < x + y\} = \ell, \#\{i > j: Y_i < x + y\} = m - \ell, A). \end{align*} By independence, this gives \begin{multline*} P(Y_{j:n} > x + y, A)\\ \le \sum_{\ell=0}^{j-1}\sum_{m=\ell}^{j-1} \binom{j-1}{\ell}P(Y < x + y, X < x)^\ell P(Y > x + y, X < x)^{j-1-\ell}\\ \cdot\binom{n-j}{m-\ell} P(Y < x + y, X > x + \varepsilon)^{m-\ell} P(Y > x + y, X > x + \varepsilon)^{n-j-m+\ell}\\ \cdot P(X\in(x,x+\varepsilon)). \end{multline*} If we define $\widetilde q_i(x,y)=q_i(x+\varepsilon,y-\varepsilon)$, $p_i=1-q_i$, $\widetilde p_i = 1 - \widetilde q_i$, and $\overline\Phi=1-\Phi$, then this becomes \begin{multline}\label{RWrep.5} P(Y_{j:n} > x + y, A) \le \sum_{\ell=0}^{j-1}\sum_{m=\ell}^{j-1} \binom{j-1}{\ell}\binom{n-j}{m-\ell} p_1^\ell q_1^{j-1-\ell}\widetilde p_2^{n-j-m+\ell}\widetilde q_2^{m-\ell}\\ \cdot\Phi(x)^{j-1}\overline\Phi(x+\varepsilon)^{n-j}P(X\in(x,x+\varepsilon)). 
\end{multline} Compare this with \begin{align} \varphi_{j:n}^\ge(x,y) &= P\bigg(\sum_{i=1}^{j-1} 1_{\{U_i\le q_1\}} \ge \sum_{i=j+1}^n 1_{\{U_i\le q_2\}}\bigg)\notag\\ &= \sum_{\ell=0}^{j-1}\sum_{m=\ell}^{j-1} P\bigg(\sum_{i=1}^{j-1} 1_{\{U_i\le q_1\}} = j - 1 - \ell, \sum_{i=j+1}^n 1_{\{U_i\le q_2\}} = m - \ell\bigg)\notag\\ &= \sum_{\ell=0}^{j-1}\sum_{m=\ell}^{j-1} \binom{j-1}{\ell}\binom{n-j}{m-\ell} p_1^\ell q_1^{j-1-\ell}p_2^{n-j-m+\ell}q_2^{m-\ell}.\label{RWrep.6} \end{align} Finally, partition $(a,b)$ into subintervals of size $\varepsilon$ and apply \eqref{RWrep.4} and \eqref{RWrep.5}. Let $\varepsilon\to0$ and apply dominated convergence. By \eqref{RWrep.6}, this gives \[ P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in (a,b)) \le j\binom{n}{j}\int_a^b \varphi_{j:n}^\ge(x,y) \Phi(x)^{j-1}\overline\Phi(x)^{n-j}\,d\Phi(x). \] By Lemma \ref{L:RWrep}, this completes the proof of the upper bound in \eqref{RWrep.3}. The lower bound in \eqref{RWrep.3} is proved similarly. As before, \begin{multline*} P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in (x,x+\varepsilon))\\ \ge P(Y_{j:n} > x + \varepsilon + y, \#\{i: X_i < x\} = j - 1, \#\{i: X_i > x + \varepsilon\} = n - j, N = 1), \end{multline*} which gives \begin{align*} P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in (x,x+\varepsilon)) &\ge j\binom{n}{j}P(Y_{j:n} > x + \varepsilon + y, A)\\ &= j\binom{n}{j}P(\#\{i: Y_i < x + \varepsilon + y\} < j, A)\\ &\ge j\binom{n}{j}P(\#\{i \ne j: Y_i < x +\varepsilon + y\} < j - 1, A). \end{align*} Note that \begin{multline*} P(\#\{i \ne j: Y_i < x +\varepsilon + y\} < j - 1, A) \\ = \sum_{\ell=0}^{j-2}\sum_{m=\ell}^{j-2} P(\#\{i < j: Y_i < x + \varepsilon + y\} = \ell, \#\{i > j: Y_i < x + \varepsilon + y\} = m - \ell, A). 
\end{multline*} If we now define $\widehat q_i(x,y)=q_i(x,y+\varepsilon)$, $\overline q_i(x,y)=q_i(x+\varepsilon, y)$, $\widehat p_i=1-\widehat q_i$, and $\overline p_i=1-\overline q_i$, then this gives \begin{multline*} P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in (x,x+\varepsilon))\\ \ge j\binom{n}{j}\sum_{\ell=0}^{j-2}\sum_{m=\ell}^{j-2} \binom{j-1}{\ell}\binom{n-j}{m-\ell} \widehat p_1^\ell\widehat q_1^{j-1-\ell}\overline p_2^{n-j-m+\ell}\overline q_2^{m-\ell}\\ \cdot\Phi(x)^{j-1}\overline\Phi(x+\varepsilon)^{n-j}P(X\in(x,x+\varepsilon)). \end{multline*} We compare this with \begin{align*} \varphi_{j:n}^>(x,y) &= P\bigg(\sum_{i=1}^{j-1} 1_{\{U_i\le q_1\}} > \sum_{i=j+1}^n 1_{\{U_i\le q_2\}}\bigg)\\ &= \sum_{\ell=0}^{j-2}\sum_{m=\ell}^{j-2} P\bigg(\sum_{i=1}^{j-1} 1_{\{U_i\le q_1\}} = j - 1 - \ell, \sum_{i=j+1}^n 1_{\{U_i\le q_2\}} = m - \ell\bigg)\\ &= \sum_{\ell=0}^{j-2}\sum_{m=\ell}^{j-2} \binom{j-1}{\ell}\binom{n-j}{m-\ell} p_1^\ell q_1^{j-1-\ell}p_2^{n-j-m+\ell}q_2^{m-\ell}, \end{align*} and the remainder of the proof is the same. Using \eqref{RWrep.3}, we now prove the upper bound in \eqref{RWrep.2}. Let \[ \xi = P(Y_{j:n} - X_{j:n} > y \mid X_{j:n}) - \varphi^\ge_{j:n}(X_{j:n},y), \] so that $\{\xi>0\}=\{X_{j:n}\in B\}$ for some Borel subset $B\subset\mathbb{R}$. Fix $\varepsilon>0$. There exists $B_0\subset\mathbb{R}$ such that $B_0$ is a finite, disjoint union of open intervals, and \[ P(X_{j:n} \in B\,\triangle\,B_0) < \varepsilon, \] where $B\,\triangle\,B_0$ denotes the symmetric difference. (See Proposition 1.20 in \cite{Fo99}, for example.) Hence, by \eqref{RWrep.3}, \begin{align*} E[P(Y_{j:n} - X_{j:n} > y \mid X_{j:n})1_{\{X_{j:n} \in B\}}] &= P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in B)\\ &\le P(Y_{j:n} - X_{j:n} > y, X_{j:n} \in B_0) + \varepsilon\\ &\le E[\varphi^\ge_{j:n}(X_{j:n},y)1_{\{X_{j:n} \in B_0\}}] + \varepsilon\\ &\le E[\varphi^\ge_{j:n}(X_{j:n},y)1_{\{X_{j:n} \in B\}}] + 2\varepsilon. \end{align*} Therefore, $E[\xi1_{\{\xi>0\}}]\le 2\varepsilon$.
Letting $\varepsilon\to0$ shows that $\xi \le0$ a.s., completing the proof. The lower bound in \eqref{RWrep.2} is proved similarly. $\Box$ With Theorem \ref{T:RWrep}, we have accomplished the first step of connecting our quantiles to a random walk. The second step, given by the next theorem, is to derive an estimate for this walk. \begin{thm}\label{T:RWest} Fix $\alpha\in(0,1)$. Let \[ \begin{split} \sigma &= \sigma(r_1,r_2) = \alpha r_1 + (1 - \alpha)r_2,\\ \mu &= \mu(r_1,r_2) = \alpha r_1 - (1 - \alpha)r_2, \end{split} \] and suppose $j(n)/n=\alpha+o(n^{-1/2})$. Then for each $\tau>1$, there exist constants $C>0$ and $n_0\in\mathbb{N}$ such that, for all $n\ge n_0$, \[ \psi^\le_{j(n):n}(r_1,r_2) \le C\frac{\sigma^\tau}{n^\tau\mu^{2\tau}}, \] whenever $\mu>0$. Note that $C$ does not depend on $r_1$ or $r_2$. \end{thm} \noindent{\bf Proof.} First note that if $n\alpha r_1\le 2$, then \[ \frac{\sigma^\tau}{n^\tau\mu^{2\tau}} \ge \frac{(\alpha r_1)^\tau}{n^\tau(\alpha r_1)^{2\tau}} \ge 2^{-\tau} \ge 2^{-\tau}\psi^\le_{j(n):n}(r_1,r_2). \] Hence, we may assume that $n\alpha r_1>2$. Now, since $j(n)/n=\alpha+n^{-1/2}a_n$, where $a_n\to0$, we may write \[ \psi^\le_{j(n):n}(r_1,r_2) = P(n^{-1}(\xi_L - \xi_U) + b_n \le -\mu) \le P(|n^{-1}(\xi_L - \xi_U) + b_n| \ge \mu), \] where \begin{align*} \xi_L &= \sum_{i=1}^{j(n)-1} (1_{\{U_i\le r_1\}} - r_1),\\ \xi_U &= \sum_{i=j(n)+1}^n (1_{\{U_i\le r_2\}} - r_2),\\ b_n &= n^{-1/2}a_n(r_1 + r_2) - n^{-1}r_1. \end{align*} By Chebyshev's inequality, \[ \begin{split} \psi^\le_{j(n):n}(r_1,r_2) &\le \mu^{-2\tau}E|n^{-1}(\xi_L - \xi_U) + b_n|^{2\tau}\\ &\le C(n\mu)^{-2\tau}(E|\xi_L|^{2\tau} + E|\xi_U|^{2\tau}) + C\mu^{-2\tau}|b_n|^{2\tau}. \end{split} \] Note that \[ |b_n| \le Cn^{-1/2}(r_1 + r_2) \le \frac{Cn^{-1/2}\sigma}{\alpha\wedge(1 - \alpha)}, \] which implies $|b_n|^{2\tau}\le Cn^{-\tau}\sigma^{2\tau}$. It will therefore suffice to show that \[ E|\xi_L|^{2\tau} + E|\xi_U|^{2\tau} \le C(n\sigma)^\tau. 
\] By Lemma \ref{L:RWest}, there exist $n_0$ and $C$ such that $n\ge n_0$ implies \[ E|\xi_L|^{2\tau} \le C((j(n)r_1)^\tau \vee (j(n)r_1)). \] By making $n_0$ larger if necessary, we may assume $n\alpha/2\le j(n)\le 3n\alpha/2$. Since $n\alpha r_1>2$, this gives $j(n)r_1>1$, so that \[ E|\xi_L|^{2\tau} \le C(j(n)r_1)^\tau \le C(n\alpha r_1)^\tau \le C(n\sigma)^\tau. \] Similarly, for $n$ sufficiently large, \[ \begin{split} E|\xi_U|^{2\tau} &\le C(((n-j(n))r_2)^\tau \vee ((n-j(n))r_2))\\ &\le C((n(1 - \alpha)r_2)^\tau \vee 1)\\ &\le C((n(1 - \alpha)r_2)^\tau \vee (n\alpha r_1)^\tau) \le C(n\sigma)^\tau, \end{split} \] completing the proof. $\Box$ \section{Parameter estimates and gap regimes}\label{S:param_est} Let us now return to the specific assumptions of our model, as outlined in the beginning of Section \ref{S:fdd}. Fix $0\le s<t$. We shall adopt the notation and definitions of Section \ref{S:RWrep}, in the special case that $X=B(s)-q(s)$ and $Y=B(t)-q(t)$. In this case, \begin{equation}\label{q_def_specific} \begin{split} q_1(x,y) &= P(B(t) > q(t) + x + y \mid B(s) < q(s) + x),\\ q_2(x,y) &= P(B(t) < q(t) + x + y \mid B(s) > q(s) + x). \end{split} \end{equation} Let us also define \begin{equation}\label{Psidef} \Psi(x,y) = P(B(t) > q(t) + x + y, B(s) < q(s) + x). \end{equation} Our objective is to verify Condition (ii) of Theorem \ref{T:moment}. Ideally, we would like to show that \begin{equation}\label{tight_goal} P(|F_n(t) - F_n(s)| > \varepsilon) \le C_p\varepsilon^{-p}|t - s|^{p/4}. \end{equation} In the end, we will actually show something slightly weaker, although our final estimate will be sufficient to verify the conditions of Theorem \ref{T:moment}. Our approach will begin by conditioning on $F_n(s)$, so that we may apply Theorem \ref{T:RWrep}. We will then use Theorem \ref{T:RWest} to obtain the specific bound we need. Implementing this strategy will require precise estimates on the functions $q_1$ and $q_2$, in terms of $x$, $y$, $s$, and $t$.
These estimates will come from Taylor expansions. We therefore begin with a Taylor expansion for $\Psi$. \begin{prop}\label{P:Psi_Taylor} Fix $T,K>0$. There exists a constant $C$ such that for all $0\le s<t\le T$ and all $x,y$ satisfying $|x|+|y|\le K$, \begin{equation}\label{Psi_Taylor} \Psi(x,y) = \Psi(0,0) - \frac12u(q(s),s)y + \frac1{2\sqrt{2\pi\delta}}u(q(s),s)y^2 + R, \end{equation} where \begin{equation}\label{Psi_Taylor_rem} |R| \le C\left({(|x| + |y|)(\delta^{1/2} + |y| + \delta^{-1/2}|y|^2) + \delta^{-3/2}|y|^4}\right), \end{equation} and $\delta=t-s$. \end{prop} \noindent{\bf Proof.} Fix $T,K>0$, $s,t\in[0,T]$, and $x,y\in\mathbb{R}$ such that $s<t$ and $|x|+|y| \le K$. Let $\delta=t-s$. In what follows, $C$ shall denote a constant that depends only on $T$ and $K$, and may change value from line to line. By Taylor's theorem, we may write \begin{multline}\label{Psi_Taylor1} \Psi(x,y) = \Psi(0,0) + \partial_x\Psi(0,0)x + \partial_y\Psi(0,0)y\\ + \frac12\partial_x^2\Psi(0,0)x^2 + \partial_x\partial_y\Psi(0,0)xy + \frac12\partial_y^2\Psi(0,0)y^2\\ + \frac16\partial_x^3\Psi(\theta x,\theta y)x^3 + \frac12\partial_x^2\partial_y\Psi(\theta x,\theta y)x^2 y + \frac12\partial_x\partial_y^2\Psi(\theta x,\theta y)x y^2 + \frac16\partial_y^3\Psi(\theta x,\theta y)y^3, \end{multline} where $\theta\in(0,1)$. Let $\Phi(x) =(2\pi)^{-1/2}\int_{-\infty}^x e^{-y^2/2}\,dy$ and $\overline\Phi=1-\Phi$. We first establish that for all integers $i\ge0$ and $j\ge1$, \begin{align} \partial_x^i\Psi &= \int_{-\infty}^{q(s)+x}\overline\Phi\left({ \frac{x+y+q(t)-z}{\delta^{1/2}}}\right)\partial_x^i u(z,s)\,dz,\label{dform1}\\ \partial_x^i\partial_y^j\Psi &= -\delta^{-(j-1)/2} \overline\Phi^{(j-1)}\left({\frac{y+q(t)-q(s)}{\delta^{1/2}}}\right) \partial_x^i u(q(s)+x,s) + \partial_x^{i+1}\partial_y^{j-1}\Psi.\label{dform2} \end{align} If $i=0$, then \eqref{dform1} follows directly from the definition of $\Psi$, \eqref{Psidef}. 
Differentiating \eqref{dform1} gives \begin{multline*} \partial_x^{i+1}\Psi = \overline\Phi\left({\frac{y+q(t)-q(s)}{\delta^{1/2}}}\right) \partial_x^i u(q(s)+x,s)\\ + \delta^{-1/2}\int_{-\infty}^{q(s)+x}\overline\Phi'\left({ \frac{x+y+q(t)-z}{\delta^{1/2}}}\right)\partial_x^i u(z,s)\,dz. \end{multline*} Applying integration by parts shows that \eqref{dform1} holds for $i+1$. By induction, this proves \eqref{dform1}. For \eqref{dform2}, let $j=1$ and let $i\ge0$ be arbitrary. By \eqref{dform1}, we have \begin{align*} \partial_x^i\partial_y\Psi &= \partial_y\left[{\int_{-\infty}^{q(s)+x}\overline\Phi\left({ \frac{x+y+q(t)-z}{\delta^{1/2}}}\right)\partial_x^i u(z,s)\,dz}\right]\\ &= \partial_x\left[{\int_{-\infty}^{q(s)+x}\overline\Phi\left({ \frac{x+y+q(t)-z}{\delta^{1/2}}}\right)\partial_x^i u(z,s)\,dz}\right]\\ &\quad - \overline\Phi\left({\frac{y+q(t)-q(s)}{\delta^{1/2}}}\right) \partial_x^i u(q(s)+x,s)\\ &= - \overline\Phi\left({\frac{y+q(t)-q(s)}{\delta^{1/2}}}\right) \partial_x^i u(q(s)+x,s) + \partial_x^{i+1}\Psi, \end{align*} which is \eqref{dform2}. Now assume \eqref{dform2} is valid for some $j\ge1$ and all $i\ge0$. Applying $\partial_y$ to both sides of \eqref{dform2} immediately shows that \eqref{dform2} is valid for $j+1$ and any $i\ge0$. By induction, this proves \eqref{dform2}. Now, by \eqref{dform1}, we may write, for any $i\ge 1$, \begin{align*} \partial_x^i\Psi(x,y) &= \int_{-\infty}^{q(s)+x}\partial_x^i u(z,s)\,dz - \int_{-\infty}^{q(s)+x}\int_{-\infty}^{q(t)+x+y} p(t-s,z,w)\partial_x^i u(z,s)\,dw\,dz\\ &= \partial_x^{i-1}u(q(s)+x,s) - \zeta(s,t,x,x+y), \end{align*} where $\zeta$ is given by \eqref{zeta_def}. We shall adopt the convention that $\partial_x^{-1}u(x,t) := P(B(t)\le x)$, so that the above equality is, in fact, valid for all $i\ge 0$. Hence, \[ \partial_x^i\Psi(x,y) = \partial_x^{i-1}u(q(s)+x,s) + \int_s^t \partial_s\zeta(r,t,x,x+y)\,dr - \zeta(t,t,x,x+y). 
\]
Since
\[
\zeta(s,t,w,z) = \int_{-\infty}^{q(s)+w} P^x(B(t-s) \le q(t) + z)
\partial_x^i u(x,s)\,dx,
\]
it follows that
\[
\zeta(t,t,w,z) = \int_{-\infty}^{q(t)+(w\wedge z)}\partial_x^i u(x,t)\,dx
= \partial_x^{i-1} u(q(t)+(w\wedge z),t).
\]
Therefore,
\[
\partial_x^i\Psi(x,y) = \partial_x^{i-1}u(q(s) + x,s)
- \partial_x^{i-1}u(q(t) + x + (y\wedge0),t)
+ \int_s^t \partial_s\zeta(r,t,x,x+y)\,dr.
\]
By the mean value theorem, since $|q(t)-q(s)|\le C|t-s|=C\delta$, we have
\[
|\partial_x^{i-1}u(q(s) + x,s) - \partial_x^{i-1}u(q(t) + x + (y\wedge0),t)|
\le C(\delta + |y|).
\]
By \eqref{newIV.3}, $|\partial_s\zeta(r,t,x,x+y)|\le C|t-r|^{-1/2}$, and so
we obtain
\begin{equation}\label{pa_x_est}
|\partial_x^i\Psi(x,y)| \le C(\delta^{1/2}+|y|),
\end{equation}
for any $i\ge0$. (Here we have used the fact that since $\delta\le T$, there
exists $C$ such that $\delta\le C\delta^{1/2}$ for all $\delta\le T$.)
We next consider the derivatives with respect to $y$. By \eqref{dform2},
\[
\partial_y\Psi(0,0) = -\overline\Phi\left({\frac{q(t)-q(s)}{\delta^{1/2}}}\right)
u(q(s),s) + \partial_x\Psi(0,0).
\]
By \eqref{pa_x_est}, $|\partial_x\Psi(0,0)|\le C\delta^{1/2}$. Also,
$|q(t)-q(s)| \le C\delta$ and $\overline\Phi(x)=1/2+O(|x|)$. Hence,
\[
\left|{\partial_y\Psi(0,0) + \frac12 u(q(s),s)}\right| \le C\delta^{1/2}.
\]
Similarly,
\[
|\partial_x\partial_y\Psi(0,0)| = \left|{-\overline\Phi\left({
\frac{q(t)-q(s)}{\delta^{1/2}}}\right)\partial_x u(q(s),s)
+ \partial_x^2\Psi(0,0)}\right| \le C,
\]
and
\[
\partial_y^2\Psi(0,0) = -\delta^{-1/2}\overline\Phi'\left({
\frac{q(t)-q(s)}{\delta^{1/2}}}\right)u(q(s),s)
+ \partial_x\partial_y\Psi(0,0).
\]
Since $\overline\Phi'(x)=-(2\pi)^{-1/2}e^{-x^2/2}=-(2\pi)^{-1/2}(1+O(|x|^2))$,
this implies
\[
\left|{\partial_y^2\Psi(0,0) - \frac1{\sqrt{2\pi\delta}}u(q(s),s)}\right|
\le C.
\] For the third-order partial derivatives, we have \begin{align*} |\partial_x^2\partial_y\Psi(x,y)| &= \left|{-\overline\Phi\left({ \frac{y+q(t)-q(s)}{\delta^{1/2}}}\right)\partial_x^2 u(q(s)+x,s) + \partial_x^3\Psi(x,y)}\right| \le C,\\ |\partial_x\partial_y^2\Psi(x,y)| &= \left|{-\delta^{-1/2}\overline\Phi'\left({ \frac{y+q(t)-q(s)}{\delta^{1/2}}}\right)\partial_x u(q(s)+x,s) + \partial_x^2\partial_y\Psi(x,y)}\right| \le C\delta^{-1/2},\\ |\partial_y^3\Psi(x,y)| &= \left|{-\delta^{-1}\overline\Phi''\left({ \frac{y+q(t)-q(s)}{\delta^{1/2}}}\right)u(q(s)+x,s) + \partial_x\partial_y^2\Psi(x,y)}\right|\\ &\le C(\delta^{-3/2}|y| + \delta^{-1/2}), \end{align*} where the last inequality follows from $\overline\Phi''(x)\le C|x|$. Substituting all of this into \eqref{Psi_Taylor1} gives \eqref{Psi_Taylor}, where $R$ satisfies \begin{multline*} |R| \le C(\delta^{1/2}|x| + \delta^{1/2}|y| + \delta^{1/2}|x|^2 + |xy| + |y|^2\\ + \delta^{1/2}|x|^3 + |x^3 y| + |x^2 y| + \delta^{-1/2}|xy^2| + \delta^{-3/2}|y|^4 + \delta^{-1/2}|y|^3). \end{multline*} Since $x$ and $y$ are restricted to a compact set, we may simplify this to \[ |R| \le C(\delta^{1/2}|x| + \delta^{1/2}|y| + |xy| + |y|^2 + \delta^{-1/2}|xy^2| + \delta^{-3/2}|y|^4 + \delta^{-1/2}|y|^3), \] which is precisely \eqref{Psi_Taylor_rem}. $\Box$ \begin{corollary}\label{C:Psi_Taylor} Recall $q_1,q_2$ given by \eqref{q_def_specific}. Fix $T,K>0$. There exist constants $\delta_0$ and $C$ such that for all $0\le s<t\le T$ and all $x,y$ satisfying $|x|+|y|\le K$ and $|x|\le\delta_0$, \begin{align} \alpha q_1(x,y) &= \Psi(0,0) - \frac12u(q(s),s)y + \frac1{2\sqrt{2\pi\delta}}u(q(s),s)y^2 + R_1,\label{q1_Taylor}\\ (1 - \alpha)q_2(x,y) &= \Psi(0,0) + \frac12u(q(s),s)y + \frac1{2\sqrt{2\pi\delta}}u(q(s),s)y^2 + R_2,\label{q2_Taylor} \end{align} where $R_1,R_2$ both satisfy \eqref{Psi_Taylor_rem}, and $\delta=t-s$. 
\end{corollary} \noindent{\bf Proof.} By \eqref{Psi_Taylor} and \eqref{pa_x_est}, \begin{equation}\label{Psi_bound} |\Psi(x,y)| \le C(\delta^{1/2} + |y| + \delta^{-1/2}|y|^2) + |R|, \end{equation} where $R$ satisfies \eqref{Psi_Taylor_rem}. By \eqref{q_def_specific} and \eqref{Psidef}, \[ q_1(x,y) = \frac{\Psi(x,y)}{P(B(s) < q(s) + x)}. \] Note that $P(B(s)<q(s)+x)=\alpha+r(x,s)$, where $|r(x,s)|\le C|x|$. Hence, \[ |\alpha q_1(x,y) - \Psi(x,y)| = \left|{ \frac{r(x,s)}{\alpha + r(x,s)}}\right|\Psi(x,y) \le \frac{C|x|}{\alpha - C|x|}\Psi(x,y). \] If $x$ is sufficiently small, then using \eqref{Psi_bound}, we have \[ |\alpha q_1(x,y) - \Psi(x,y)| \le C(\delta^{1/2}|x| + |xy| + \delta^{-1/2}|xy^2|) + |R|. \] By \eqref{Psi_Taylor}, this gives \begin{multline*} \bigg|\alpha q_1(x,y) - \Psi(0,0) + \frac12u(q(s),s)y - \frac1{2\sqrt{2\pi\delta}}u(q(s),s)y^2\bigg|\\ \le C(\delta^{1/2}|x| + |xy| + \delta^{-1/2}|xy^2|) + 2|R|. \end{multline*} Since this is bounded above by the right-hand side of \eqref{Psi_Taylor_rem}, this completes the proof of \eqref{q1_Taylor}. For \eqref{q2_Taylor}, let $\widetilde B(t)=-B(t)$, and let $\widetilde q(t)$ be the $(1-\alpha)$-quantile of the law of $\widetilde B(t)$, so that $\widetilde q(t)=-q(t)$. Let $\widetilde u(x,t)$ be the density of $\widetilde B(t)$, so that $\widetilde u(x,t)=u(-x,t)$. Define \[ \widetilde\Psi(x,y) = P(\widetilde B(t) > \widetilde q(t) + x + y, \widetilde B(s) < \widetilde q(s) + x), \] and \begin{align*} \widetilde q_1(x,y) &= P(\widetilde B(t) > \widetilde q(t) + x + y \mid \widetilde B(s) < \widetilde q(s) + x)\\ &= P(B(t) < q(t) - x - y \mid B(s) > q(s) - x)\\ &= q_2(-x,-y). 
\end{align*}
Hence, by \eqref{q1_Taylor},
\begin{align*}
(1-\alpha)q_2(x,y) &= (1-\alpha)\widetilde q_1(-x,-y)\\
&= \widetilde\Psi(0,0) + \frac12\widetilde u(\widetilde q(s),s)y
+ \frac1{2\sqrt{2\pi\delta}}\widetilde u(\widetilde q(s),s)y^2 + R_2\\
&= \widetilde\Psi(0,0) + \frac12u(q(s),s)y
+ \frac1{2\sqrt{2\pi\delta}}u(q(s),s)y^2 + R_2,
\end{align*}
where $R_2$ satisfies \eqref{Psi_Taylor_rem}. To complete the proof, we
observe that
\begin{align*}
\widetilde\Psi(0,0) &= P(B(t) < q(t), B(s) > q(s))\\
&= P(B(t) < q(t)) - P(B(t) < q(t), B(s) < q(s))\\
&= P(B(s) < q(s)) - P(B(t) < q(t), B(s) < q(s))\\
&= P(B(t) > q(t), B(s) < q(s)) = \Psi(0,0),
\end{align*}
which gives \eqref{q2_Taylor}. $\Box$

We are now ready to establish \eqref{tight_goal} and complete the proof of
our main result, Theorem \ref{T:main}. Recall that $F_n=n^{1/2}(Q_n-q)$.
Hence, the event $\{|F_n(t)-F_n(s)|>\varepsilon\}$ is precisely the event
that the process $Q_n-q$ traverses a gap of size $n^{-1/2}\varepsilon$ in
the time interval $[s,t]$. In that same time interval, each Brownian
particle will traverse a gap of size roughly $|t-s|^{1/2}$. We therefore
divide our proof into three separate cases, or parameter regimes:
$n^{-1/2}\varepsilon\gg|t-s|^{1/2}$ (the large gap regime),
$n^{-1/2}\varepsilon\ll |t-s|^{1/2}$ (the small gap regime), and
$n^{-1/2}\varepsilon\approx|t-s|^{1/2}$ (the medium gap regime).
The proof in the large gap regime (Lemma \ref{L:large_gap}) uses only crude
estimates and requires none of the machinery we have so far developed. The
proof is entirely standard and is given in the appendix. The proof in the
small gap regime (Lemma \ref{L:small_gap}) requires the most care. We
follow the strategy outlined earlier, utilizing all of the tools developed
previously. The proof in the medium gap regime (Lemma \ref{L:medium_gap})
is a minor modification of the proof of Lemma \ref{L:small_gap}.
In making this modification, however, our estimate loses some precision, and the result in the medium gap regime is, in fact, slightly weaker than \eqref{tight_goal}. \begin{lemma}\label{L:large_gap} Fix $T>0$, $\Delta\in(0,1/2)$, and $p>2$. There exists a positive constant $C$ such that \[ P(|F_n(t) - F_n(s)| > \varepsilon) \le C(\varepsilon^{-1}|t - s|^{1/4})^p, \] whenever $s,t\in[0,T]$, $0<\varepsilon<1$, and $n^{-1/2}\varepsilon\ge |t-s|^{1/2-\Delta}$. \end{lemma} \begin{lemma}\label{L:small_gap} Fix $T>0$, $\Delta\in(0,1/2)$, and $p>2$. There exists a positive constant $C$ and an integer $n_0$ such that \[ P(|F_n(t) - F_n(s)| > \varepsilon) \le C(\varepsilon^{-1}|t - s|^{1/4})^p, \] whenever $n\ge n_0$, $s,t\in[0,T]$, $0<\varepsilon<1$, and $n^{-1/2}\varepsilon\le |t-s|^{1/2+\Delta}$. \end{lemma} \noindent{\bf Proof.} Fix $T>0$, $\Delta\in(0,1/2)$, and $p>2$. Without loss of generality, assume $s<t$ and define $\delta=t-s$. Note that it is sufficient to prove that there exists a constant $\delta_0>0$ such that the lemma holds whenever $\delta\le\delta_0$. Let $n_0$ be as in Theorem \ref{T:RWest}, with $\tau=p/2$. Define $\overline B_j=B_j-q$ and $\overline Q_n=Q_n-q$. Note that $\overline Q_n=\overline B_{j(n):n}$ and $F_n=n^{1/2}\overline Q_n$. Let $K=n^{-1/2}\varepsilon\delta^{-1/4}$ and $y=n^{-1/2} \varepsilon$. Then by Theorem \ref{T:RWrep}, \begin{align} P(F_n(t) - F_n(s) < -\varepsilon) &= P(\overline Q_n(t) - \overline Q_n(s) < -y)\notag\\ &= E[P(\overline Q_n(t) - \overline Q_n(s) < -y \mid \overline Q_n(s))]\notag\\ &\le E[\varphi_{j(n):n}^\le(\overline Q_n(s),-y)]\notag\\ &\le \sup_{|x|\le K}\varphi_{j(n):n}^\le(x,-y) + P(|\overline Q_n(s)|>K). 
\label{small_gap1} \end{align} By making $n_0$ larger, if necessary, we may apply Proposition \ref{P:quant_tails} and Remark \ref{R:quant_tails} to obtain \begin{equation}\label{small_gap2} P(|\overline Q_n(s)| > K) = P(|F_n(s)| > \varepsilon\delta^{-1/4}) \le C(\varepsilon\delta^{-1/4})^{-p} = C(\varepsilon^{-1}|t - s|^{1/4})^p. \end{equation} For the other term, fix $x\in[-K,K]$. Following the notation of Theorem \ref{T:RWest}, define \begin{align*} \sigma(x,y) &= \alpha q_1(x,y) + (1 - \alpha)q_2(x,y),\\ \mu(x,y) &= \alpha q_1(x,y) - (1 - \alpha)q_2(x,y). \end{align*} By Corollary \ref{C:Psi_Taylor}, \[ \mu(x,-y) = u(q(s),s)y + R_\mu, \] where $R_\mu$ satisfies \eqref{Psi_Taylor_rem}. Note that $|x|\le K=n^{-1/2} \varepsilon\delta^{-1/4}$ and $y=n^{-1/2}\varepsilon\le\delta^{1/2+\Delta}$. In particular, $y\le K$ and $y\le\delta^{1/2}$. Therefore, \eqref{Psi_Taylor_rem} simplifies to \[ |R_\mu| \le C(K\delta^{1/2} + \delta^{-3/2}y^4) \le C(K\delta^{1/2} + \delta^{3\Delta}y) = C(\delta^{1/4} + \delta^{3\Delta})y. \] Hence, since $s\mapsto u(q(s),s)$ is bounded below on $[0,T]$, there exists $\delta_0$ such that $\delta\le\delta_0$ implies $\mu(x,-y) > Cy$. Similarly, \[ \sigma(x,-y) = 2\Psi(0,0) + \frac1{\sqrt{2\pi\delta}}u(q(s),s)y^2 + R_\sigma, \] where $|R_\sigma|\le C(\delta^{1/4} + \delta^{3\Delta})y\le Cy\le C\delta^{1/2}$. By \eqref{pa_x_est}, this implies $|\sigma(x,-y)|\le C\delta^{1/2}$. We now apply Theorem \ref{T:RWest} with $\tau=p/2$ to obtain \begin{equation}\label{small_gap3} \varphi_{j(n):n}^\le(x,-y) \le C\left({\frac{\sigma(x,-y)}{n\mu(x,-y)^2} }\right)^{p/2} \le C\left({\frac{\delta^{1/2}}{ny^2} }\right)^{p/2} = C(\varepsilon^{-1}|t - s|^{1/4})^p. \end{equation} Substituting \eqref{small_gap3} and \eqref{small_gap2} into \eqref{small_gap1} gives \[ P(F_n(t) - F_n(s) < -\varepsilon) \le C(\varepsilon^{-1}|t - s|^{1/4})^p. 
\] To complete the proof, we follow as in \eqref{small_gap1} to obtain \[ P(F_n(t) - F_n(s) > \varepsilon) \le \sup_{|x|\le K}\varphi_{j(n):n}^\ge(x,y) + P(|\overline Q_n(s)| > K), \] and apply \eqref{small_gap2} to the second term. For the first term, recall from Section \ref{S:RWrep} that \[ \varphi_{j(n):n}^\ge(x,y) = \psi_{(n-j(n)+1):n}^\le(q_2(x,y),q_1(x,y)). \] Since $(n-j(n)+1)/n = (1-\alpha) + o(n^{-1/2})$, we may apply Theorem \ref{T:RWest} to obtain \[ \varphi_{j(n):n}^\ge(x,y) \le C\left({\frac{\widetilde\sigma(x,y)}{n\widetilde\mu(x,y)^2}}\right)^{p/2}, \] where \begin{align*} \widetilde\sigma(x,y) &= (1 - \alpha)q_2(x,y) + \alpha q_1(x,y) = \sigma(x,y),\\ \widetilde\mu(x,y) &= (1 - \alpha)q_2(x,y) - \alpha q_1(x,y) = -\mu(x,y). \end{align*} The estimates preceding \eqref{small_gap3} can now be used, as before, to give \[ \varphi_{j(n):n}^\ge(x,y) \le C(\varepsilon^{-1}|t - s|^{1/4})^p, \] which completes the proof. $\Box$ \begin{lemma}\label{L:medium_gap} Fix $T>0$, $\Delta\in(0,1/8)$, and $p>2$. There exists a positive constant $C$ and an integer $n_0$ such that \[ P(|F_n(t) - F_n(s)| > \varepsilon) \le C(\varepsilon^{-1}|t - s|^{1/4-2\Delta})^p, \] whenever $n\ge n_0$, $s,t\in[0,T]$, $0<\varepsilon<1$, and $|t-s|^{1/2+\Delta} \le n^{-1/2}\varepsilon\le |t-s|^{1/2-\Delta}$. \end{lemma} \noindent{\bf Proof.} Fix $T>0$, $\Delta\in(0,1/8)$, and $p>2$. Without loss of generality, assume $s<t$ and define $\delta=t-s$. Note that it is sufficient to prove that there exists a constant $\delta_0>0$ such that the lemma holds whenever $\delta\le\delta_0$. Let $n_0$ be as in Theorem \ref{T:RWest}, with $\tau=p/2$. Let $K=n^{-1/2} \varepsilon\delta^{-1/4}$ and $y=n^{-1/2} \varepsilon$. Also define $\widetilde\varepsilon=n^{1/2}\delta^{1/2 +\Delta}\le\varepsilon$ and $\widetilde y=n^{-1/2}\widetilde\varepsilon=\delta^{1/2+\Delta}$. 
As in \eqref{small_gap1} and \eqref{small_gap2}, \begin{align*} P(F_n(t) - F_n(s) < -\varepsilon) &\le P(F_n(t) - F_n(s) < -\widetilde\varepsilon)\\ &\le \sup_{|x|\le K}\varphi_{j(n):n}^\le(x,-\widetilde y) + C(\varepsilon^{-1}|t-s|^{1/4})^p. \end{align*} By the estimates following \eqref{small_gap2}, $\mu(x,-\widetilde y)=u(q(s),s) \widetilde y + R_\mu$, where \[ |R_\mu| \le C(K\delta^{1/2} + \delta^{-3/2}\widetilde y^4) \le C(\delta^{3/4-\Delta} + \delta^{1/2+4\Delta}) = C(\delta^{1/4-2\Delta} + \delta^{3\Delta})\widetilde y. \] Since $\Delta<1/8$ and $s\mapsto u(q(s),s)$ is bounded below on $[0,T]$, there exists $\delta_0$ such that $\delta\le\delta_0$ implies $\mu(x,-\widetilde y) > C\widetilde y$. Similarly, $|\sigma(x,-\widetilde y)|\le C\delta^{1/2}$. As in \eqref{small_gap3}, this gives $\varphi_{j(n):n}^\le(x,-\widetilde y) \le C(n^{-1}\widetilde y^{-2}\delta^{1/2})^{p/2}$. Note that $\widetilde y=\delta^{1/2+\Delta} \ge n^{-1/2} \varepsilon\delta^{2\Delta}$. Hence, \[ \varphi_{j(n):n}^\le(x,-\widetilde y) \le C(\varepsilon^{-2}\delta^{1/2-4\Delta})^{p/2} = C(\varepsilon^{-1}|t - s|^{1/4-2\Delta})^p. \] As in the second half of the proof of Lemma \ref{L:small_gap}, the bound on $P(F_n(t)-F_n(s)>\varepsilon)$ is proved similarly. $\Box$ \noindent{\bf Proof of Theorem \ref{T:main}.} By Corollary \ref{C:quant_CLT}, it will suffice to show that $\{F_n\}$ is relatively compact in $C[0,\infty)$. By Theorem \ref{T:moment}, we need only verify that \[ \sup_{n\ge n_0} P(|F_n(t) - F_n(s)| \ge \varepsilon) \le C\varepsilon^{-\alpha}|t - s|^{1+\beta}. \] Taking $\Delta=1/16$ and $p=9$ in Lemmas \ref{L:large_gap}, \ref{L:small_gap}, and \ref{L:medium_gap} shows that \[ P(|F_n(t) - F_n(s)| \ge \varepsilon) \le C\varepsilon^{-9}|t - s|^{9/8}, \] for all $n\ge n_0$, which completes the proof. $\Box$ \section*{Acknowledgments} The author is grateful for the efforts of three very thorough referees, whose comments and suggestions have greatly improved this manuscript. 
\appendix \section{Appendix} In this section, we have collected together the more routine and straightforward proofs, so as not to distract from the essential techniques of the main paper. We begin with the proof of Lemma \ref{L:quant_ODE}. \noindent{\bf Proof of Lemma \ref{L:quant_ODE}.} Standard results imply that for each $t_0>0$, there exists a neighborhood $I$ of $t_0$ on which \eqref{quant_ODE} has a unique solution $\widetilde q$ with $\widetilde q(t_0)=q(t_0)$. To show that $\widetilde q=q$ on $I$, it suffices to show that $g(t)=P(B(t)\le\widetilde q(t))$ satisfies $g=\alpha$ on $I$. Since $g(t_0)=\alpha$, it suffices to show $g'=0$ on $I$. For this, we simply differentiate \[ g(t) = \int_{-\infty}^{\widetilde q(t)} u(x,t)\,dx, \] which gives \[ g'(t) = u(\widetilde q(t),t)\widetilde q'(t) + \int_{-\infty}^{\widetilde q(t)}\partial_t u(x,t)\,dx = -\frac12\partial_x u(\widetilde q(t),t) + \frac12\int_{-\infty}^{\widetilde q(t)}\partial_x^2 u(x,t)\,dx = 0. \] Since $t_0>0$ was arbitrary, this shows that $q\in C^\infty(0,\infty)$ and satisfies \eqref{quant_ODE} for all $t>0$. To show that $q(t)\to q(0)$ as $t\to0$, we first show that $q(0)\le\liminf_{t\to0} q(t)$. The proof is by contradiction. Suppose not. Then there exists $\varepsilon>0$ and $t_n\downarrow0$ such that $q(t_n)<q(0)-\varepsilon$ for all $n$. Hence, \begin{multline*} \alpha = P(B(t_n) \le q(t_n)) \le P(B(t_n) \le q(0) - \varepsilon)\\ \to P(B(0) \le q(0) - \varepsilon) \le P(B(0) \le q(0)) = \alpha. \end{multline*} It follows that $P(B(0)\le q(0)-\varepsilon)=\alpha$, but this contradicts the uniqueness of the $\alpha$-quantile of the measure $f(x)\,dx$. The proof that $\limsup_{t\to0} q(t)\le q(0)$ is similar. $\Box$ We next present a proof of the quantile central limit theorem. The proof uses a multi-dimensional version of the Lindeberg-Feller theorem, which is stated below as Theorem \ref{T:Lind-Feller}. 
\begin{thm}\label{T:Lind-Feller} For each fixed $n$, let $\{X_{m,n}\}_{m=1}^n$ be independent, $\mathbb{R}^d$-valued random vectors with mean zero and covariance matrix $\sigma_{m,n}$. If \begin{enumerate}[(i)] \item $\sum_{m=1}^n \sigma_{m,n} \to \sigma$ as $n\to\infty$, and \item for each $\theta\in\mathbb{R}^d$ and each $\varepsilon>0$, $\sum_{m=1}^n E[|\theta\cdot X_{m,n}|^2 1_{\{|\theta\cdot X_{m,n}|>\varepsilon\}}] \to 0$ as $n\to\infty$, \end{enumerate} then $S_n=X_{1,n}+\cdots+X_{n,n}\Rightarrow N$, where $N$ is multi-normal with mean $0$ and covariance $\sigma$. \end{thm} \noindent{\textbf{Proof of Theorem \ref{T:quant_CLT}.} } Given $x,y\in\mathbb{R}^d$, we shall write $x\le y$ if $x(i)\le y(i)$ for all $i$. Fix $x\in\mathbb{R}^d$ and for each $n,m\in\mathbb{N}$, $1\le m\le n$, define the random vector $Y_{m,n}\in\mathbb{R}^d$ by \[ Y_{m,n}(j) = n^{-1/2}\left( 1_{\{X_m(j) \le n^{-1/2}x(j) + q(j)\}} - p_n(j)\right), \] where $p_n(j)=\Phi_j(n^{-1/2}x(j)+q(j))$. Then for each fixed $n\in\mathbb{N}$, $Y_{m,n}$, $1\le m\le n$, are independent, $EY_{m,n}=0$, and \begin{enumerate}[(i)] \item $\sum_{m=1}^n E[Y_{m,n}(i)Y_{m,n}(j)]\to G_{ij}(q(i),q(j)) - \alpha(i)\alpha(j)$ as $n\to\infty$, \item for each $\theta\in\mathbb{R}^d$ and $\varepsilon>0$, $\sum_{m=1}^n E[|\theta\cdot Y_{m,n}|^2 1_{\{|\theta\cdot Y_{m,n}|>\varepsilon\}}] \to 0$ as $n\to\infty$. \end{enumerate} Part (i) follows since \begin{multline*} \sum_{m=1}^n E[Y_{m,n}(i)Y_{m,n}(j)]\\ = n^{-1}\sum_{m=1}^n [P( X_m(i) \le n^{-1/2}x(i) + q(i), X_m(j) \le n^{-1/2}x(j) + q(j)) - p_n(i)p_n(j)]\\ = P(X(i) \le n^{-1/2}x(i) + q(i), X(j) \le n^{-1/2}x(j) + q(j)) - p_n(i)p_n(j), \end{multline*} and part (ii) follows since $|\theta\cdot Y_{m,n}|\le n^{-1/2} d \max(|\theta(1)|, \ldots,|\theta(d)|)$, and therefore $P(|\theta\cdot Y_{m,n}| > \varepsilon) = 0$ for sufficiently large $n$. 
Thus, by Theorem \ref{T:Lind-Feller}, $S_n=Y_{1,n}+\cdots+Y_{n,n}\Rightarrow\widetilde N$, where $\widetilde N$ is multinormal with mean $0$ and covariance \begin{equation}\label{quant_CLT.1} E[\widetilde N(i)\widetilde N(j)] = G_{ij}(q(i),q(j)) - \alpha(i)\alpha(j). \end{equation} Now, observe that the following are equivalent: \begin{enumerate}[(a)] \item $n^{1/2}(X_{\kappa(n):n}-q)\le x$, \item $X_{\kappa(n):n}(j)\le n^{-1/2}x(j) + q(j)$ for all $j$, \item $\sum_{m=1}^n 1_{\{X_m(j) \le n^{-1/2}x(j) + q(j)\}} \ge \kappa(n,j)$ for all $j$, \item $n^{-1/2}\sum_{m=1}^n(1_{\{X_m(j) \le n^{-1/2}x(j) + q(j)\}} - p_n(j)) \ge n^{-1/2}(\kappa(n,j) - n p_n(j))$ for all $j$. \end{enumerate} Thus, if $a_n\in\mathbb{R}^d$ is defined by $a_n(j)=n^{-1/2}(\kappa(n,j) - n p_n(j))$, then $P(n^{1/2}(X_{\kappa(n):n}-q)\le x)=P(S_n\ge a_n)$. Note that \[ a_n(j) = n^{1/2}(\alpha(j) + o(n^{-1/2}) - p_n(j)) = \frac{\Phi_j(q(j)) - \Phi_j(n^{-1/2}x(j) + q(j))}{n^{-1/2}} + o(1), \] so that $a_n\to a\in\mathbb{R}^d$, where $a(j)=-x(j)\Phi_j'(q(j))$. Therefore, \[ P(n^{1/2}(X_{\kappa(n):n}-q)\le x) \to P(\widetilde N \ge a) = P(\widetilde N \le -a) = P(N\le x), \] where $N$ is the random vector defined by $N(j)=\widetilde N(j)/\Phi_j'(q(j))$. Comparing \eqref{quant_CLT.1} and \eqref{quant_CLT}, this completes the proof. $\Box$ At certain points in Section \ref{S:param_est}, we use the fact that $F_n$ has well-behaved tail probabilities. The precise formulation of this fact is given below. \begin{prop}\label{P:quant_tails} Fix $T>0$. For each $r>0$, there exist constants $C$ and $n_0$ such that \[ P(|F_n(t)| > \lambda) \le C(\lambda^{-r} + P(|B(t) - q(t)|^2 > \lambda)), \] for all $n\ge n_0$, $0\le t\le T$, and $\lambda>0$. In particular, $\sup_n P(|F_n(0)|>\lambda) \to 0$ as $\lambda\to\infty$. \end{prop} \begin{remark}\label{R:quant_tails} Since $B$ is Gaussian, Proposition \ref{P:quant_tails} in fact shows that $P(|F_n(t)| > \lambda) \le C\lambda^{-r}$. 
Indeed,
\[
P(|B(t) - q(t)|^2 > \lambda) \le P(|B(t)| + M > \lambda^{1/2})
= 2P(B(t) \le -\lambda^{1/2} + M),
\]
where $M=\sup_{0\le t\le T}|q(t)|$. If $\lambda>4M^2$, then
\[
P(|B(t) - q(t)|^2 > \lambda) \le 2P(B(t) \le -\lambda^{1/2}/2)
= 2\Phi(-t^{-1/2}\lambda^{1/2}/2) \le 2\Phi(-T^{-1/2}\lambda^{1/2}/2),
\]
where $\Phi$ is the distribution function of the standard normal, which
satisfies $\Phi(-x)\le C_rx^{-r}$ for all $r>0$. Hence,
$P(|B(t) - q(t)|^2 > \lambda) \le 2^{2r+1}C_{2r}T^{r}\lambda^{-r}$ for
$\lambda$ sufficiently large.
\end{remark}

\noindent{\textbf{Proof of Proposition \ref{P:quant_tails}.}}
Fix $T>0$ and $r>0$. Since $u(q(0),0)>0$, there exists
$\varepsilon\in(0,1)$ such that
\[
m = \inf\{u(x,t): |x - q(t)| \le \varepsilon, 0 \le t \le T\} > 0.
\]
Since $j(n)/n=\alpha+o(n^{-1/2})$, we may choose $n_0 \ge 2m^{-1}$ such
that $|j(n)/n-\alpha|\le n^{-1/2}\varepsilon m/2$, for all $n\ge n_0$. Let
$n\ge n_0$, $t\in[0,T]$, and $\lambda>0$ be arbitrary. Note that we may
assume without loss of generality that $\lambda > 1+2m^{-1/2}$ and $r>2$.
Let us first assume $n^{-1/2}\lambda\le\varepsilon$. We begin with
\begin{align}
P(F_n(t) < -\lambda)
&= P\bigg(\sum_{j=1}^n 1_{\{B_j(t) < q(t) - n^{-1/2}\lambda\}}
\ge j(n)\bigg)\notag\\
&= P\bigg(n^{-1}\sum_{j=1}^n(1_{\{B_j(t) < q(t) - n^{-1/2}\lambda\}} - p_n)
\ge \frac{j(n)}n - p_n\bigg),\label{quant_tails.1}
\end{align}
where $p_n=p_n(\lambda)=P(B(t) < q(t) - n^{-1/2}\lambda)$. By the mean
value theorem, $\alpha - p_n\ge n^{-1/2}\lambda m$. Since
$|j(n)/n-\alpha|\le n^{-1/2}m/2\le n^{-1/2}\lambda m/2$, we have
\[
P(F_n(t) < -\lambda) \le P\bigg(n^{-1}\sum_{j=1}^n
(1_{\{B_j(t) < q(t) - n^{-1/2}\lambda\}} - p_n)
\ge n^{-1/2}\lambda m/2\bigg).
\]
By Chebyshev's inequality,
\[
P(F_n(t) < -\lambda) \le Cn^{-r/2}\lambda^{-r}E\bigg|\sum_{j=1}^n
(1_{\{B_j(t) < q(t) - n^{-1/2}\lambda\}} - p_n)\bigg|^r.
\] By Burkholder's inequality (see, for example, Theorem 6.3.10 in \cite{St}), \[ P(F_n(t) < -\lambda) \le Cn^{-r/2}\lambda^{-r}E\bigg|\sum_{j=1}^n |1_{\{B_j(t) < q(t) - n^{-1/2}\lambda\}} - p_n|^2\bigg|^{r/2}. \] Finally, by Jensen's inequality, \[ P(F_n(t) < -\lambda) \le C\lambda^{-r}E|1_{\{B(t) < q(t) - n^{-1/2}\lambda\}} - p_n|^r. \] Note that \begin{align*} E|1_{\{B(t) < q(t) - n^{-1/2}\lambda\}} - p_n|^r &= p_n(1 - p_n)^r + (1 - p_n)p_n^r\\ &= p_n(1 - p_n)((1 - p_n)^{r-1} + p_n^{r-1}) \le 2p_n. \end{align*} Hence, $P(F_n(t) < -\lambda)\le C\lambda^{-r}p_n(\lambda)\le C\lambda^{-r}$. Similarly, using the mean value theorem, \begin{align*} P(F_n(t) > \lambda) &= P\bigg(\sum_{j=1}^n 1_{\{B_j(t) > q(t) + n^{-1/2}\lambda\}} \ge n - j(n) + 1\bigg)\\ &\le P\bigg(n^{-1}\sum_{j=1}^n (1_{\{B_j(t) > q(t) + n^{-1/2}\lambda\}} - \overline p_n) \ge 1 - \frac{j(n)}n - \overline p_n\bigg)\\ &\le P\bigg(n^{-1}\sum_{j=1}^n (1_{\{B_j(t) > q(t) + n^{-1/2}\lambda\}} - \overline p_n) \ge n^{-1/2}\lambda m/2\bigg). \end{align*} where $\overline p_n=\overline p_n(\lambda)=P(B(t) > q(t) + n^{-1/2}\lambda)$. By Chebyshev, Burkholder, and Jensen, $P(F_n(t) > \lambda)\le C\lambda^{-r}\overline p_n(\lambda)\le C\lambda^{-r}$. This completes the proof in the case $n^{-1/2}\lambda\le\varepsilon$. We now assume $n^{-1/2}\lambda>\varepsilon$. In this case, $\alpha-p_n\ge P(q(t)-\varepsilon<B(t) <q(t))\ge\varepsilon m$. Hence, $j(n)/n-p_n\ge \varepsilon m/2$. Thus, starting from \eqref{quant_tails.1}, Chebyshev, Burkholder, and Jensen imply \[ P(F_n(t) < -\lambda)\le Cn^{-2r}E\bigg| \sum_{j=1}^n(1_{\{B_j(t) < q(t) - n^{-1/2}\lambda\}} - p_n)\bigg|^{2r} \le Cn^{-r}p_n(\lambda). \] Similarly, $P(F_n(t)>\lambda)\le Cn^{-r}\overline p_n(\lambda)$. Hence, \[ P(|F_n(t)| > \lambda) \le Cn^{-r}(p_n(\lambda) + \overline p_n(\lambda)) = Cn^{-r}P(|B(t) - q(t)| > n^{-1/2}\lambda). \] If $n>\lambda$, then $P(|F_n(t)|>\lambda)\le C\lambda^{-r}$, and we are done. 
If $n\le \lambda$, then \[ P(|F_n(t)| > \lambda) \le CP(|B(t) - q(t)| > \lambda^{1/2}) = CP(|B(t) - q(t)|^2 > \lambda), \] and this completes the proof. $\Box$ In the proof of Theorem \ref{T:RWrep}, we need the following formula for the distribution of the $j$-th order statistic. \begin{lemma}\label{L:RWrep} If $\{X_n\}$ is a sequence of iid copies of $X$, then for all $x\in \mathbb{R}$, \[ P(X_{j:n} \le x) = \int_{-\infty}^x j\binom{n}{j} \Phi(y)^{j-1}\overline\Phi(y)^{n-j}\,d\Phi(y), \] where $\Phi(x)=P(X\le x)$ and $\overline\Phi=1-\Phi$. \end{lemma} \noindent{\bf Proof.} Recall that $\Phi$ is continuous. Hence, if $g$ is absolutely continuous on $[0,1]$, then \[ g(\Phi(x)) = g(0) + \int_{-\infty}^x g'(\Phi(y))\,d\Phi(y). \] (See Exercise 3.36(b) in \cite{Fo99}, for example.) We therefore have \[ \begin{split} P(X_{j:n} \le x) &= P\bigg(\sum_{i=1}^n 1_{\{X_i\le x\}} \ge j\bigg) = \sum_{k=j}^n P\bigg(\sum_{i=1}^n 1_{\{X_i\le x\}} = k\bigg) = \sum_{k=j}^n \binom{n}{k}\Phi(x)^k\overline\Phi(x)^{n-k}\\ &= \sum_{k=j}^n \binom{n}{k}\int_{-\infty}^x (k\Phi(y)^{k-1}\overline\Phi(y)^{n-k} - (n - k)\Phi(y)^k\overline\Phi(y)^{n-k-1})\,d\Phi(y). \end{split} \] We can rewrite this as \begin{multline*} P(X_{j:n} \le x) = \int_{-\infty}^x\bigg[ \sum_{k=j}^n k\binom{n}{k}\Phi(y)^{k-1}\overline\Phi(y)^{n-k}\\ - \sum_{k=j+1}^n(n - k + 1)\binom{n}{k-1} \Phi(y)^{k-1}\overline\Phi(y)^{n-k}\bigg]\,d\Phi(y). \end{multline*} Observing that \[ (n - k + 1)\binom{n}{k-1} = k\binom{n}{k}, \] completes the proof. $\Box$ The following lemma is needed in the proof of Theorem \ref{T:RWest}. \begin{lemma}\label{L:RWest} For each $r\ge1$, there exist constants $C>0$ and $n_0\in\mathbb{N}$ such that \begin{equation}\label{RWest} E\bigg|\sum_{i=1}^n(1_{\{U_i \le p\}} - p)\bigg|^{2r} \le C((np)^r \vee (np)), \end{equation} for all $n\ge n_0$ and all $p\in[0,1]$. Note that $C$ depends only on $r$, and not on $n$ or $p$. 
\end{lemma} \begin{remark} If $r<1$, then Lemma \ref{L:RWest}, together with Jensen's inequality, imply that the expectation in \eqref{RWest} is bounded above by $C(np)^r$. \end{remark} \noindent{\textbf{Proof of Lemma \ref{L:RWest}.} } First assume that $r\in\mathbb{N}$. Let $\xi=1_{\{ U_i\le p\}}-p$ and $\varphi(t)=E[e^{it\xi}]$. Then \[ E\bigg|\sum_{i=1}^n(1_{\{U_i \le p\}} - p)\bigg|^{2r} = (-1)^r\frac{d^{2r}}{dt^{2r}}(\varphi(t)^n)\bigg|_{t=0}. \] By Fa\`a di Bruno's formula (see \cite{Jo}, for example), if $n > 2r$, then \[ \frac{d^{2r}}{dt^{2r}}(\varphi(t)^n) = \sum \frac{(2r)!}{b_1!b_2!\cdots b_{2r}!} \frac{n!}{(n-k)!}\varphi(t)^{n-k} \left({\frac{\varphi'(t)}{1!}}\right)^{b_1} \left({\frac{\varphi''(t)}{2!}}\right)^{b_2}\cdots \left({\frac{\varphi^{(2r)}(t)}{(2r)!}}\right)^{b_{2r}}, \] where the sum is over all different solutions in nonnegative integers $b_1, \ldots,b_{2r}$ of $b_1+2b_2+3b_3+\cdots+2rb_{2r}=2r$, and $k=b_1+\cdots+b_{2r}$. Let us take $t=0$ and observe that, since $\varphi'(0)=0$, every nonzero summand has $b_1=0$, which implies \[ k = \frac{2b_2 + 2b_3 + 2b_4 + \cdots + 2b_{2r}}2 \le \frac{2b_2 + 3b_3 + 4b_4 + \cdots + 2rb_{2r}}2 = r. \] Also, $\varphi(0)=1$ and $|\varphi^{(j)}(0)|=|E\xi^j|\le2p$. Therefore, \[ \bigg|\frac{d^{2r}}{dt^{2r}}(\varphi(t)^n)\bigg|_{t=0}\bigg| \le C_r\sum_{k=1}^r n^k p^{b_2+b_3+\cdots+b_{2r}} = C_r\sum_{k=1}^r (np)^k. \] Taking into account the two possibilities, $np<1$ and $np\ge1$, gives us \eqref{RWest}. Now consider the general case, $r\in[1,\infty)$. First assume $np>1$. Choose an integer $k>r$. Then \eqref{RWest}, together with Jensen's inequality, give \[ E\bigg|\sum_{i=1}^n(1_{\{U_i \le p\}} - p)\bigg|^{2r} \le \left({ E\bigg|\sum_{i=1}^n(1_{\{U_i \le p\}} - p)\bigg|^{2k} }\right)^{r/k} \le C(np)^r. \] Next assume $np\le1$. Choose positive integers $k,\ell$ such that $\ell\le r\le k$. 
Then \begin{multline*} E\bigg|\sum_{i=1}^n(1_{\{U_i \le p\}} - p)\bigg|^{2r} \le E\bigg|\sum_{i=1}^n(1_{\{U_i \le p\}} - p)\bigg|^{2\ell} + E\bigg|\sum_{i=1}^n(1_{\{U_i \le p\}} - p)\bigg|^{2k}\\ \le C_1(np) + C_2(np) = C(np), \end{multline*} which completes the proof. $\Box$ Finally, we present the proof of Lemma \ref{L:large_gap}, which establishes the needed estimates for tightness in the large gap regime. \noindent{\textbf{Proof of Lemma \ref{L:large_gap}.} } Fix $T>0$, $\Delta\in(0,1/2)$, and $p>2$. Without loss of generality, assume $s<t$ and define $\delta=t-s$. Note that it is sufficient to prove that there exists a constant $\delta_0>0$ such that the lemma holds whenever $\delta\le\delta_0$. Let $M=\sup_{0\le t\le T}|q'(t)|$ and define $\delta_0=(M^{-2}\wedge2)/4$. Note that, for a.e. $\omega\in\Omega$, $\#\{j:B_j(s,\omega)\le Q_n(s, \omega)\}=j(n)$. Hence, if $B_j(t,\omega)\le B_j(s,\omega) + n^{-1/2}\varepsilon + q(t) - q(s)$, for all $j$, then $\#\{j:B_j(t,\omega)\le Q_n(s,\omega) + n^{-1/2}\varepsilon + q(t) - q(s)\}\ge j(n)$. In other words, up to a set of measure zero, \[ \bigcap_{j=1}^n\{B_j(t) \le B_j(s) + n^{-1/2}\varepsilon + q(t) - q(s)\} \subset \{Q_n(t) \le Q_n(s) + n^{-1/2}\varepsilon + q(t) - q(s)\}. \] We therefore have \begin{align*} P(F_n(t) - F_n(s) > \varepsilon) &= P(Q_n(t) - Q_n(s) > n^{-1/2}\varepsilon + q(t) - q(s))\\ &\le P\bigg(\bigcup_{j=1}^n \{B_j(t) - B_j(s) > n^{-1/2}\varepsilon + q(t) - q(s)\}\bigg)\\ &\le nP(B(t) - B(s) > n^{-1/2}\varepsilon + q(t) - q(s))\\ &\le nP(B(t) - B(s) > n^{-1/2}\varepsilon - M|t - s|). \end{align*} Recalling $\overline\Phi$, defined below \eqref{Psi_Taylor1}, this gives \[ P(F_n(t) - F_n(s) > \varepsilon) \le n\overline\Phi(n^{-1/2}\varepsilon\delta^{-1/2} - M\delta^{1/2}). \] Note that $M\delta^{1/2}\le M\delta_0^{1/2}\le1/2\le(1/2)\delta^{-\Delta}$ and $n^{-1/2}\varepsilon\delta^{-1/2}\ge\delta^{-\Delta}$. 
Thus, using $\overline\Phi(x)\le C_rx^{-r}$ with $r=\Delta^{-1}(1+p/4)$, we have \[ P(F_n(t) - F_n(s) > \varepsilon) \le nC_r2^r\delta^{r\Delta} \le \varepsilon^2\delta^{-1+\Delta}C_r2^r\delta^{1+p/4} = C_{p,\Delta}\varepsilon^2\delta^\Delta|t-s|^{p/4}, \] which is in fact a sharper bound than necessary. (The second inequality above follows from the assumption in the statement of the lemma that $n^{-1/2}\varepsilon\ge |t-s|^{1/2-\Delta}$.) The bound on $P(F_n(t) - F_n(s) < -\varepsilon)$ is obtained similarly. $\Box$ \end{document}
\begin{document}
\begin{abstract}
We show that the minimal number of critical points of a function on a given
closed manifold $M$ of dimension at least $6$ is the same as the minimal
number of elements in a Singhof-Takens filling of $M$ by smooth balls with
corners.
\end{abstract}
\maketitle

\section{Introduction}

Given a smooth function $f: M\to \R$ on a manifold $M$, a point $x$ in $M$
is said to be \emph{critical} if the differential $d_xf$ of $f$ at $x$ is
trivial. In this paper we explore the relationship between the least number
of critical points among all smooth functions on a smooth closed manifold
$M$, and the least number of elements in a Singhof-Takens filling of $M$ by
smooth balls with corners.

The Lusternik-Schnirelmann category ${\mathop\mathrm{cat}}(X)$ of a
topological space $X$ is the least integer $n$ such that the space $X$
admits a covering by $n+1$ open subspaces, each of which is contractible in
$X$ to a point. By the Lusternik-Schnirelmann theorem~\cite{LS34}, when $X$
is a closed manifold, the numeric invariant ${\mathop\mathrm{cat}}(X)$ gives
a lower bound for the number of critical points of any smooth function on
$X$, namely,
\begin{equation}\label{eq:1}
{\mathop\mathrm{cat}}(X) + 1 \le \Crit (X).
\end{equation}
We note that the differential-geometric invariant $\Crit(X)$ is hard to
compute in general, while the numeric invariant ${\mathop\mathrm{cat}}(X)$
is a homotopy invariant, which, at least in some cases, can be computed by
means of homotopy theoretic methods, e.g., see \cite{CLOT}. We will improve
this estimate by replacing ${\mathop\mathrm{cat}}(X)$ with a numeric
invariant associated with Singhof-Takens fillings.

A Singhof-Takens filling~\cite{Ta68, Si79} of a closed smooth manifold $M$
of dimension $m$ is essentially a covering of $M$ by compact smooth
submanifolds of dimension $m$. The compact submanifolds may have corners,
while the interiors of the covering submanifolds are required to be
disjoint.
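A minimal example may help fix ideas; the following observation is standard and is our own illustration rather than a quotation from the sources cited above.

```latex
% Illustration: a Singhof-Takens filling of the sphere by two smooth balls.
Consider the unit sphere $S^m\subset\R^{m+1}$ and its closed upper and
lower hemispheres
\[
D_{\pm} = \{x\in S^m : \pm x_{m+1}\ge 0\}.
\]
Each hemisphere is diffeomorphic to the closed $m$-ball, the two
hemispheres cover $S^m$, and their interiors are disjoint. Hence
$\{D_+, D_-\}$ is a Singhof-Takens filling of $S^m$ by two smooth balls.
```

Together with the height function on $S^m$, which has exactly two critical points, this example already exhibits, in the simplest case, the equality between the minimal number of balls in a filling and the minimal number of critical points that we establish in general.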
Of particular interest are Singhof-Takens fillings by contractible manifolds, smooth balls with corners, and topological balls, which are smooth manifolds homeomorphic to balls, see Remark~\ref{rem:CWhi}. The least number $n$ such that $M$ admits a Singhof-Takens filling by $n+1$ smooth (respectively, topological) balls is denoted by ${\mathop\mathrm{Bcat}} (M)$ (respectively, ${\mathop\mathrm{Bcat}}^{\T}(M)$). If $M$ is a compact smooth manifold with boundary, then ${\mathop\mathrm{Bcat}}(M)$ (respectively, ${\mathop\mathrm{Bcat}}^{\T}(M)$) is the least number $n$ such that $M$ admits a relative filling by $n+1$ smooth (respectively, topological) balls with corners. We also require the function $f$ minimizing the number of critical points on $M$ in the definition of $\Crit (M)$ to be constant over $\partial M$ and have no critical points in a collar neighborhood $\partial M\times [0,1]$ of the boundary. We will show that ${\mathop\mathrm{Bcat}}(M)$ and ${\mathop\mathrm{Bcat}}^\T (M)$ are closely related to $\Crit(M)$. Our main result is the following theorem. \begin{theorem} \label{Billy} Let $M$ be a smooth closed manifold of dimension at least $6$. Then $1+{\mathop\mathrm{Bcat}}^\T(M) =\Crit (M)$. \end{theorem} We note that the statement of Theorem~\ref{Billy} is true for manifolds $M$ with $\dim (M)=2$. In the case where $\dim (M)=3$ the conclusion of Theorem~\ref{Billy} also remains true; it is the Takens theorem \cite{Ta68}. Since clearly ${\mathop\mathrm{cat}}(M)\le {\mathop\mathrm{Bcat}}^\T(M)$, Theorem~\ref{Billy} implies the Lusternik-Schnirelmann estimate. Theorem~\ref{Billy} should be compared to the well-known Cornea theorem~\cite{Co98}, in which a manifold $M$ is decomposed into cones rather than balls and where functions satisfy certain conditions. \begin{theorem}[Cornea, 1997] Let $f$ be a smooth function on a smooth compact manifold $M$ such that $f$ is constant, maximal, and regular on $\partial M$.
Then $\mathop\mathrm{cl}(f)+1\le \Crit^\bullet(M)$, where $\mathop\mathrm{cl}(f)$ is the cone length of $M$. \end{theorem} For manifolds of arbitrary dimension we have a general estimate. \begin{theorem}\label{Susan} Let $M$ be a smooth closed manifold. Then $1+{\mathop\mathrm{Bcat}}^{\T}(M) \le \Crit(M) \le 1+ {\mathop\mathrm{Bcat}}(M)$. \end{theorem} We note that Theorem~\ref{Susan} implies Theorem~\ref{Billy} since every smooth manifold of dimension at least 6 homeomorphic to a ball is actually diffeomorphic to a ball, and therefore ${\mathop\mathrm{Bcat}}^\T(M)={\mathop\mathrm{Bcat}}(M)$ for manifolds $M$ of dimension at least $6$. In section~\ref{s:2} we review Singhof-Takens theory and introduce relative fillings. The proof of Theorem~\ref{Susan} relies on the fact that every isolated critical point is pseudoalgebraic \cite{Sa20}. In section~\ref{s:3} we state the definitions and relevant results from \cite{Sa20}. In section~\ref{s:ls5}, we establish the lower bound for $\Crit(M)$ of Theorem~\ref{Susan}, see Theorem~\ref{th:13lower}. The upper bound for $\Crit(M)$ of Theorem~\ref{Susan} is proved in section~\ref{s:5}, see Theorem~\ref{th:18upper}. \section{Singhof-Takens fillings}\label{s:2} A continuous map $f\colon X\to Y$ of a subset $X\subset \R^n$ to a subset $Y\subset \R^n$ is said to be \emph{smooth} if it extends to a smooth map from an open neighborhood of $X$ to an open neighborhood of $Y$. A smooth map $f\colon X\to Y$ of subsets of $\R^n$ is a \emph{diffeomorphism} if it is a homeomorphism and its inverse is smooth. Let $Q$ be a topological space. A \emph{coordinate $n$-chart} $(U, \varphi)$ on $Q$ consists of an open subset $U$ of $[0, \infty)^k\times \R^{n-k}$ for some $k\in \{0, ..., n\}$ and a homeomorphism $\varphi\colon U \to Q$ onto image. 
An \emph{$n$-atlas} on $Q$ is a maximal collection $\{(U_\alpha, \varphi_\alpha)\}$ of $n$-charts on $Q$ such that $Q=\cup \varphi_\alpha(U_\alpha)$ and the transition maps \[ \varphi_{\beta}^{-1}\circ \varphi_{\alpha}\colon \varphi_{\alpha}^{-1}(\varphi_\alpha(U_\alpha)\cap \varphi_{\beta}(U_\beta))\longrightarrow \varphi_{\beta}^{-1}(\varphi_\alpha(U_\alpha)\cap \varphi_{\beta}(U_\beta)) \] are diffeomorphisms for all $\alpha$ and $\beta$. A second-countable Hausdorff space $Q$ with a maximal $n$-atlas is said to be a \emph{manifold with corners} of dimension $n$. If a point $x$ in a manifold with corners belongs to the image $\varphi(U)$ of a chart of the form $U\subset [0,\infty)^k\times \R^{n-k}$, then we say that the order of the corner at $x$ is at most $k$. It follows that the set of points of order $k$ in a manifold with corners $Q$ is a smooth manifold itself. We call this smooth manifold a \emph{face} of dimension $n-k$. In particular, the face of dimension $n$ is the interior of $Q$, while all other faces are parts of the boundary $\partial Q$. Faces of dimension less than $n$ form a \emph{canonical} stratification of the boundary $\partial Q$. Let $Q$ be a manifold with corners. Let $\Sigma\subset \partial Q$ denote the set of points of the boundary at which the boundary is not smooth. Suppose that $Q'$ is a smooth manifold with (smooth) boundary. Let $f\colon Q\to Q'$ be a homeomorphism which restricts to a diffeomorphism $Q\setminus \Sigma\to Q'\setminus f(\Sigma)$. Then we say that $f$ as well as $f^{-1}$ is a \emph{special almost diffeomorphism}. In particular, every diffeomorphism is a special almost diffeomorphism. A filling of a smooth manifold $M$ is a certain decomposition of $M$ into manifolds with corners $Q_1, ..., Q_N$. The number $N$ is said to be the \emph{order} of the filling. 
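To illustrate the notion of a manifold with corners, consider the quadrant $Q=[0,\infty)^2$, a manifold with corners of dimension $2$. Writing $\mathrm{ord}(x)$ for the order of the corner at a point $x$, we have
\[
\mathrm{ord}(0,0)=2,\qquad \mathrm{ord}(x,0)=\mathrm{ord}(0,y)=1\ \ (x,y>0),\qquad \mathrm{ord}(x,y)=0\ \ (x,y>0).
\]
The corresponding faces are the vertex of dimension $0$, the two open coordinate rays of dimension $1$, and the open quadrant, which is the interior; the faces of dimension $0$ and $1$ stratify the boundary $\partial Q$.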
Fillings of order $3$ of closed manifolds were introduced by Takens in \cite{Ta68}, and later the notion was extended to that of arbitrary order by Singhof~\cite{Si79}. \begin{definition}\label{Filling} Let $M$ be a closed smooth manifold of dimension $n$. A \emph{filling} of $M$ is a family of compact codimension $0$ submanifolds with corners $Q_1,..., Q_N$ of $M$ such that the following conditions are satisfied. \begin{itemize} \item[P1:] $M=Q_1\cup \cdots \cup Q_N$, i.e., $\{Q_i\}$ is a covering of $M$ by compact subsets. \item[P2:] The interiors of submanifolds $Q_i$ and $Q_j$ are disjoint for all $i\ne j$. \item[P3:] Given a point $z\in M$, let $k_1<\cdots <k_\nu$ be the indices $i$ of submanifolds $Q_i$ containing $z$, $\nu\le n+1$. If $\nu\ge 2$, then we require that there is a coordinate $n$-chart $D$ about $z$ in $M$ such that for each $j<\nu$ \[ D\cap Q_{k_{j}}=\{x_1\ge 0, ..., x_{j-1}\ge 0, x_{j}\le 0\}, \] and \[ D\cap Q_{k_{\nu}}=\{x_1\ge 0, ..., x_{\nu-1}\ge 0\}. \] \end{itemize} We say that $D$ is a \emph{special} coordinate chart about the point $z$ with respect to the filling $\{Q_i\}$. \end{definition} In this paper we will consider only smooth fillings, i.e., fillings by smooth submanifolds $Q_i$ with corners. \begin{remark} The submanifolds $Q_i$ in a filling $\{Q_i\}$ are ordered. In fact, the order of the submanifolds $Q_i$ is essential. In particular, if $Q_1,..., Q_N$ is a filling of a manifold $M$, then $Q_{\sigma(1)}, ..., Q_{\sigma(N)}$ may not be a filling, where $\sigma$ is a permutation of $N$ elements. \end{remark} We say that a filling that consists of $N$ submanifolds $Q_i$ is \emph{categorical} if $N-1={\mathop\mathrm{cat}}(M)$, and each submanifold $Q_i$ is contractible in $M$.
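For example, the sphere $S^n$ admits a filling of order $2$ by its closed upper and lower hemispheres $Q_1$ and $Q_2$. At every point $z$ of the equator $Q_1\cap Q_2$ there is a coordinate chart $D$ with
\[
D\cap Q_1=\{x_1\le 0\} \qquad \mathrm{and} \qquad D\cap Q_2=\{x_1\ge 0\},
\]
so that property (P3) holds with $\nu=2$. Since each hemisphere is contractible and ${\mathop\mathrm{cat}}(S^n)=1$, this filling is categorical.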
It is known (e.g., see \cite[Proposition 1.10]{CLOT}) that if $X$ is a normal ANR, then the closed Lusternik-Schnirelmann category ${\mathop\mathrm{cat}}^{cl}X$ of $X$ agrees with the standard (open) Lusternik-Schnirel\-mann category, i.e., ${\mathop\mathrm{cat}} X ={\mathop\mathrm{cat}}^{cl}X$. Thus, if a manifold $M$ admits a filling of order $N$ by submanifolds contractible in $M$, then $N-1\ge {\mathop\mathrm{cat}}(M)$. On the other hand, every closed manifold admits a categorical filling, see \cite[Proposition 3.5]{Si79} and \cite[Proposition 2.4]{Ta68}, and therefore the inequality is sharp for contractible fillings. We will also need the notion of a relative filling. Let $M$ be a compact manifold of dimension $n$ with a non-empty boundary $\partial M$. Then the union \[ \tilde M= M \sqcup (\partial M\times [0,1])/_{\partial M\sim \partial M\times \{0\}} \] has a unique smooth structure that agrees with the smooth structures on $M$ and $\partial M\times [0,1]$. \begin{definition}\label{RelFilling} Let $M$ be a compact manifold with boundary of dimension $n$. We say that a family $Q_1,..., Q_N$ of compact submanifolds with corners of $M$ is a \emph{relative filling} of $M$ if the decomposition $\tilde M=Q'_1\cup Q'_2\cup \cdots \cup Q'_N\cup Q'_{N+1}$, where $Q_i'=Q_{i-1}$ for $i=2,..., N+1$, and $Q'_{1}=\partial M \times [0,1]$, satisfies the properties $(P1)$ and $(P2)$ of Definition~\ref{Filling} as well as the property $(P3)$ for all points $z\in \tilde M\setminus \partial \tilde M$. We say that the relative filling $Q_1\cup\cdots \cup Q_N$ of $M$ is \emph{categorical} if $Q_i$ is contractible for $i=1,..., N$, and $N-1={\mathop\mathrm{cat}}(M)$. \end{definition} \begin{remark} We note that $Q'_{1}=\partial M \times [0,1]$ is not necessarily contractible. \end{remark} Many properties of relative fillings of compact manifolds with boundary are similar to the corresponding properties of fillings of closed manifolds.
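For instance, the disc $D^n$ admits a relative filling of order $1$ with $Q_1=D^n$. In this case $\tilde M$ is a larger disc, $Q'_1=\partial D^n\times[0,1]$ is a collar, and at every point $z$ of $Q'_1\cap Q'_2$ there is a special coordinate chart $D$ with
\[
D\cap Q'_1=\{x_1\le 0\} \qquad \mathrm{and} \qquad D\cap Q'_2=\{x_1\ge 0\},
\]
so that the property $(P3)$ is satisfied.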
\begin{lemma} Every compact manifold $M$ with boundary admits a categorical relative filling. \end{lemma} \begin{proof} The proof is similar to that of \cite[Proposition 3.5]{Si79}. \end{proof} Suppose now that a manifold $M=M_1\cup M_2$ is obtained from two compact manifolds with boundary $M_1$ and $M_2$ by identifying some of their boundary components. In general, the union of a filling of $M_1$ and a filling of $M_2$ does not comprise a filling of $M$. However, the union of fillings of $M_1$ and $M_2$ does comprise a filling of $M$ in certain cases. \begin{remark}\label{r:8} Let $M_1$ and $M_2$ be two compact manifolds with fillings. For $i=1,2$, let $\partial_iM$ denote a union of some of the path components of the boundary of $M_i$. Suppose that $\partial_1M$ is diffeomorphic to $\partial_2M$. Furthermore, suppose that the filling of $M_2$ is given by a decomposition $Q_1\cup \cdots \cup Q_N$ into submanifolds with corners such that if two points $x, y$ belong to the same path component of $\partial _2M$, then $x$ and $y$ belong to the same submanifold with corners $Q_i$. Let $M$ be a manifold obtained from $M_1$ and $M_2$ by identifying $\partial_1M$ with $\partial_2M$. Then the union of the fillings of $M_1$ and $M_2$ comprises a filling of $M=M_1\cup M_2$, at least for some choice of indexing of the submanifolds with corners of $M_1\cup M_2$. \end{remark} \section{Isolated critical points}\label{s:3} We now recall some definitions and results from \cite{Sa20}. Let $M$ be a manifold and $f:M\to \R$ be a smooth function with at most finitely many critical points. We say that a critical point of $f$ is isolated if it has a neighborhood that contains no other critical points of $f$. Let $x_0\in M$ be an isolated critical point of $f$. Without loss of generality we may assume that $f(x_0)=0$.
By slightly perturbing $f$ without changing the number of critical points of $f$, we may assume that $x_0$ is the only critical point in $f^{-1}(0)$, and that $0$ is the unique critical value in $[-c, c]$, where $c$ is a positive real number. We choose a Riemannian metric on $M$. This metric induces a gradient flow $\gamma_t$ on $M$ such that the trajectory $t\to \gamma_t(y)$ of any point $y\in M$ is a curve in $M$. We say that a trajectory is closed if the curve is a closed subset of $M$. The trajectories are either closed in $M$ or one of the limits of $\gamma_t(y)$ is a critical point of $f$. Define $D\subset M$ to be the union of all critical points of $f$ and all non-closed trajectories. For a critical point $x$, let $D(x)$ be the space of all points $y$ such that for the trajectory $t\to \gamma_t(y)$, we have $\lim\limits_{t\to\infty} \gamma_t(y)=x$ or $\lim\limits_{t\to-\infty} \gamma_t(y)=x$. We note that the point $x$ is itself in $D(x)$. Let $M_a$ denote the level set $f^{-1}(a)$ for any value $a\in \R$. Let $D_a=D\cap M_a$, and similarly let $D_a(x)=D(x)\cap M_a$. The gradient flow induces a diffeomorphism \[ h_{a,b}:M_a\backslash D_a\to M_b\backslash D_b \] for every pair $(a,b)$ of real numbers, where $-c\le a<b\le c$. We adopt the convention that $h_{b,a}(z)=h^{-1}_{a,b}(z)$ for $z\in M_b\backslash D_b$. The following is proved in \cite[Lemma 2]{Sa20}. \begin{lemma}\label{Oliver} There is a closed neighborhood $U$ of $D_c(x_0)$ in the hypersurface $M_c$ and a continuous function $g_1$ on $U$ such that \begin{itemize} \item $D_c(x_0)$ is a deformation retract of $U$, \item the boundary $\partial U$ is smooth, \item the function $g_1|(U\backslash D_c(x_0))$ is smooth, \item $g_1\geq 0$ over $U$, \item $g_1^{-1}(0)=D_c(x_0)$, and \item $g_1$ has no singular points on $\partial U$, and, furthermore, the gradient vector field of $g_1$ over $\partial U$ is outward normal.
\end{itemize} \end{lemma} Let $V=h_{c,0}(U\backslash D_c(x_0))\cup \{x_0\}$ be a closed neighborhood of $x_0$ in $M_0$. Let $M_{[-c,c]}$ be the submanifold of $M$ of points $x$ such that $f(x)\in [-c,c]$. Let $H\subset M_{[-c,c]}$ be the space of all gradient curves in $M_{[-c,c]}$ that intersect $V$ or have $x_0$ as a limit. The set $H$ is a smooth manifold with corners \cite[Lemma 9]{Sa20} homeomorphic to a disc \cite[Corollary 15]{Sa20}. \section{The Lusternik-Schnirelmann estimate by ${\mathop\mathrm{Bcat}}^\T$}\label{s:ls5} Given a closed smooth manifold $M$, we recall that ${\mathop\mathrm{Bcat}}^\T(M)$ is the least number $n$ such that $M$ admits a Singhof-Takens filling by $n+1$ topological balls. We also recall that $\Crit(M)$ is the least number of critical points for any smooth function on $M$. In this section we will establish the Lusternik-Schnirelmann type inequality \[{\mathop\mathrm{Bcat}}^\T (M) + 1\le \Crit(M).\] \begin{lemma}\label{Kyle} Let $f:M\to \R$ be a smooth function on a closed manifold of dimension $n$. Suppose that $x_0$ is a critical point at which $f$ assumes a unique global minimum. Let $c$ be a number such that $f$ has no critical values in the interval $(f(x_0),c]$. Then the compact manifold $M_{(-\infty,a]}$ is homeomorphic to a disc for any $a\in(f(x_0),c)$. \end{lemma} \begin{proof} The manifold $M_{(-\infty,a]}$ is contractible, as there is a deformation of $M_{(-\infty,a]}$ to a point along the negative gradient flow of $f$. In fact, the manifold $M_{(-\infty,a]}$ is homeomorphic to a cone over $f^{-1}(a)=M_{a}$. Since $M$ is a manifold of dimension $n$, the group $H_i(M, M\setminus \{x_0\})=H_i(M_{(-\infty,a]}, M_{(-\infty,a]}\setminus \{x_0\})$ is isomorphic to $\Z$ in dimension $i=n$ and is trivial otherwise. By the long exact sequence of the pair, we deduce that $M_{a}$ is a homology $(n-1)$-sphere.
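In more detail, the space $M_{(-\infty,a]}$ is contractible, while $M_{(-\infty,a]}\setminus \{x_0\}$ deformation retracts onto $M_a$, since a cone with its vertex removed retracts onto its base. Hence the long exact sequence of the pair yields
\[
\tilde H_{i-1}(M_a)\cong H_{i}(M_{(-\infty,a]}, M_{(-\infty,a]}\setminus \{x_0\})\cong
\begin{cases}
\Z, & i=n,\\
0, & i\ne n,
\end{cases}
\]
so that $\tilde H_{i}(M_a)$ is isomorphic to $\Z$ for $i=n-1$ and vanishes otherwise.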
If $n<3$, then $M_a$ is a sphere, and therefore $M_{(-\infty,a]}$ is homeomorphic to a disc. Suppose that $n\ge 3$. Let us show that $M_a$ is simply connected, and, consequently, $M_a$ is a homotopy sphere. To this end, let $\gamma$ be any loop on $M_a$. We will show that it is contractible in $M_a$. Since $M_{(-\infty,a]}$ is contractible, there is a disc $D$ in $M_{(-\infty,a]}$ bounded by $\gamma$. Furthermore, since $n\ge 3$, by slightly perturbing $D$ we may assume that $x_0$ is not in $D$. Since $M_{(-\infty,a]}$ is homeomorphic to a cone over $M_a$, there is a projection of $M_{(-\infty,a]}\setminus \{x_0\}$ to $M_a$. It takes the disc $D$ to $M_a$, producing a map of a disc extending the inclusion of $\gamma=\partial D$. Therefore, the manifold $M_a$ is simply connected. Thus $M_a$ is a homotopy $(n-1)$-sphere. Finally, by the generalized topological Poincar\'e conjecture, $M_a$ is homeomorphic to a sphere, and, therefore, the cone $M_{(-\infty,a]}$ is homeomorphic to a disc. \end{proof} Let $M$ be a compact manifold with corners, and let $Q_1,\ldots, Q_n$ be a filling of $M$. We denote by $Q_{i,j}$ the union of $(j+1)$-faces of the element $Q_i$ of the filling, i.e., $Q_{i,0}$ is the complement in $\partial Q_i$ to the corners of $\partial Q_i$, and $Q_{i,j+1}$ is the complement in $\partial Q_i\backslash \left(\bigcup\limits_{k\le j} Q_{i, k}\right)$ to the corners in $\partial Q_i\backslash \left(\bigcup\limits_{k\le j} Q_{i, k}\right)$ for $j\ge 0$. \begin{lemma}\label{Emma} Let $N$ be a smooth closed manifold of dimension $d$. Let $Q_1,\ldots, Q_n$ be a filling of $N$. Let $K$ be a smooth compact submanifold of $N$ of dimension $d$ such that $\partial K$ is transverse to each stratum $Q_{i,j}$ of the canonical stratification of each $\partial Q_i$. Put $S_i=\overline{Q_i\backslash K}$ for $i\in\{1,2,\ldots, n\}$. Then the sets $K, S_1, \ldots, S_n$ comprise a filling of $N$. \end{lemma} \begin{proof} We note that the sets $K, S_1, ..., S_n$ cover the manifold $N$, and the interiors of the sets $K, S_1,..., S_n$ are disjoint. Therefore, the sets $K, S_1,..., S_n$ satisfy the first two properties of a filling. Let us now verify Property 3. To begin with, let $z$ be a point in $N\setminus K$. Suppose that $z\in Q_{i_1}\cap Q_{i_2}\cap\ldots\cap Q_{i_m}$ for $1<m\leq n$, but $z$ is not in $Q_{i_1}\cap Q_{i_2}\cap\ldots\cap Q_{i_m}\cap Q_{i}$ for any $i\notin \{i_1,..., i_m\}$.
Since $\{Q_i\}$ is a filling, we deduce that there is a small coordinate disc neighborhood $D$ about $z$ in $N\setminus K$ such that \begin{align*} &D\cap Q_{i_1}=\{x_1\leq 0\},\\ &D\cap Q_{i_2}=\{x_1\geq 0, x_2\leq 0\},\\ &\cdots\\ &D\cap Q_{i_m}=\{x_1\geq 0, x_2\geq 0, \ldots, x_{m-1}\geq 0\}. \end{align*} Since $D\cap Q_i=D\cap S_i$, we deduce that Property 3 is satisfied for the point $z$. If $z$ is in the interior of $K$, then Property 3 is again clearly satisfied. Suppose now that $z$ is on the boundary of $K$, the point $z$ is in $Q_{i_1}, ..., Q_{i_m}$, and $z\notin Q_i$ for $i\notin\{i_1,..., i_m\}$. Again, there is a coordinate disc neighborhood $D$ about $z$ such that the intersections of $D$ with $Q_{i_1}, ..., Q_{i_m}$ are as above. The dimension of the manifold \[ Q = \{ x\in Q_{i_1}\cap Q_{i_2}\cap\ldots\cap Q_{i_m} \ |\ x\notin Q_{i_1}\cap Q_{i_2}\cap\ldots\cap Q_{i_m}\cap Q_{i} \quad \mathrm{for}\quad i\notin \{i_1, ..., i_m\}\} \] is $d-m+1$. Therefore, if $z\in \partial K$, then $d\ge m$, since $\partial K$ is transverse to $Q$. Without loss of generality, we may assume that there is a smooth function $y_1$ on $D$ such that $D\cap K$ is given by $y_1\le 0$ and the gradient of $y_1$ at $z$ is non-trivial. We write $A\pitchfork_S B$ when a manifold $A$ is transverse to a manifold $B$ at each point of a set $S$. We have \begin{align*} \partial K\pitchfork_z (Q_{i_1}\cap\ldots\cap Q_{i_m}) &\iff \partial K\pitchfork_z \{(x_1,x_2,\ldots, x_m)|x_1=0,\ldots,x_{m-1}=0\}\\ &\iff \nabla y_1 \notin \left\langle\pa{}{x_1},\ldots, \pa{}{x_{m-1}}\right\rangle. \end{align*} Indeed, since $\partial K$ is of dimension $d-1$, it is not transverse to $Q$ at $z$ only if $T_zQ\subset T_z\partial K$, i.e., $\nabla y_1$ is perpendicular to $T_zQ=\left\langle\pa{}{x_m},\ldots, \pa{}{x_{d}}\right\rangle$ with respect to the standard Riemannian metric on $D$. Put $y_j=x_{j-1}$ for $j=2,\ldots, m$.
Since $\nabla y_1 \notin \left\langle\pa{}{x_1},\ldots, \pa{}{x_{m-1}}\right\rangle$, there exist functions $y_{m+1},\ldots, y_d$ of $(x_m,\ldots, x_d)$ such that the gradients $\nabla y_1, \nabla y_2,\ldots, \nabla y_d$ are linearly independent at the point $z$. By the inverse function theorem, for a small disc neighborhood $D_0$ of $z$, the map $y=(y_1,\ldots, y_d)|D_0: D_0\to \R^d$ is a local coordinate chart. Then \begin{align*} &D_0\cap K=\{y_1\leq 0\}\\ &D_0\cap S_{i_1}= D_0\cap (Q_{i_1}\setminus K)=\{y_1\geq 0, x_1\le 0 \} =\{y_1\geq 0, y_2\leq 0\}\\ &D_0\cap S_{i_2}=D_0\cap (Q_{i_2}\setminus K)=\{y_1\geq 0, x_1\geq 0, x_2\le 0 \}=\{y_1\geq 0, y_2\geq 0, y_3\leq 0\}\\ &\cdots\\ &D_0\cap S_{i_m}=D_0\cap (Q_{i_m}\setminus K)=\{y_1\geq 0, x_1\geq 0,..., x_{m-1}\geq 0 \}=\{y_1\geq 0, y_2\geq 0, \ldots, y_m\geq 0\} \end{align*} which fulfills Property 3. Therefore, the sets $K, S_1,\ldots, S_n$ comprise a filling. \end{proof} We are now in a position to prove the following theorem. \begin{theorem}\label{th:13lower} For any closed manifold $M$, we have ${\mathop\mathrm{Bcat}}^\T (M) + 1\le \Crit(M)$. \end{theorem} \begin{proof} Let $x_0, x_1, \ldots, x_n$ be the critical points of a smooth function $f:M\to \R$ with the least possible number $\Crit(M)$ of critical points. Since $M$ is closed, we may assume that $f>0$. By slightly perturbing $f$ and re-indexing $x_0, x_1, \ldots, x_n$, we may also assume that $f(x_i)<f(x_{i+1})$. Let $a_0=0<a_1<\cdots<a_{n+1}$ be real numbers such that $a_i<f(x_i)<a_{i+1}$ for $i=0,\ldots, n$. We note that $f$ is constant over each level set $M_{a_i}$, and for each $i$ the restriction $f|M_{[0, a_i]}$ minimizes the number of critical points among functions on $M_{[0, a_i]}$ that are constant and maximal over $M_{a_i}$. The function $f$ possesses only one critical point in the manifold $M_{[0, a_1]}$, at which $f$ assumes the global minimum. Therefore, by Lemma~\ref{Kyle}, the manifold $M_{[0,a_1]}$ is homeomorphic to a disc. Thus ${\mathop\mathrm{Bcat}}^\T(M_{[0,a_1]})+1\leq \Crit(M_{[0,a_1]})$.
By induction, suppose that \[ {\mathop\mathrm{Bcat}}^\T(M_{[0,a_i]})+1\leq \Crit(M_{[0,a_i]}). \] We note that \[ \Crit(M_{[0,a_i]})+1= \Crit(M_{[0,a_{i+1}]}). \] Let us show that ${\mathop\mathrm{Bcat}}^\T(M_{[0,a_{i+1}]})\le \ell+ 1$, where $\ell={\mathop\mathrm{Bcat}}^\T(M_{[0,a_i]})$. Suppose that $Q_0,..., Q_\ell$ is a relative filling for $M_{[0,a_{i}]}$. It suffices to show that there is a relative filling for $M_{[0,a_{i+1}]}$ with $\ell+2$ elements. For an isolated critical point $x_i\in M_{{[a_i,a_{i+1}]}}$, recall that $D(x_i)$ denotes the set of points $x$ in $M$ such that $\lim \gamma_t(x)=x_i$ as $t\to \infty$ or $t\to -\infty$, where $\gamma_t(x)$ is the gradient flow of $f$. The construction after Lemma~\ref{Oliver} produces a neighborhood $H\subset M_{[a_i, a_{i+1}]}$ of $D(x_i)\cap M_{[a_i, a_{i+1}]}$. Namely, we first construct a certain small neighborhood $V$ of $x_i$ in the level set $M_{f(x_i)}$, and then define $H$ to be the space of all points $x$ such that either the gradient curve passing through $x$ intersects $V$, or one of the limits of the gradient curve through $x$ is $x_i$. By \cite[Lemma 9]{Sa20} and \cite[Corollary 15]{Sa20}, the manifold $H$ is a smooth manifold with corners, and $H$ is homeomorphic to a ball. On the manifold $M_{[a_i, a_{i+1}]}$ there is a positive function $\alpha$ such that the time-one flow along the scaled vector field $\alpha\cdot \nabla f$ takes every point of $M_{a_{i}}\backslash H$ to a point of $M_{a_{i+1}}\backslash H$.
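Such a function $\alpha$ exists. Indeed, the only critical point of $f$ in $M_{[a_i, a_{i+1}]}$ lies in $H$, and $H$ is a union of gradient curves, so $\nabla f$ does not vanish on $M_{[a_i, a_{i+1}]}\setminus H$, and there one may take, for instance,
\[
\alpha(x)=\frac{a_{i+1}-a_i}{|\nabla f(x)|^2}.
\]
Then $\frac{d}{dt}f(\gamma_t(x))=\langle \nabla f, \alpha\cdot\nabla f\rangle = a_{i+1}-a_i$ along the flow of $\alpha\cdot\nabla f$, i.e., $f$ increases at the constant rate $a_{i+1}-a_i$, and hence the time-one flow takes $M_{a_i}\backslash H$ to $M_{a_{i+1}}\backslash H$.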
\begin{figure} \caption{The gradient field $\nabla f$, and the elements $Q_i$, $Q_j$ of a filling of $M_{[0, a_i]}$.} \label{bcat1} \end{figure} \begin{figure} \caption{The gradient field $w'$ shown at 3 points with the modified filling.} \label{bcat2} \end{figure} Since $Q_0,..., Q_\ell$ is a relative filling of $M_{[0, a_i]}$, the strata of the boundary of each $Q_j$ intersect each level set $M_{t}$ transversally for $t\in [a_i-\varepsilon, a_i)$, for a sufficiently small number $\varepsilon>0$. In particular, there is a gradient-like vector field $w$ on $M_{[a_i-\varepsilon, a_i)}$ such that $w$ is tangent to each stratum of $\partial Q_j$ when restricted to $\partial Q_j$ for all $j$, see Fig.~\ref{bcat1}. Our aim is to extend the present relative filling $\{Q_i\}$ to a relative filling of $M_{[0, a_{i+1}]}$. To this end, we will next modify the vector field $w$ and the relative filling of $M_{[0, a_i]}$ to a vector field $w'$ and a relative filling $\{Q_{i}''\}$. Let $\lambda$ be a smooth monotone function on $[0, a_{n+1}]$ such that $\lambda(t)= 0$ for all $t\in [0,a_i-\varepsilon]$, and $\lambda(t)= 1$ for all $t\in [a_i, a_{n+1}]$. Define a vector field $w'$ on $M_{[a_{i}-\varepsilon, a_i]}$ by \[ w'(x)= (1-\lambda (f(x)))w(x)+\lambda(f(x))\nabla f(x). \] Then $w'$ is a vector field that agrees with $w$ near $M_{a_i-\varepsilon}$, and agrees with $\nabla f$ near $M_{a_i}$. We now define a new relative filling of $M_{[0, a_i]}$ by sets $Q_0'', ..., Q_{\ell}''$, where $Q_j''$ is the union of $Q_j\cap M_{[0, a_i-\varepsilon)}$ and the points $y$ in $M_{[a_i-\varepsilon, a_i]}$ that flow along $-w'$ to $Q_j\cap M_{a_i-\varepsilon}$, see Fig.~\ref{bcat2}. To show that $\{Q_i''\}$ is a relative filling, we observe that the submanifold $M_{[a_i-\varepsilon, a_i]}$ is diffeomorphic to $M_{a_i-\varepsilon}\times [0,1]$ under a diffeomorphism that takes any flow line of $w'$ in $M_{[a_i-\varepsilon, a_i]}$ to a segment of the form $\{x\}\times [0,1]$.
The sets $Q_i''$ coincide with the elements $Q_i$ of a filling except over the part $M_{a_i-\varepsilon}\times [0,1]$. On the other hand, the sets $Q_i''\cap (M_{a_i-\varepsilon}\times [0,1])$ are cylinders $(Q_i''\cap M_{a_i-\varepsilon})\times [0,1]$. Therefore, the sets $Q_i''$ indeed form a relative filling. We observe that the filling $\{Q_i''\}$ of $M_{[0, a_i]}$ has the property that each non-horizontal stratum of each element $Q_j''$ is vertical near $M_{a_i}$, i.e., it is a union of flow lines of $\nabla f$ there. \begin{figure} \caption{The relative filling $\{Q_i'\}$.} \label{bcat3} \end{figure} We next modify the vector field $w'$ to a vector field $v$, extend it over $M_{[0, a_{i+1}]}$, and use it to smooth the corners of $H$ and to extend the relative filling $\{Q_i''\}$ to a relative filling of $M_{[0, a_{i+1}]}$ that consists of extensions $Q_i'$ of the sets $Q_i''$ and a smoothed version $H'$ of $H$. There is a smooth vector field $v(x)$ on $M_{[0, a_{i+1}]}$ such that \begin{itemize} \item $v$ is zero over $M_{[0, a_i-\varepsilon]}$, \item $v$ is a multiple of $w'$ over $M_{[a_i-\varepsilon, a_i]}$, \item $v$ is a non-zero multiple of $w'$ over $M_{[a_i-\varepsilon/2, a_i]}$, and \item $v$ agrees with the scaled gradient vector field $\alpha\cdot \nabla f$ on $M_{[a_i, a_{i+1}]}\setminus H$. \end{itemize} In fact, we may choose $v(x)$ to be of the form $v(x)=\lambda_1(f(x))w'+\lambda_2(f(x))\nabla f$ for some smooth functions $\lambda_1$ and $\lambda_2$. Let \[ \rho\colon M_{[0, a_{i}]}\times [0,1]\to M_{[0, a_{i+1}]}\] denote the flow along the vector field $v$ for $t\in[0,1]$. We note that $\rho_1$ takes $M_{a_i}\backslash H$ diffeomorphically onto $M_{a_{i+1}}\backslash H$, where $\rho_t(x)=\rho(x,t)$. Next we replace $H$ with a smooth submanifold $H'$ of $M_{[0, a_{i+1}]}$ with boundary that is smooth away from $M_{a_{i+1}}$.
To this end we use the flow $\rho$ to identify a neighborhood of $H_{a_i}=H\cap M_{a_i}$ with $H_{a_i}\times [-\varepsilon/2, \varepsilon/2]$ by a diffeomorphism that takes a point $x$ in a neighborhood of $H_{a_i}$ to the pair $(y, t)$ such that $x=\rho_t(y)$. Let $\pi_{i}$ denote the projection of the product neighborhood onto the $i$-th factor for $i=1,2$. Given a function $\psi\le 0$ on $H_{a_i}$ with values in $(-\varepsilon/2, 0]$, let $H_\psi$ denote the union of $H$ and the points $x$ in the neighborhood $H_{a_i}\times [-\varepsilon/2, \varepsilon/2]$ above the graph of the function $\psi$, i.e., the points satisfying $\pi_2(x)\ge \psi(\pi_1(x))$. We may choose $\psi$ so that $H'=H_\psi$ is a submanifold of $M$ with corners only on $H'_{a_{i+1}}=H'\cap M_{a_{i+1}}$. Let $Q_i'$ denote the subset $\rho_1(Q_i'')\setminus H'$, see Fig.~\ref{bcat3}. We claim that $H', Q_0', Q_1',\ldots, Q_\ell'$ is a relative filling of $M_{[0, a_{i+1}]}$. We note that the ordering of the elements of this relative filling is important and that $H'$ is the first element. Let $L$ be $M_{a_{i+1}}\times[0,1]$, and let $\tilde{M}$ stand for $L\cup M_{[0, a_{i+1}]}$, where the union is taken by identifying $M_{a_{i+1}}\times \{0\}\subset L$ with the boundary of $M_{[0, a_{i+1}]}$. We see that the sets $L, H', Q_0', Q_1',\ldots, Q_\ell'$ satisfy properties $1$ and $2$ for a relative filling of $\tilde{M}$ by construction. All that is left to show is that these sets satisfy property $3$ for all points $z\in \tilde{M}\backslash\partial\tilde{M}$. The sets $Q_0, Q_1, \ldots, Q_\ell$ define a filling of the manifold $M_{a_i}$ by the sets $R_i=Q_i\cap M_{a_i}$. Let $H_0=H'\cap M_{a_i}$. Since $H'$ is a smooth compact submanifold of $M_{[0, a_{i+1}]}$ such that $\partial H'$ intersects the level $M_{a_i}$ transversely, we conclude that $H_0$ is a smooth compact submanifold of $M_{a_i}$.
By Lemma~\ref{Emma}, the submanifold $H_0$ together with the sets $R_i'=\overline{R_i\setminus H_0}$ define a new filling of $M_{a_i}$. Let $H_1=H'\cap M_{a_{i+1}}$, and let $R_i''$ denote $\rho_1(R_i')$. Then $H_1$ together with $R_i''$ form a filling of $M_{a_{i+1}}$. We shall now prove that property $3$ is satisfied for all $z\in \tilde{M}\backslash\partial\tilde{M}$. We will consider five cases. \begin{enumerate} \item If $z\in M_{[0,a_{i+1}]}\backslash H'$, then property $3$ is satisfied for $z$, since property $3$ is preserved under diffeomorphism, and the sets $Q'_i$ are images of the sets $Q_i''$ of a filling under the diffeomorphism $\rho_1$. \item If $z\in \mathrm{int}(H')$ or $z\in \mathrm{int}(L)$, then property $3$ is satisfied, since the number of submanifolds of the filling containing that point is $1$. \item Suppose $z$ is in the intersection $\partial H'\cap M_{(a_i,a_{i+1})}$. Let $z'$ be a point on $M_{a_i}$ on the gradient flow line passing through $z$, i.e., $\rho_{t_0}(z')=z$ for some $t_0>0$. Since the sets $H_0$ together with $R_i'$ form a filling of $M_{a_i}$, there is a special coordinate chart $D'$ about $z'$ as in Definition~\ref{Filling}. Let $D_\varepsilon\subset M_{[0, a_{i+1}]}$ be a coordinate chart about $z$ given by \[ D_\varepsilon = D'\times (-\varepsilon, \varepsilon)\subset M_{[0, a_{i+1}]}, \] \[ (x, t) = \rho_{t_0+t}(x). \] We may choose $\varepsilon$ sufficiently small so that $D_\varepsilon$ is contained in $M_{(a_i, a_{i+1})}$. Then $D_\varepsilon$ is a special coordinate chart about $z$ as in Definition~\ref{Filling}. \item Suppose that $z$ is in the intersection $H'_{a_{i+1}}=H'\cap M_{a_{i+1}}$. If $z$ is in the interior of $H'_{a_{i+1}}$, then property $3$ is clearly satisfied. If $z$ is in the boundary of $H'_{a_{i+1}}$, then property $3$ is satisfied since $\{H_1, R''_0, \ldots, R''_\ell\}$ is a filling of $M_{a_{i+1}}$. \item Suppose that $z$ is in the intersection $J=\partial H'\cap M_{[0,a_i]}$.
If $z$ is in the interior of $J$, then property $3$ is satisfied for $z$ since $\{Q_i''\}$ is a relative filling for $M_{[0, a_i]}$. If $z$ is in the boundary of $J$, then property $3$ is satisfied for $z$ for the same reason as in case $(3)$. \end{enumerate} Thus, the sets $H', Q_0',Q_1',\ldots, Q_\ell'$ form a relative filling for $M_{[0,a_{i+1}]}$. Therefore, as we pass from $M_{[0,a_i]}$ to $M_{[0,a_{i+1}]}$, the number of elements in a minimizing filling increases by at most $1$. Consequently, we have \[ {\mathop\mathrm{Bcat}}^\T(M_{[0,a_{i+1}]})+1\leq {\mathop\mathrm{Bcat}}^\T(M_{[0,a_i]})+2\leq \Crit(M_{[0,a_i]})+1\leq \Crit(M_{[0,a_{i+1}]}).\] When $i=n$, the relative filling of $M_{[0, a_{i+1}]}$ is a filling of $M$. Consequently, \[ {\mathop\mathrm{Bcat}}^\T(M)+1\leq \Crit(M), \] which completes the proof. \end{proof} \section{The converse to the Lusternik-Schnirelmann inequality}\label{s:5} In this section we will establish the converse to the Lusternik-Schnirelmann inequality. Namely, we will show that for a closed manifold $M$ there is an estimate: \[ {\mathop\mathrm{Bcat}}(M)+1\ge \Crit(M). \] Let $Q$ be a compact submanifold with corners of codimension $0$ in a smooth manifold $M$. A \emph{smooth collar neighborhood} of $Q$ in $M$ is a compact submanifold $Q'\subset M$ with smooth boundary such that there is a special almost diffeomorphism $Q'\to Q\cup (\partial Q\times [0,1])$, where the union is taken by identifying $\partial Q\subset Q$ with $\partial Q\times \{0\}\subset \partial Q\times [0,1]$. We recall that a vector field over a compact subset $Q$ of a smooth manifold is said to be \emph{smooth} if it is the restriction of a smooth vector field defined over a neighborhood of $Q$. Let $X$ be a topological submanifold of codimension $1$ of a Riemannian manifold $M$. Let $v$ be a smooth vector field over $X$ in $M$.
Roughly speaking, the vector field $v$ is \emph{transverse} to $X$ at a point $x\in X$ if the projection of a neighborhood of $x$ in $X$ to a disc of dimension $\dim M-1$ transverse to $v(x)$ is a homeomorphism onto its image. More precisely, for each point $x\in X$, let $N_x$ denote the unique geodesic in $M$ parametrized by $t\in (-\varepsilon, \varepsilon)$ such that at time $t=0$ it passes through $x$, and its velocity at $t=0$ is $v(x)$. If $\varepsilon>0$ is sufficiently small, then the geodesic $N_x$ exists for each $x\in X$. Suppose that for each point $x\in X$ there is a smooth disc $D_x$ in $M$ centered at $x$ such that $D_x$ is transverse to $N_x$ and $\{D_x\}$ is a continuous family of discs. Then for any point $x\in X$ we may identify $D_x\times N_x$ with a neighborhood of $x$ so that $D_x\times \{0\}$ is identified with $D_x\subset M$, and $\{x\}\times N_x$ is identified with $N_x\subset M$. Suppose that a neighborhood of $x$ in $X$ is the graph $\{(y, f_x(y))\}\subset D_x\times N_x$ of a continuous function $f_x\colon D_x\to N_x$. Then we say that $v$ is \emph{transverse} to $X$ at $x$. If $v$ is transverse to $X$ at every point $x\in X$, then we say that $v$ is \emph{transverse} to $X$. Given a vector field $v$ transverse to $X$, let $\sigma\colon X\to M$ be a continuous function that associates with each point $x$ a point in $N_x$. Then $\sigma(X)$ is homeomorphic to $X$. By the Cairns-Whitehead theorem, for every $\delta>0$ there is a continuous function $\sigma$ as above such that $\sigma(X)$ is a smooth submanifold of $M$ in the $\delta$-neighborhood of $X$. Similarly, let $M$ be a smooth manifold, and $Q$ be a set in a Singhof-Takens filling of $M$ by smooth manifolds with corners. Then the Cairns-Whitehead smooth approximation of the boundary of $Q$ defines a smooth manifold (with smooth boundary) $Q'$ approximating $Q$. Lemma~\ref{lQ:15} shows that we may choose $Q'$ so that $Q\subset Q'$.
\begin{remark}\label{rem:CWhi} In the Cairns-Whitehead construction above, if $\varepsilon>0$ is small enough, then different geodesic segments $N_x$ do not intersect. Let $\pi$ denote the projection of the neighborhood $\cup N_x$ of $X=\partial Q$ to $X$ defined by projecting each fiber $N_x$ to $x$. The smoothing $\sigma(X)$ is determined by the Riemannian metric on $M$, the value of $\varepsilon$, the vector field $v$, as well as a smooth approximation of $\pi$. Since any two transverse vector fields over $X$ are homotopic, and any two smooth approximations of $\pi$ are isotopic, any two smooth approximations of $X$ are isotopic. In particular, if a Cairns-Whitehead smoothing of a manifold $Q$ with corners in a Singhof-Takens filling is diffeomorphic to a ball, then any other Cairns-Whitehead smoothing of $Q$ is also diffeomorphic to a ball. Thus, the notion of a smooth ball with corners is well-defined. \end{remark} \begin{lemma}\label{lQ:15} Let $Q$ be a compact submanifold with corners of codimension $0$ in a smooth manifold $M$. Then $Q$ possesses a collar neighborhood. \end{lemma} \begin{proof} There is a smooth vector field $v$ over $\partial Q$ in $M$ transverse to $\partial Q$. Choose a Riemannian metric on $M$. For each point $x\in \partial Q$, let $\gamma_t(x)$ denote the geodesic parametrized by $t$ such that $\gamma_0(x)=x$, and $\dot\gamma_t(x)|_{t=0}=v(x)$. Let $\exp$ denote the map $\partial Q\to M$ that associates with each point $x$ the point $\gamma_1(x)$. If the vector field $v$ is sufficiently small, then the image $X$ of $\partial Q$ under the exponential map $\exp$ in the direction $v$ is a topological submanifold of $M$ homeomorphic to $\partial Q$. The vectors $v(x)$ translate over the geodesics $\gamma_t(x)$ to vectors $v_X(\exp(x))$ so that $v_X$ is a vector field over $X$. In view of the transverse vector field $v_X$, by the Cairns-Whitehead theorem~\cite{Pu}, there is a smooth approximation $Y$ of $X$.
Then the manifold bounded by $Y$, which consists of $Q$ and the segments of the geodesics $\gamma_t(x)$ between $x$ and $\exp(x)$, is a collar neighborhood of $Q$. \end{proof} Let $M$ be a smooth manifold of dimension $n$ with boundary $\partial M$. A relative Singhof-Takens filling of $M$ by two subsets $Q_1$ and $Q_2$ is said to be \emph{nice} if the set $Q_1$ contains $\partial M$, the pair $(Q_1, \partial M)$ is almost diffeomorphic to the pair $(\partial M\times [0,1], \partial M\times \{0\})$, and $Q_2$ is diffeomorphic to the disc $D^n$. We note that in this case both manifolds $Q_1$ and $Q_2$ are smooth submanifolds with (smooth) boundary. Now let $M$ be a compact connected smooth manifold of dimension $n$ such that $\partial M$ is the disjoint union of two manifolds $\partial_1 M$ and $\partial_2 M$. Suppose that $M=Q_3\cup Q_1\cup Q_2$ is a relative Singhof-Takens filling of $M$ such that $Q_1$ contains $\partial_1 M$, while $Q_2$ contains $\partial_2 M$. Suppose that $(Q_i,\partial_i M)$ is almost diffeomorphic to $(\partial_i M\times [0,1],\partial_i M\times\{0\})$ for $i=1,2$, while $Q_3$ is diffeomorphic to a disc $D^n$. Then we say that the filling $\{Q_i\}$ is \emph{nice}. \begin{remark} In the original definition of a nice filling by Takens \cite{Ta68}, the boundary of $Q_3$ is required to be smooth. We omit this requirement since the corners of a manifold can be smoothed as in \cite[section 2]{Ta68}. \end{remark} To establish the converse of the Lusternik-Schnirelmann inequality we will rely on the Takens Theorem. \begin{theorem}[see Corollary 2.8 in \cite{Ta68}]\label{th:13} Let $M$ be a compact manifold with a filtration $\emptyset = M_0 \subset M_1\subset M_2\subset \ldots\subset M_k=M$ by compact submanifolds with boundary such that $M_i\subset {\mathop\mathrm{Int}}(M_{i+1})$ and $\partial M\subset M_k\setminus M_{k-1}$.
Suppose that for each $i=1,\ldots, k$, the manifold $M_i\setminus {\mathop\mathrm{Int}}(M_{i-1})$ with boundary $\partial M_{i-1}\sqcup \partial M_{i}$ admits a nice filling. Then $\Crit(M)\le k$. \end{theorem} We are now in a position to prove the converse of the Lusternik-Schnirelmann inequality. \begin{theorem}\label{th:18upper} For any closed manifold $M$, we have ${\mathop\mathrm{Bcat}}(M)+1\geq \Crit(M)$. \end{theorem} \begin{proof} Suppose that ${\mathop\mathrm{Bcat}}(M)=k-1$. In particular, the manifold $M$ admits a Singhof-Takens filling of $M=U_1\cup \cdots\cup U_k$ by smooth closed balls with corners. Let $M_1$ be a smooth compact collar neighborhood of $U_1$, and, by induction for $i=2,\ldots, k$, let $M_i$ be the union of $M_{i-1}$ and a smooth compact collar neighborhood of $U_i$ if $U_i$ is disjoint from $M_{i-1}$, and let $M_{i}$ be a smooth compact collar neighborhood of $U_i\cup M_{i-1}$ otherwise. Then $\{M_i\}$ forms a filtration of $M$ by compact submanifolds with (smooth) boundary. To begin with, let us show that $M_1$ admits a nice filling by sets $Q_1$ and $Q_2$. Put $Q_1=M_1\setminus {\mathop\mathrm{Int}}(U_1)$ and $Q_2=U_1$. Since $M_1$ is a collar neighborhood of $U_1$, the pair $(Q_1,\partial M_{1})$ is almost diffeomorphic to the pair $(\partial M_{1}\times [0,1],\partial M_{1}\times\{0\})$. Since $Q_2$ coincides with $U_1$ and $U_1$ is the first set in a Singhof-Takens filling $\{U_i\}$, the set $Q_2$ is diffeomorphic to a ball. Thus, the sets $Q_1$ and $Q_2$ form a nice filling of $M_1$. \begin{figure}\label{fig:1} \end{figure} For $i>1$, if $U_i$ is disjoint from $M_{i-1}$, then $M_i\setminus M_{i-1}$ admits a nice filling by a single smooth ball. Suppose now that $U_i$ is not disjoint from $M_{i-1}$. Let us show that, for $i>1$, the manifold $M_i\setminus {\mathop\mathrm{Int}}(M_{i-1})$ admits a nice filling by sets $Q_1$, $Q_2$ and $Q_3$; see Fig.~\ref{fig:1}.
Let $Q_1$ be a collar neighborhood of $\partial M_{i-1}$ in $M_i\setminus {\mathop\mathrm{Int}}(M_{i-1})$. Since $M_{i-1}$ is a submanifold of $M$ with smooth boundary, the same is true for $Q_1$. We may assume that the intersection $Q_1\cap U_i$ is a neighborhood of $\partial M_{i-1}\cap U_i$ in $U_i\setminus {\mathop\mathrm{Int}}(M_{i-1})$ of the form $(\partial M_{i-1}\cap U_i)\times [0,1]$. We define $Q_2$ to be the complement of ${\mathop\mathrm{Int}}(U_i\cup Q_1)$ in $M_i\setminus {\mathop\mathrm{Int}}(M_{i-1})$, and $Q_3$ to be the complement of ${\mathop\mathrm{Int}}(Q_1\cup M_{i-1})$ in $U_i$. As the sets $Q_2$ and $Q_3$ may have high-order corners, the sets $Q_1, Q_2$ and $Q_3$ do not form a filling of the manifold $M_i\setminus {\mathop\mathrm{Int}} M_{i-1}$. However, the three sets $Q_1, Q_2$ and $Q_3$ clearly cover the manifold $M_i\setminus {\mathop\mathrm{Int}} M_{i-1}$, and their interiors are disjoint. Since $Q_1$ is a collar neighborhood of the smooth boundary component $\partial M_{i-1}$ of the smooth submanifold $M_i\setminus {\mathop\mathrm{Int}}(M_{i-1})$, the pair $(Q_1,\partial M_{i-1})$ is almost diffeomorphic to the pair $(\partial M_{i-1}\times [0,1],\partial M_{i-1}\times\{0\})$. Since $Q_1\cap U_i$ is of the form $(\partial M_{i-1}\cap U_i)\times [0,1]$ and $U_i$ is a smooth ball with corners, it follows that $Q_3=U_i\setminus {\mathop\mathrm{Int}}(Q_1)$ is a smooth ball with corners as well. Let us show that $Q_2$ is almost diffeomorphic to $\partial M_i\times [0,1]$. We observe that $Q_2=(M_i\setminus {\mathop\mathrm{Int}} M_{i-1})\setminus {\mathop\mathrm{Int}}(Q_1\cup U_i)$ can be written as \[ M_i \setminus {\mathop\mathrm{Int}}(M_{i-1} \cup (\partial M_{i-1}\times [0,1])\cup U_i) \] where we identify $Q_1$ with $ \partial M_{i-1} \times [0,1]$. Consequently, the manifold with corners $Q_2$ is almost diffeomorphic to $M_i\setminus {\mathop\mathrm{Int}}(M_{i-1}\cup U_i)$.
On the other hand, by definition, the manifold $M_i$ is a collar neighborhood of $M_{i-1}\cup U_i$. Therefore, up to almost diffeomorphism, the complement of ${\mathop\mathrm{Int}}(M_{i-1}\cup U_i)$ in $M_i$ is the collar $\partial M_i\times [0,1]$, as required. Let $Q_3'$ be a collar neighborhood of $Q_3$ in $M$ (see Lemma~\ref{lQ:15}) such that $Q_3'$ is a submanifold in the interior of $M_{i}$ and $Q_3'$ is disjoint from $M_{i-1}$. Then $Q_3'$ is diffeomorphic to a ball. We now modify $Q_1$ by replacing it with a new set $Q_1'$ obtained from a collar neighborhood $\mathcal{U}(Q_1)$ of $Q_1$ by removing the interior of $Q_3'\cap \mathcal{U}(Q_1)$. In this construction we may choose the collar neighborhood $\mathcal{U}(Q_1)$ of $Q_1$ so that its boundary intersects the boundary of $Q_3'$ transversally. Finally, we define $Q_2'$ to be the complement of the interior of the union of $Q_1'$ and $Q_3'$ in $M_i\setminus {\mathop\mathrm{Int}} M_{i-1}$. The resulting sets form a nice filling of $M_i\setminus {\mathop\mathrm{Int}} M_{i-1}$. Thus, indeed, for every $i$, the manifold $M_i\setminus {\mathop\mathrm{Int}}(M_{i-1})$ admits a nice filling. Therefore, by Theorem~\ref{th:13}, the manifold $M$ admits a function with at most $k$ critical points. \end{proof} \end{document}
What is the remainder of $5^{2010}$ when it is divided by 7? We start by writing out some powers of five modulo 7. \begin{align*} 5^1 &\equiv 5 \pmod{7} \\ 5^2 &\equiv 4 \pmod{7} \\ 5^3 &\equiv 6 \pmod{7} \\ 5^4 &\equiv 2 \pmod{7} \\ 5^5 &\equiv 3 \pmod{7} \\ 5^6 &\equiv 1 \pmod{7} \end{align*}Therefore, we have that $5^6 \equiv 1$ modulo 7. Thus, $5^{2010} \equiv (5^6)^{335} \equiv 1^{335} \equiv \boxed{1}$ modulo 7.
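The period of $6$ and the final remainder can be double-checked by direct computation; a quick Python sketch (the helper name `order_mod` is our own):

```python
# Verify that the powers of 5 modulo 7 repeat with period 6,
# and compute the remainder of 5^2010 modulo 7.

def order_mod(a, n):
    """Smallest positive k with a^k congruent to 1 modulo n."""
    k, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print(order_mod(5, 7))    # 6  -- the cycle length found above
print(pow(5, 2010, 7))    # 1  -- since 2010 = 6 * 335
```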
\begin{document} { \sloppy \begin{center} {\bf \Large\bf On measuring the degree of independence of two discrete random variables.} \\ E.A. Yanovich \\ University of St. Petersburg,\\ Department of Higher Mathematics,\\ Moskovskii 151a, 59, St. Petersburg, 196128 Russia\\ E-mail: [email protected] \end{center} \begin{quote} {\footnotesize In this paper we construct a new coefficient which makes it possible to measure quantitatively the independence of two discrete random variables. New inequalities for matrices with non-negative elements are found.} \end{quote} \section{Introduction.} The statistical description of two random variables with a finite number of states is determined by an $(n\times m)$ matrix $P$ ($n\ge2\,,m\ge2$). Its elements $p_{i,j}$ ($p_{i,j}\ge 0$) define the probabilities of the simultaneous appearance of the $i$-th value of one random variable and the $j$-th value of the other. The condition of probability preservation \begin{equation} \label{total_prob} \sum_{i,j}p_{i,j}=1 \end{equation} holds. The random variables are independent if and only if the elements of the matrix $P$ can be presented in the form $p_{i,j}=p_i\,q_j$. Then the matrix $P$ can be represented as the product of the column $B=(p_1,\ldots,p_n)^T$ and the row $C=(q_1,\ldots,q_m)$, and the condition of probability preservation~(\ref{total_prob}) takes the form $$ \sum_{i=1}^n p_i=1\,,\quad \sum_{i=1}^m q_i=1 $$ If the matrix $P$ is known, then the matrices $B$ and $C$ are given by $$ B=P(D_m)^T\,,\quad C=D_nP\,, $$ where $D_k=(1,\ldots,1)$ is the unit row of length $k$. The random variables are said to be completely dependent if and only if to every value of each random variable there corresponds one and only one value of the other random variable. That is, each row and each column of $P$ contains at most one non-zero element. In this case the dependence between the random variables is said to be functional.
To measure the degree of independence, it is necessary to construct a numerical coefficient $k$ that satisfies some natural conditions. We require that~\cite{1} $1.\quad k\in [0,1]$ $2.\quad\left(k=0\right)\Leftrightarrow\left(independence\right)$ $3.\quad\left(k=1\right)\Leftrightarrow\left(complete\,dependence\right)$\\ A coefficient $k$ satisfying all three conditions can be constructed as follows: $$ k=\frac{\mu}{\mu_f}\,, $$ where $$ \mu=\sum_k(M_k)^2\,,\quad \mu_f=\sum_{1\le i<j\le n}(s_i)^2(s_j)^2\,,\quad s_i=\sum_{k=1}^mp_{i,k} $$ Here $\mu$ is the sum of the squares of all second-order minors $M_k$ of the matrix $P$. We suppose that $s_i>0$ for all $i$ and that $n\le m$ (in the case $n>m$, we transpose the matrix $P$). \section{Proof of conditions 1--3.} We start with condition 2. The condition $p_{i,j}=p_i\,q_j$ is necessary and sufficient for $\mathrm{rank}\,P=1$~\cite{2}. It follows that $k=0$ ($\mu=0$) if and only if the random variables are independent. It is evident that $\mu\ge 0$. To prove conditions 1 and 3, we first consider a matrix with two rows $$ P=\left(\begin{array}{cccc} a_1&a_2&\ldots&a_m\\ b_1&b_2&\ldots&b_m \end{array}\right)\,,\quad m\ge 2 $$ We have $$ \mu=\sum_k(M_k)^2=\sum_{i<j}|a_ib_j-a_jb_i|^2=\sum_ia_i^2\sum_ib_i^2-\left(\sum_ia_ib_i\right)^2\le \sum_ia_i^2\sum_ib_i^2\le $$ $$ \le(a_1+\ldots+a_m)^2\,(b_1+\ldots+b_m)^2=(s_1)^2\,(s_2)^2=\mu_f $$ The last inequality becomes an equality if and only if each of the sequences $(a_i)$ and $(b_i)$ contains at most one non-zero element. On the other hand, if $s_1=a_i$, $s_2=b_j$ and $i\ne j$, then $\mu=\mu_f$. Conditions 1 and 3 are thus proved for $n=2$. We now consider the general case. Suppose that $n\le m$.
Estimating the sums corresponding to each pair of rows as before, we obtain $$ \mu\le (s_1)^2((s_2)^2+\ldots+(s_n)^2)+(s_2)^2((s_3)^2+\ldots+(s_n)^2)+\ldots+(s_{n-1})^2(s_n)^2=\mu_f\,, $$ and $\mu=\mu_f$ if and only if each row contains exactly one non-zero element and these elements are situated in different columns of $P$. This is possible since $n\le m$. Thus all the conditions on the coefficient $k$ are proved. One can note that $\mu_f$ is the maximum value of $\mu$ for fixed $s_1,s_2,\ldots,s_n$. This value is achieved if $s_i=p_{i,k_i}$ and all the $k_i$ are different, that is, if the dependence between the random variables is functional. Note that, due to condition~(\ref{total_prob}), all the results remain valid for infinite probability matrices $P$ as well. \end{document}
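As a numerical sanity check of the three conditions, the coefficient $k=\mu/\mu_f$ can be computed directly from a matrix; a short Python sketch (the function name and the example matrices are ours, not from the paper):

```python
# Compute k = mu / mu_f for a probability matrix P (list of rows, n <= m),
# where mu is the sum of squares of all 2x2 minors of P and
# mu_f = sum over i < j of (s_i * s_j)^2 for the row sums s_i.
from itertools import combinations

def independence_coefficient(P):
    n, m = len(P), len(P[0])
    assert n <= m, "transpose P first if n > m"
    s = [sum(row) for row in P]
    mu = sum((P[i][a] * P[j][b] - P[i][b] * P[j][a]) ** 2
             for i, j in combinations(range(n), 2)
             for a, b in combinations(range(m), 2))
    mu_f = sum((s[i] * s[j]) ** 2 for i, j in combinations(range(n), 2))
    return mu / mu_f

# independent case: p_{ij} = p_i q_j, so every 2x2 minor vanishes and k = 0
print(independence_coefficient([[0.1, 0.1], [0.4, 0.4]]))  # 0.0
# completely dependent case: one non-zero entry per row and column, k = 1
print(independence_coefficient([[0.5, 0.0], [0.0, 0.5]]))  # 1.0
```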
\begin{document} \title[Embedding cycles in finite planes]{Embedding cycles in finite planes} \author{Felix Lazebnik} \address{Department of Mathematical Sciences\\ University of Delaware\\ Newark, DE 19716.} \email{[email protected]} \author{Keith E. Mellinger} \address{Department of Mathematics\\ University of Mary Washington\\ Fredericksburg, VA 22401.} \email{[email protected]} \author{Oscar Vega} \address{Department of Mathematics\\ California State University, Fresno \\ Fresno, CA 93740.} \email{[email protected]} \date{} \subjclass[2000]{Primary 05, 51; Secondary 20} \keywords{Graph embeddings, finite affine plane, finite projective plane, cycle, hamiltonian, pancyclic graph} \begin{abstract} We define and study embeddings of cycles in finite affine and projective planes. We show that for all $k$, $3\le k\le q^2$, a $k$-cycle can be embedded in any affine plane of order $q$. We also prove a similar result for finite projective planes: for all $k$, $3\le k\le q^2+q+1$, a $k$-cycle can be embedded in any projective plane of order $q$. \end{abstract} \maketitle \section{Introduction} Our work concerns substructures in finite affine and projective planes. In order to explain the questions we consider, we will need the following definitions and notations. Any graph-theoretic notion not defined here may be found in Bollob\'as \cite{Bol98}. All of our graphs are finite, simple and undirected. If $G = (V,E)=(V(G), E(G))$ is a graph, then the {\it order} of $G$ is $v(G)=|V|$, the number of vertices of $G$, and the {\it size} of $G$ is $e(G)=|E|$, the number of edges in $G$. Each edge of $G$ is thought of as a 2-subset of $V$. An edge $\{x,y\}$ will be denoted by $xy$ or $yx$. A vertex $v$ is {\it incident} with an edge $e$ if $v\in e$. We say that a graph $G'=(V',E')$ is a {\it subgraph of $G$}, and denote it by $G'\subset G$, if $V'\subset V$ and $E'\subset E$. If $G' \subset G$, we will also say that $G$ {\it contains} $G'$.
For a vertex $v \in V$, $N(v) = N_G(v)= \{u \in V: uv \in E\}$ denotes the {\it neighborhood} of $v$, and $deg(v) = deg_G(v) = |N(v)|$, the {\it degree} of $v$. If the degrees of all vertices of $G$ are equal to $d$, then $G$ is called $d$-{\it regular}. For a graph $F$, we say that $G$ is {\it $F$-free} if $G$ contains no subgraph isomorphic to $F$. For $k\ge 2$, any graph isomorphic to the graph with a vertex-set $\{x_1,\ldots, x_k\}$ and an edge-set $\{x_1x_2, x_2x_3,\ldots, x_{k-1}x_k\}$ is called an $x_1x_k$-path, or a {\it $k$-path}, and we denote it by ${\mathcal P}_k$. The length of a path is its number of edges. For $k\ge 3$, the graph with a vertex-set $\{x_1,\ldots, x_k\}$ and edge-set $\{x_1x_2, x_2x_3, \ldots , x_{k-1}x_k, x_kx_1\}$ is called a {\it $k$-cycle}, and it is often denoted by ${\mathcal C}$ or ${\mathcal C}_k$. Any subgraph of $G$ isomorphic to a $k$-cycle is called a {\it $k$-cycle in $G$}. The {\it girth} of a graph $G$ containing cycles, denoted by $g=g(G)$, is the length of a shortest cycle in $G$. Let $V(G)=A\cup B$ be a partition of $V(G)$, and let every edge of $G$ have one endpoint in $A$ and another in $B$. Then $G$ is called {\it bipartite} and we denote it by $G(A,B;E)$. If $|A|=m$ and $|B|=n$, then we refer to $G$ as an $(m,n)$-bipartite graph. All notions of incidence geometry not defined here may be found in \cite{Bu95}. A {\it partial plane} $\pi = ({{\mathcal P}}, {{\mathcal L}} ; {\mathcal I})$ is an incidence structure with a set of points ${\mathcal P}$, a set of lines ${\mathcal L}$, and a symmetric binary relation of incidence ${\mathcal I}\subseteq ({{\mathcal P}}\times {{\mathcal L}})\,\cup \,({{\mathcal L}}\times {{\mathcal P}})$ such that any two distinct points are on at most one line, and every line contains at least two points (note that we have used ${\mathcal P}$ for two different objects so far: to denote a path and to denote the points of a partial plane; the usage of this symbol should be clear from the context).
The definition implies that any two lines share at most one point. We will often identify lines with the sets of points on them. We say that a partial plane $\pi ' = ({{\mathcal P}}', {{\mathcal L}}' ; {\mathcal I}')$ is a {\it subplane} of $\pi$, denoted $\pi' \subset \pi$, if ${{\mathcal P}}'\subset {{\mathcal P}}, {{\mathcal L}}'\subset{{\mathcal L}}$, and ${\mathcal I}'\subset {\mathcal I}$. If there is a line containing two distinct points $X$ and $Y$, we denote it by $XY$ or $YX$. For $k\ge 3$, we define a $k$-gon as a partial plane with $k$ distinct points $\{P_1,P_2, \ldots, P_k\}$, with $k$ distinct lines $\{P_1P_{2}, P_2P_{3}, \ldots, P_{k-1}P_k, P_kP_1\}$, and with point and line being incident if and only if the point is on the line. A subplane of $\pi$ isomorphic to a $k$-gon is called a {\it $k$-gon in $\pi$}. The {\it Levi graph} of a partial plane $\pi$ is its point-line bipartite incidence graph $Levi(\pi) = Levi({{\mathcal P}},{{\mathcal L}}; E)$, where $Pl\in E$ if and only if point $P$ is on line $l$. The Levi graph of any partial plane is $4$-cycle-free. Clearly, there exists a bijection between the set of all $k$-gons in $\pi$ and the set of $2k$-cycles in $Levi(\pi)$. A {\it projective plane of order $q\ge 2$}, denoted $\pi_q$, is a partial plane with every point on exactly $q+1$ lines, every line containing exactly $q+1$ points, and having four points such that no three of them are collinear. It is easy to argue that $\pi_q$ contains $ q^2 + q + 1$ points and $q^2 + q + 1$ lines. Let $n_q = q^2 + q + 1$. It is easy to show that a partial plane is a projective plane of order $q$ if and only if its Levi graph is a $(q+1)$-regular graph of girth $6$ and diameter $3$. Projective planes $\pi_q$ are known to exist only when the order $q$ is a prime power. If $q\ge 9$ is a prime power but not a prime, there are always non-isomorphic planes of order $q$, and their number grows fast with $q$.
Let $PG(2,q)$ denote the {\it classical} projective plane of prime power order $q$, which can be modeled as follows: points of $PG(2,q)$ are 1-dimensional subspaces in the 3-dimensional vector space over the finite field of $q$ elements, lines of $PG(2,q)$ are 2-dimensional subspaces of the vector space, and a point is incident to a line if it is a subspace of it. Removing a line from a projective plane, and removing its points from all other lines, yields a partial plane known as an {\it affine plane}. The line removed is often referred to as the {\it line at infinity}, and it is denoted by $l_\infty$. Conversely, a projective plane of order $q$ can be obtained from an affine plane of order $q$ (i.e. having $q+1$ lines through each point) by adding a line at infinity to it, which can be thought of as a set of $q+1$ new points, called {\it points at infinity}, which is in bijective correspondence with the set of parallel classes (also called the set of all slopes) of lines in the affine plane. We will use $\pi_q$ to denote a projective plane of order $q$, and $\alpha_q$ for affine planes of order $q$. The following problem, stated in terms of set systems, appears in Erd\H os~\cite{Erd79}: \noindent {\bf Problem 1.} \label{Problem1} {\it Is every finite partial linear space embeddable in a finite projective plane?} It is possible that the question was asked before, as it was well known that every partial linear space embeds in some infinite projective plane, by a process of free closure due to Hall \cite{Hall43}. For recent results related to the question, see Moorhouse and Williford \cite{MoorWil2009}.
Rephrased in terms of graphs, Problem 1 is the following: \noindent {\bf Problem 1$^*$.} \label{Problem1'} {\it Is every finite bipartite graph without 4-cycles a subgraph of the Levi graph of a finite projective plane?} Thinking about cycles in Levi graphs of projective planes, we introduced the following notion of embedding of a graph into a partial plane, and found it useful. Let $G$ be a graph and let $\pi = ({{\mathcal P}}, {{\mathcal L}}; {\mathcal I})$ be a partial plane. Let \[ f: V(G)\cup E(G) \to {{\mathcal P}}\cup {{\mathcal L}} \] be an injective map such that $f(V(G))\subset {{\mathcal P}}$, $f(E(G))\subset {\mathcal L}$, and for every vertex $x$ and edge $e$ of $G$, their incidence in $G$ implies the incidence of point $f(x)$ and line $f(e)$ in $\pi$. We call such a map $f$ an {\it embedding of $G$ in $\pi$}, and if it exists we say that {\it $G$ embeds in $\pi$} and write $G\hookrightarrow \pi$. If $G\hookrightarrow \pi$, then adjacent vertices of $G$ are mapped to collinear points of $\pi$. Note that if $G\hookrightarrow \pi_q$, then $v(G)\le n_q$, $e(G)\le n_q$, and $deg_G(x)\le q+1$ for all $x\in V(G)$. A cycle containing all vertices of a graph is called a {\it hamiltonian cycle} of the graph, and if such exists, the graph is called a {\it hamiltonian} graph. Similarly, if $\pi_q$ contains an $n_q$-gon, we call it {\it hamiltonian}. A graph $G$ containing $k$-cycles of all possible lengths, $3\le k\le v(G)$, is called {\it pancyclic}. Similarly, we say that $\pi _q$ is {\it pangonal}, if it contains $k$-gons for all $3\le k\le n_q$. The latter is equivalent to $Levi (\pi_q)$ containing all $2k$-cycles for $3\le k\le n_q$. Clearly, if $G\hookrightarrow \pi_q$, a $k$-cycle in $G$ corresponds to a $k$-gon in $\pi_q$, which, in turn, corresponds to a $2k$-cycle in $Levi(\pi_q)$. From now on we choose to be less pedantic, and will feel free to use graph theoretic and geometric terms interchangeably. 
For example, we will say `point' for a vertex of a graph, `vertex' for a point of a partial plane, and we will speak about `path' and `cycle' in a plane, etc. Determining whether a graph is hamiltonian, or, more generally, understanding what cycles it contains, is one of the central problems in graph theory, and it has been a subject of active research for many years. The existence of hamiltonian cycles in $\pi_q$ (or $Levi (\pi_q)$), or its pancyclicity, was addressed by several researchers. The presence of $k$-gons of some small lengths in $\pi_q$ is easy to establish. In \cite{LMV09}, the authors presented explicit formulae for the numbers of distinct $k$-gons in every projective plane of order $q$ for $k=3, 4, 5, 6$. Very recently, and in a very impressive way, Voropaev \cite{Vor12} extended this list to $k= 7, 8, 9, 10$. The existence of very special hamiltonian cycles in $PG(2,q)$ is a celebrated result of Singer \cite{Singer38}. These cycles are often referred to as the {\it Singer cycles} in $PG(2,q)$. For $q=p$ (prime), Schmeichel \cite{Schm89} showed by explicit constructions that $PG(2,p)$ is pancyclic, and that the hamiltonian cycles he constructed were different from Singer cycles. DeMarco and Lazebnik \cite{DL08} constructed a hamiltonian cycle in a Hall plane of order $p^2$. Most of the known sufficient conditions for the existence of hamiltonian cycles in graphs are effective for rather dense graphs: graphs of order $n$ and size greater than $cn^2$ for some positive constant $c$ (see a survey by Gould \cite{Gould03}). Levi graphs of projective planes are much sparser; being $(q+1)$-regular, their size is $ (1/(2\sqrt{2}) +o(1))n^{3/2}$ for $n\to \infty$, and that is why most techniques of proving hamiltonicity of graphs do not apply to them.
For the same reason, upper bounds on the Tur\'an number of a $2k$-cycle, see, e.g., Pikhurko \cite{Pik12} and references therein, are not effective for proving the existence of $2k$-cycles in Levi graphs of projective planes for most values of $k$ (as $k$ may depend on $q$). A new approach for establishing hamiltonicity and the existence of shorter cycles came from probabilistic techniques and studies of cycles in random and pseudo-random graphs (we omit the definition). See, e.g., Thomason \cite{Thom87}, Chung, Graham and Wilson \cite{CGW89}, Frieze \cite{Fri00}, and Frieze and Krivelevich \cite{FriKriv02}. In \cite{KS03}, Krivelevich and Sudakov explored relations between pseudo-randomness and hamiltonicity in regular non-bipartite graphs. Some other results related to hamiltonicity and pancyclicity appeared in recent publications by Keevash and Sudakov \cite{KeeSud10}, Krivelevich, Lee and Sudakov \cite{KriLeeSud10}, and Lee and Sudakov \cite{LeeSud12}. It is likely that proofs in these papers can be modified to give results for (bipartite) Levi graphs of projective planes, but the requirement on the order of the graph to be sufficiently large (as is the case in the aforementioned papers) will remain. In this paper we establish the pancyclicity of $\pi_q$ and $\alpha_q$, for all $q$, and our proof is constructive. \\ Our main results follow. \begin{theorem} \label{thmlongcycles} Let $\alpha_q$ be an affine plane of order $q\ge 2$. Then $C_k \hookrightarrow \alpha_q$ for all $k$, $3\le k\le q^2$. \end{theorem} \begin{theorem}\label{main} Let $\pi_q$ be a projective plane of order $q\ge 2$, and $n_q=q^2 + q + 1$. Then $C_k \hookrightarrow \pi_q$ for all $k$, $3\le k\le n_q$. \end{theorem} We now proceed to give a construction for paths and cycles in \emph{any} finite affine or projective plane. We start with a remark that will be very useful later on.
\begin{remark}\label{remcombpaths} Let $P_1 \rightarrow P_2 \rightarrow \cdots \rightarrow P_k$ and $Q_1\rightarrow Q_2\rightarrow \cdots \rightarrow Q_n$ be two disjoint (in terms of points and lines) paths embedded in $\pi_q$ or $\alpha_q$. Then, if the line $\ell = P_kQ_n$ has not been used in these embeddings, we can create the following embedding for a path on $n+k$ vertices: \[ P_1\rightarrow P_2\rightarrow \cdots \rightarrow P_k \xrightarrow{\ell} Q_n \rightarrow Q_{n-1} \rightarrow \cdots \rightarrow Q_{2} \rightarrow Q_1. \] Here, the symbol $P_k \xrightarrow{\ell} Q_n$ indicates that the line $\ell$ joins the points $P_k$ and $Q_n$. Moreover, if the line $m = Q_1P_1$ is still available, then we get a cycle of length $k+n$ embedded in $\pi_q$ (or $\alpha_q$).\\ \end{remark} Our main technique in the next two sections will be to construct paths that can be combined using Remark \ref{remcombpaths} to create cycles of any length. \section{Cycles in Affine Planes}\label{sec:CyclesAffinePlanes} Let $\alpha_q$ be an affine plane of order $q$, and let $O$ be any point of the plane. We label the $q+1$ lines through $O$ by $l_0, l_1, \ldots , l_{q}$. For any given point $Q\in \alpha_q$, we use $l_{i}+Q$ to denote the line parallel to $l_i$ that passes through $Q$. Let $a \bmod (q+1)$ denote the remainder of the division of $a$ by $q+1$. Pick any point $P_0$ on $l_0$, different from $O$. Let $P_1$ be the point of intersection of $l_{2}+P_0$ and $l_1$. Let $P_2$ be the point of intersection of $l_3+P_1$ and $l_2$, etc. In general, let $P_i$ be the point of intersection of $l_{i+1 \bmod q+1}+P_{i-1}$ and $l_i$, for all $i=1,2,\ldots , q$. Since $O\neq P_i \in l_i$ for all $i=1, 2, \ldots, q$, all these points are distinct. Similarly, the lines $P_{i-1} P_{i}$ are in different parallel classes, for all $i=1, 2, \ldots, q$. It follows that by joining the points $P_{i-1}$ and $P_{i}$, for all $i=1, 2, \ldots, q$, we obtain a path on $q+1$ vertices.
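The construction above is easy to experiment with computationally. The sketch below carries it out in the classical affine plane $AG(2,p)$ for a prime $p$; the concrete coordinate choices ($O=(0,0)$, $l_i\colon y=ix$ for $i<p$, and $l_p\colon x=0$, so that the parallel class of $l_i$ is its slope) are ours and are not forced by the text:

```python
# Build the path P_0 -> P_1 -> ... -> P_q of the construction above
# in AG(2, p), p prime, with O = (0,0), l_i : y = i*x (i < p), l_p : x = 0.

def build_path(p, P0):
    """P_i is the intersection of the line parallel to l_{(i+1) mod (p+1)}
    through P_{i-1} with the line l_i through O."""
    path = [P0]
    for i in range(1, p + 1):
        s = (i + 1) % (p + 1)           # parallel class of the joining line
        x0, y0 = path[-1]
        if s == p:                      # joining line is vertical: x = x0
            Pi = (x0, (i * x0) % p)     # meet l_i : y = i*x
        elif i == p:                    # meet l_p : x = 0
            Pi = (0, (y0 - s * x0) % p)
        else:                           # solve s*x + c = i*x, c = y0 - s*x0
            c = (y0 - s * x0) % p
            x = (c * pow(i - s, p - 2, p)) % p
            Pi = (x, (i * x) % p)
        path.append(Pi)
    return path

p = 5
paths = [build_path(p, (a, 0)) for a in range(1, p)]
# paths from distinct starting points are pairwise disjoint and
# together cover every point of the plane except O
points = [pt for path in paths for pt in path]
print(len(set(points)) == p * p - 1)    # True
```

For $p=5$ the four paths starting at $(1,0),\ldots,(4,0)$ turn out to be pairwise point-disjoint and to cover all $24$ points other than $O$, in line with the lemmas that follow.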
Denote this path by $\mathcal{P}_{P_0}$. \\ \begin{figure} \caption{Two vertex/edge disjoint paths, $\mathcal{P}_{P_0}$ and $\mathcal{P}_{Q_0}$, for $q=4$.} \end{figure} \begin{lemma}\label{lemdisjoint} Let $P_0 \neq Q_0 \in l_0$. Then the paths $\mathcal{P}_{P_0}$ and $\mathcal{P}_{Q_0}$ share neither points nor lines. \end{lemma} \begin{proof} Let \[ \mathcal{P}_{P_0}: \ P_0 \rightarrow P_1\rightarrow \cdots \rightarrow P_q \hspace{1in} \mathcal{P}_{Q_0}: \ Q_0 \rightarrow Q_{1} \rightarrow \cdots \rightarrow Q_q \] Clearly $P_i \neq Q_j$, for $i\neq j$. We also know that $P_0 \neq Q_0$. So, assume that $P_i=Q_i$ for some minimal $i\in\{1, \ldots , q\}$, so that $P_j\neq Q_j$, for all $0\leq j<i$. It follows that \[ \left( l_{i+1 \bmod q+1}+P_{i-1} \right) \cap l_i = P_i = Q_i =\left( l_{i+1 \bmod q+1}+Q_{i-1} \right)\cap l_i \] which forces $l_{i+1 \bmod q+1}+P_{i-1} = l_{i+1 \bmod q+1}+Q_{i-1} $, and thus $P_{i-1}=Q_{i-1}$, a contradiction. Finally, it is easy to see that if $\mathcal{P}_{P_0}$ and $\mathcal{P}_{Q_0}$ shared a line, then they would also share a point. \end{proof} \begin{lemma}\label{lemscycles} We can partition the points of $\alpha_q \setminus \{O\}$ into $s$ cycles, ${\mathcal C}_1, \ldots , {\mathcal C}_s$, where the length of ${\mathcal C}_i$ is $t_i(q+1)$ for some integer $t_i$, $1\le i \le s \le q-1$, $1\le t_1 \le \ldots \le t_s$, and $t_1 + \cdots + t_s = q-1$. \end{lemma} \begin{proof} If we label the points on $l_0 \setminus\{O\}$ by $x_1, x_2, \cdots, x_{q-1}$, then, by Lemma \ref{lemdisjoint}, the paths $\mathcal{P}_{x_1}$, $\mathcal{P}_{x_2}, \ldots$, $\mathcal{P}_{x_{q-1}}$ yield a partition of the points of $\alpha_q \setminus \{O\}$ into $q-1$ disjoint paths each having $q+1$ vertices. If $q\ge 3$, then we have at least two such paths, and we may want to connect them to create longer paths and/or cycles. Note that in the paths ${\mathcal P}_{x_i}$, no line parallel to $l_1$ has been used.
Now, if we consider a path $\mathcal{P}_{P_0}$, then the line $l_{1}+ P_{q}$ intersects $l_0$ at a point $Q$, which can never be equal to $O$: otherwise $l_{1}+ P_{q}=l_1$, and thus $P_q\in l_1\cap l_q = \{O\}$, a contradiction. This point $Q$ is uniquely determined by $P_0$ (and the way we do this construction, of course). If $Q=P_0$, then we get a $(q+1)$-cycle. If $Q\neq P_0$, then we re-label $Q=Q_0$ and consider the path $\mathcal{P}_{Q_0}$. This will give us a path with $2(q+1)$ vertices, namely \[ P_0 \rightarrow P_1\rightarrow \cdots \rightarrow P_q \rightarrow Q_0 \rightarrow Q_{1} \rightarrow \cdots \rightarrow Q_q. \] We then proceed to find $R_0: = \left( l_{1}+ Q_{q} \right) \cap l_0$. If $R_0=P_0$ we get a cycle of length $2(q+1)$. If $R_0 =Q_0$, then $Q_0$ is on two lines that are parallel to $l_1$, namely $l_{1}+ Q_{q}$ and $l_{1}+ P_{q}$. This forces $P_q$ and $Q_q$ to coincide, which is impossible because of Lemma \ref{lemdisjoint}. It follows that we either get a cycle of length $2(q+1)$ or we can keep extending this path using $\mathcal{P}_{R_0}$. Since $l_0$ contains finitely many points this process must end. Moreover, it is impossible to `close' this cycle at any point that is not $P_0$, as this would yield the same contradiction we obtained above when we assumed $R_0 =Q_0$. Hence, by combining paths we can construct cycles of length $t(q+1)$, for some positive integer $t$; these are the cycles ${\mathcal C}_i$ we wanted to find. \end{proof} In order to prove Theorem \ref{thmlongcycles} we will need to construct paths out of the cycles ${\mathcal C}_1, {\mathcal C}_2,\ldots, {\mathcal C}_{s}$. Firstly, we define terms and set notation that will be necessary for the rest of this article. \begin{definition} For every $i=1,\ldots, s$, let $P_{i,i-1}$ be an arbitrary point in $l_{i-1}\cap {\mathcal C}_i$ (note that there are $t_i$ such points), and let $P_{i,i}$ be its neighbor in ${\mathcal C}_i$ that lies on $l_i$.
We construct two different types of paths in ${\mathcal C}_{i}$: all of them start at $P_{i,i-1}$ and \begin{enumerate} \item the next vertex is $P_{i,i}$. The other vertices in the path are easily determined from these first two, or \item the next vertex is the neighbor of $P_{i,i-1}$ in ${\mathcal C}_{i}$ that is on the line $l_{i-2 \mod q+1}$. The other vertices in the path are easily determined from these first two. \end{enumerate} We will say that the first path is a \emph{positive} path, and that the second is a \emph{negative} path. \end{definition} \begin{lemma}\label{lemcaset_1} $k$-cycles can be embedded in $\alpha_q$, for all $3\leq k \leq t_1(q+1)$. \end{lemma} \begin{proof} If $q=2, 3$ the result is immediate. We assume $q\ge 4$ for the rest of this proof, and we write $P_0 = P_{1,0}$. The cycle ${\mathcal C}_1$ is of length $t_1(q+1)$, and so we only need to construct $k$-cycles with $3 \le k < t_1(q+1)$. \\ If $k\equiv 1 \pmod{q+1}$, then, since $k\ge 3$, we consider a positive $k$-path in ${\mathcal C}_1$ starting at $P_{1,0}$. As $k\equiv 1 \pmod{q+1}$, this path ends at some $ Q_0 \in l_0$, and $Q_0 \ne P_0$. Connect $P_0$ to $Q_{0}$ using $l_0$ to get a $k$-cycle. \\ If $k \equiv 2 \pmod{q+1}$ and $2< k<t_1(q+1)$, then $t_1>1$. Consider a positive $(k-2)$-path ${\mathcal P}$ in ${\mathcal C}_1$ starting at $P_{1,0}$. This path ends at a point $P_q\in l_q$. Since $k<t_1(q+1)$, there is a $2$-path in ${\mathcal C}_1$, disjoint from ${\mathcal P}$, of the form $Q_{q-1} \rightarrow Q_q$ with $Q_i\in l_i$, for $i=q-1,q$. Consider the following $k$-cycle \[ O \xrightarrow{l_0} \underbrace{P_0 \rightarrow \cdots \rightarrow P_{q-1}}_{in \ {\mathcal P}} \xrightarrow{l_{q-1}} Q_{q-1}\rightarrow Q_q \xrightarrow{l_q} O, \] where $P_{q-1}\in l_{q-1}$ is the neighbor of $P_q$ in ${\mathcal P}$. \\ If $k \not\equiv 1,2 \pmod{q+1}$, then, since $3\le k\le t_1(q+1)$, take a positive $(k-1)$-path in ${\mathcal C}_1$ starting at $P_{1,0}$.
This path will end at a point $P_{k-2}\in l_{k-2}$ (indices taken mod $q+1$). Connect $P_0$ and $P_{k-2}$ to $O$ using $l_0$ and $l_{k-2}$, respectively, to get a $k$-cycle. \end{proof} We now focus on the construction of $k$-cycles for $k > t_1(q+1)$. In order to do that we will use the following construction. \begin{construction}\label{constrP_m} Let $\lambda_m =t_1+t_2+\cdots + t_m$. We will construct a $\lambda _m (q+1)$-path $\mathcal{P}_m$ out of the cycles ${\mathcal C}_1, {\mathcal C}_2,\ldots, {\mathcal C}_{m}$, where $2\leq m \leq s$ (recall that $s\leq q-1$). \\ For each $i=1,\ldots, m-1$, we connect ${\mathcal C}_i$ with ${\mathcal C}_{i+1}$ by joining $P_{i,i}$ with $P_{i+1,i}$ using $l_i$. Then, for all $i=1,\ldots, m$, we take the $P_{i,i-1}P_{i,i}$ path in ${\mathcal C}_i$ having $t_i(q+1)$ vertices, and construct the following path \[ \underbrace{P_{1,0} \rightarrow \cdots \rightarrow P_{1,1}}_{in \ {\mathcal C}_1} \xrightarrow{l_1} \underbrace{P_{2,1} \rightarrow \cdots \rightarrow P_{2,2}}_{in \ {\mathcal C}_2} \xrightarrow{l_2} \cdots \xrightarrow{l_{m-1}} \underbrace{P_{m,m-1} \rightarrow \cdots \rightarrow P_{m,m}}_{in \ {\mathcal C}_m} \] Since no vertices were eliminated or added, and all new lines are distinct lines through $O$ (none of which was used in the construction of the ${\mathcal C}_i$'s), this construction yields a $P_{1,0}P_{m,m}$-path with $\lambda _m (q+1)$ vertices. \begin{figure} \caption{Construction of $\mathcal{P}_m$} \end{figure} \noindent Note that $O$ has not been used in the construction of $\mathcal{P}_m$, and that neither have the lines $l_m, \ldots, l_q$, and $l_0$. \\ Finally, we will denote the neighbor of $P_{1,0}$ in $\mathcal{P}_m$, which is a point on $l_q$, by $P_{1,q}$. \end{construction} We now prove Theorem \ref{thmlongcycles}. \begin{proof}[Proof of Theorem \ref{thmlongcycles}] In this proof we follow the notation introduced in Construction \ref{constrP_m}. If $q=2$, the existence of 3- and 4-cycles is obvious.
If $q=3$, pancyclicity can be easily verified. In what follows we assume $q\ge 4$, though most arguments hold for $q\ge 3$. We want to embed all possible $k$-cycles in $\alpha_q$ that have not been already discussed in Lemma \ref{lemcaset_1}. For any given $k$, we write it as either $k=\lambda_s(q+1)$, $k=\lambda_s(q+1)+1 = q^2$, or $k = \lambda_m(q+1)+r$, for some $m=1, \ldots, s-1$ and $0\leq r<t_{m+1}(q+1)$. Note that the case $m=0$ was taken care of in Lemma \ref{lemcaset_1}. Firstly, we can join $P_{1,0}$ and $P_{s,s}$ with $O$, using the lines $l_0$ and $l_s$ respectively, to obtain a cycle of length $\lambda_s(q+1)+1$. Note that this establishes hamiltonicity. Moreover, if we cut $\mathcal{P}_s$ short by one vertex, so that it ends at $P_{1,q}$, then joining the endpoints of this new path to $O$ yields a $\lambda_s(q+1)$-cycle.\\ \noindent From now on, let $k = \lambda_m(q+1)+r$, for some $m=1, \ldots, s-1$ and some $0\leq r<t_{m+1}(q+1)$. \\ Our strategy for constructing a $k$-cycle in $\alpha _q$ will be to connect a path on ${\mathcal C}_{m+1}$ (note that $m<s$) to $O$ and ${\mathcal P}_m$. The paths on ${\mathcal C}_{m+1}$ we will consider always start at $P_{m+1,m}$, which will be connected to $P_{m,m} \in {\mathcal P}_m$ by using $l_m$. \begin{figure} \caption{Connecting ${\mathcal P}_m$ and $\mathcal{C}_{m+1}$} \end{figure} We consider several cases. \\ \noindent \textbf{(a)} If $r\equiv 3 \pmod{q+1}$ we first get a positive path on $r-1$ vertices on ${\mathcal C}_{m+1}$ that ends at a point $Q_{m+1} \in l_{m+1}$. We then join $P_{1,0}$ with $O$ using $l_0$, and $Q_{m+1}$ with $O$ using $l_{m+1}$. The result is a cycle of the desired length. \textbf{(b)} If $r\equiv 1 \pmod{q+1}$ we consider a negative path on $r-1$ vertices on ${\mathcal C}_{m+1}$, which ends at a point $Q_{m+1} \in l_{m+1}$. We finish the construction as in case \textbf{(a)}.
\textbf{(c)} If $r\equiv 2 \pmod{q+1}$ or $r\equiv 0 \pmod{q+1}$, then we cut $\mathcal{P}_m$ short by one vertex, so that it ends at $P_{1,q}$. We take the path in ${\mathcal C}_{m+1}$ as in part \textbf{(a)} (positive, for $r\equiv 2 \pmod{q+1}$) or \textbf{(b)} (negative, for $r\equiv 0 \pmod{q+1}$), now on $r$ vertices, so that it again ends at a point $Q_{m+1}\in l_{m+1}$. We close the cycle by joining $P_{1,q}$ with $O$ using $l_q$, and $Q_{m+1}$ with $O$ using $l_{m+1}$.\\ \textbf{(d)} If $r\equiv i \pmod{q+1}$, where $4\leq i \leq q$, we take a positive $(r-1)$-path on ${\mathcal C}_{m+1}$ starting at $P_{m+1,m}$. This path ends at a point on $l_{(m+i-2) \bmod (q+1)}$. \\ \emph{(i)} If $i\leq q+2-m$, then $m+i-2 \leq q$, and thus this path on $r-1$ vertices ends at a point $Q_{m+i-2}\in l_{m+i-2}$. We then get a cycle of the desired length by joining $P_{1,0}$ with $O$ using $l_0$, and $Q_{m+i-2}$ with $O$ using $l_{m+i-2}$.\\ \emph{(ii)} If $i\geq q+3-m$, then $m+i-2 \geq q+1$, and thus this path on $r-1$ vertices ends at a point $Q_{t-2}\in l_{t-2}$, where $0\leq t-2 \leq m-3$. We next `shift' this path to make it start at $P_{m+1,m+1}$ instead of $P_{m+1,m}$ and add a vertex to make it a path on $r$ vertices. Now this path ends at $Q_{t}\in l_{t}$, where $2\leq t \leq m-1$. Since the line $l_t$ is needed to construct ${\mathcal P}_m$, we will need to modify the construction of ${\mathcal P}_m$ by connecting the cycles ${\mathcal C}_1, {\mathcal C}_2,\ldots, {\mathcal C}_{m+1}$ in the following way \[ {\mathcal C}_1\xrightarrow{l_1} {\mathcal C}_2 \xrightarrow{l_2} \cdots \xrightarrow{l_{t-2}} {\mathcal C}_{t-1}\xrightarrow{l_{t-1}} {\mathcal C}_t \xrightarrow{l_{t+1}} {\mathcal C}_{t+1} \xrightarrow{l_{t+2}} \cdots \xrightarrow{l_{m+1}} {\mathcal C}_{m+1} \] Note that this can be done for all $2 \leq t \leq m-1$, and that doing this means that $P_{t,t}$ is a `loose' vertex, no longer used in ${\mathcal P}_m$.\\ Now we connect this path to the path on ${\mathcal C}_{m+1}$ that ends on $Q_t$.
The line $l_t$ is now free, and thus it can be used to close the cycle at $O$. We get the cycle \[ O \xrightarrow{l_0} {\mathcal C}_1 \xrightarrow{l_1} \cdots \xrightarrow{l_{m}} {\mathcal C}_m \xrightarrow{l_{m+1}} P_{m+1,m+1} \rightarrow \cdots \rightarrow Q_t \xrightarrow{l_t} O \] This cycle has length: \[ \lambda_m(q+1) -1 + r +1 = \lambda_m(q+1) + r = k \] The `minus one' accounts for the loose vertex, and the `plus one' for $O$. \end{proof} \section{Cycles in Projective Planes} In this section we will study embeddings of cycles in finite projective planes. Let $\pi_q$ denote a projective plane of order $q$. We think of $\pi_q$ as obtained from an affine plane $\alpha _q$ by adding a line, denoted $\ell_{\infty}$, consisting of the points $(i)$, for $i=0, \cdots, q$. Using the notation from the previous section, each of these points $(i)$ is incident with only the following lines: $\ell_{\infty}$, the line $l_i$ of $\alpha _q$, and the $q-1$ lines of $\alpha _q$ parallel to $l_i$. The next statement follows immediately from our work in Section \ref{sec:CyclesAffinePlanes}. \begin{corollary}\label{lemprojcycles} Let $\pi_q$ be a projective plane of order $q$. Then, a $k$-cycle can be embedded in $\pi_q$, for all $k=3, \ldots , q^2$. \end{corollary} Therefore, in order to prove the pancyclicity of $\pi _q$, we need to show that $k$-cycles can be embedded into $\pi_q$ for all $k$, $q^2+1 \leq k \leq q^2+q+1$. At this point one would expect to rely heavily on the pancyclicity of $\alpha _q$ for the construction of `long' cycles in $\pi_q$, but we could not make use of this idea. Instead, we base our construction methods on using the cycles ${\mathcal C}_i$ in ways similar to those in the proof of Theorem \ref{thmlongcycles}. Let $W_1$ be any of the vertices of ${\mathcal C}_1$ that are on $l_q$, and let $V_1= (l_1+W_1) \cap l_0$. It follows that $V_1$ is a vertex of ${\mathcal C}_1$, and that $l_1+W_1$ is an edge of ${\mathcal C}_1$.
Similarly, for $2\leq i \leq s$, let $W_i \in l_{i-2}$ be a vertex of ${\mathcal C}_i$, and $V_i= (l_i + W_i) \cap l_{i-1}$. Hence, $V_i$ is a vertex of ${\mathcal C}_i$, and $l_i + W_i$ is an edge of ${\mathcal C}_i$. For each $i=1, \cdots , s$, let $[V_i,W_i]$ denote the $V_iW_i$-path in ${\mathcal C}_i$, different from the edge $V_iW_i$. Next we define $U_i$ to be the vertex of ${\mathcal C}_i$ that is on $l_q$ and that is the closest to $V_i$, when we move from $V_i$ towards $W_i$ along ${\mathcal C}_i$. By $[V_iU_i]$ we denote the subpath of $[V_i,W_i]$ having $q-i+2$ vertices and endpoints $V_i$ and $U_i$. \begin{figure} \caption{Paths $[V_i,W_i]$ and $[V_iU_i]$} \end{figure} \begin{figure} \caption{Paths $[V_1,W_1]$, $[V_2,W_2]$, and $[V_3,W_3]$} \end{figure} Recall that $(i)= l_i \cap \ell_{\infty}$. We now construct a path ${\mathcal P}$ (for $s\geq 2$) by connecting $W_i$ with $(i)$ using $l_i+W_i$ (which is not an edge of $[V_i,W_i]$), and connecting $(i)$ with $V_{i+1}$ using $l_i$. Thus ${\mathcal P}$ is the path: \[ [V_1,W_1] \xrightarrow{l_1+W_1} (1) \xrightarrow{l_1} [V_2,W_2] \xrightarrow{l_2+W_2} (2) \rightarrow \cdots \rightarrow (s-1) \xrightarrow{l_{s-1}} [V_s,W_s]. \] For $s=1$, ${\mathcal P}$ is obtained from the cycle ${\mathcal C} _1$ by removing the edge $V_1W_1$. Note that for all $s$, ${\mathcal P}$ has $(q^2-1)+(s-1)=q^2+s-2$ vertices. The lines $l_{s}, \cdots , l_q, l_0$, $l_s+W_s$, and $\ell_{\infty}$ have not been used in the construction of ${\mathcal P}$, and neither have the points $(s), \cdots , (q), (0)$, and $O$. \begin{figure} \caption{Paths $[V_1,W_1]$, $[V_2,W_2], [V_3,W_3]$ joined into a path ${\mathcal P}$. Its endpoints are $V_1$ and $W_3$} \end{figure} \begin{figure} \caption{A simple diagram of the path ${\mathcal P}$} \end{figure} Now we begin our construction of $k$-cycles in $\pi_q$ of lengths from $q^2+1$ to $q^2+q+1$ by using the path ${\mathcal P}$ and/or modifications of it. 
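The path ${\mathcal P}$ just constructed is assembled from the cycles ${\mathcal C}_i$ of Lemma \ref{lemscycles}. For the classical plane $AG(2,q)$ over $\mathbb{Z}_q$ with $q$ prime (an illustrative assumption: the paper's planes are abstract, and the coordinate labelling and function names below are our own), the existence of this cycle partition can be checked computationally:

```python
# Hedged sketch (our own coordinate model, not the paper's): in AG(2, q)
# over Z_q with q prime, link the paths P_{x0} by lines parallel to l_1 and
# check that the resulting cycles partition the q^2 - 1 points different
# from O, each cycle having length a multiple of q + 1.

def meet(P, d, i, q):
    """Intersection of (l_d + P) with l_i; slopes 0..q-1, direction q = vertical, d != i."""
    if d == q:
        x = P[0] % q
    elif i == q:
        x = 0
    else:
        x = ((P[1] - d * P[0]) * pow((i - d) % q, -1, q)) % q
    return (x, (P[1] - d * P[0]) % q if i == q else (i * x) % q)

def cycle_partition(q):
    def path_from(x0):
        pts = [(x0, 0)]
        for i in range(1, q + 1):
            pts.append(meet(pts[-1], (i + 1) % (q + 1), i, q))
        return pts

    cycles, unused = [], set(range(1, q))
    while unused:
        start = x = min(unused)
        cyc = []
        while True:
            cyc += path_from(x)
            unused.discard(x)
            x = meet(cyc[-1], 1, 0, q)[0]   # next start: (l_1 + P_q) meets l_0
            if x == start:
                break
        cycles.append(cyc)
    return cycles
```

For instance, for $q=7$ this yields two cycles of length $3(q+1)=24$ each, i.e.\ $s=2$. (Again uses the Python 3.8+ modular inverse `pow(a, -1, q)`.)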
Recall that $s$ denotes the number of all cycles ${\mathcal C}_i$, or of all paths $[V_i,W_i]$, and that $1\le s\le q-1$. We will first construct cycles of length between $q^2+1$ to $q^2+s+2$, and then the ones that are longer than $q^2+s+2$. \begin{lemma}\label{lempathsq^2+1->q^2+s} Cycles of length ranging from $q^2+1$ to $q^2+s+2$ can be embedded in~$\pi_q$. \end{lemma} \begin{proof} Using the path ${\mathcal P}$ we can construct \begin{enumerate} \item a cycle of length $q^2+s+2$: \[ \underbrace{V_1 \rightarrow \cdots \rightarrow W_s}_{in \ {\mathcal P}} \xrightarrow{l_s+W_s} (s) \xrightarrow{l_s} O \xrightarrow{l_q} (q) \xrightarrow{\ell_{\infty}} (0) \xrightarrow{l_0} V_1, \] \item a cycle of length $q^2+s+1$: \[ \underbrace{V_1 \rightarrow \cdots \rightarrow W_s}_{in \ {\mathcal P}} \xrightarrow{l_s+W_s} (s) \xrightarrow{\ell_{\infty}} (q) \xrightarrow{l_q} O \xrightarrow{l_0} V_1,\, \text{and} \] \item a cycle of length $q^2+s$: \[ \underbrace{V_1 \rightarrow \cdots \rightarrow W_s}_{in \ {\mathcal P}} \xrightarrow{l_s+W_s} (s) \xrightarrow{\ell_{\infty}} (0) \xrightarrow{l_0} V_1. \] Note that the lines $l_{s}, \cdots , l_q$ have not been used in the construction of this last cycle, and neither have the points $(s+1), \cdots , (q)$, and $O$. We will denote this cycle by $\mathcal{C}$. \end{enumerate} If $q+1\leq 2s$, then, for every $q-s+1\leq i \leq s$, let us modify $\mathcal{C}$ in the following way: \begin{itemize} \item Delete $q-i+1$ vertices of the path $[V_iU_i]$, all except $U_i$. \item Connect $(i-1)$ with $O$ (recall that $(i-1)$ was connected to $V_i$ in $\mathcal{C}$ via $l_{i-1}$). \item Connect $O$ with $U_i$ using $l_q$. \end{itemize} This yields the cycle \[ \underbrace{U_i \rightarrow \cdots \rightarrow (i-1)}_{in \ \mathcal{C}} \xrightarrow{l_{i-1}} O \xrightarrow{l_q} U_i \] which has length $(q^2+s)-(q-i+1)+1 = q^2-q+s+i$. Since $q-s+1\leq i \leq s$, the length of this cycle ranges between $q^2+1$ and $(q^2+s)-(q-s)$. 
Note that if $q+1> 2s$, then $(q^2+s)-(q-s) < q^2 +1$. So, for all relevant values of $s$ we have been able to construct cycles with lengths ranging from $q^2+1$ to $(q^2+s)-(q-s)$. \medbreak Next we want to construct $k$-cycles for $(q^2+s)-(q-s)<k<q^2+s$. In order to do that we need to set more notation. Let us relabel the vertices in the path $[V_sU_s]$ by $V_s =P_{s-1}, P_s, \cdots , P_{q-1}, P_q$, where $P_i\in l_i$, for all $i=s, \ldots , q$. Note that $P_q = U_s$. For $s-1\le i<j\le q$, let $[P_iP_j]$ denote the subpath of $[V_sU_s]$ joining $P_i$ and $P_j$. Note that a cycle with $(q^2+s)-(q-s)$ vertices may be obtained by using $i=s$ in the previous construction. We want to use a similar construction to get a cycle of length $(q^2+s)-(q-s)+1$. We modify $\mathcal{C}$ by replacing its subpath $[P_{s-1}P_{q-1}]$ by a path \[ (s-1) \xrightarrow{l_{s-1}} O \xrightarrow{l_{q-1}} P_{q-1} \rightarrow P_{q} \] leading to the following cycle of length $(q^2+s)-(q-s) +1$. \[ \underbrace{U_s \rightarrow \cdots \rightarrow (s-1)}_{in \ \mathcal{C}} \xrightarrow{l_{s-1}} O \xrightarrow{l_{q-1}} P_{q-1} \rightarrow P_{q} \] Note that we are using here that $s\leq q-1$. Now, to create cycles of length larger than $(q^2+s)-(q-s)+1$ we use the following strategy. For every $i=s, \cdots , q-1$, we modify $\mathcal{C}$ by connecting $P_i$ with $O$, and $O$ with $P_q$, to get the cycle \[ \underbrace{P_q \rightarrow \cdots \rightarrow P_i}_{in \ \mathcal{C}} \xrightarrow{l_{i}} O \xrightarrow{l_q} P_q , \] which has length $(q^2+s)-(q-i-1)+ 1=(q^2+s)-(q-i -2)$. Since $i=s, \cdots , q-1$, the length of this cycle ranges from $(q^2+s)-(q-s)+2$ to $(q^2+s)+1$. \end{proof} \begin{corollary} With the same notation used in Lemma \ref{lempathsq^2+1->q^2+s}, if $s=q-1$, then $\pi_q$ is pancyclic. \end{corollary} Now we want to construct cycles longer than $q^2+s+2$ for $1\leq s < q-1$.
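The remaining lemma targets cycle lengths all the way up to $q^2+q+1$, the total number of points of $\pi_q$. In the coordinate model obtained by adjoining $\ell_{\infty}$ to $AG(2,q)$ over $\mathbb{Z}_q$ ($q$ prime; again an illustrative assumption of ours, as are the names below), these parameters of $\pi_q$ can be confirmed directly:

```python
# Illustrative sketch: build PG(2, q) from AG(2, q) over Z_q (q prime) by
# adjoining the points (d) at infinity and the line l_infinity, then check
# the counts used in the text: q^2 + q + 1 points and lines, q + 1 points
# on each line, and a unique line through any two distinct points.

def projective_plane(q):
    points = [(x, y) for x in range(q) for y in range(q)]
    points += [('inf', d) for d in range(q + 1)]        # the points (d)
    lines = []
    for d in range(q):                                   # slope-d lines
        for c in range(q):
            lines.append([(x, (d * x + c) % q) for x in range(q)] + [('inf', d)])
    for c in range(q):                                   # vertical lines
        lines.append([(c, y) for y in range(q)] + [('inf', q)])
    lines.append([('inf', d) for d in range(q + 1)])     # the line l_infinity
    return points, [frozenset(l) for l in lines]
```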
\begin{lemma}\label{lempathsq^2+s+2<} For every $1\leq s < q-1$, cycles of length ranging from $q^2+s+3$ to $q^2+q+1$ can be embedded in $\pi_q$. \end{lemma} \begin{proof} Just as we did in the proof of Lemma \ref{lempathsq^2+1->q^2+s}, the idea is to modify the path ${\mathcal P}$ to get the desired cycles. Hence, we will use the same notation introduced earlier in this section, including that used in the proof of Lemma \ref{lempathsq^2+1->q^2+s}. We first eliminate the edge $l_{s+1}+V_s$ from ${\mathcal P}$, and connect $W_s$ with $V_s$ using $l_s+W_s$. This gives us a path $\tilde{{\mathcal P}}$ that has the same length as ${\mathcal P}$ ($q^2+s-2$ vertices) with endpoints $V_1$ and $P_s$. \hspace{.2in} Note that the lines $l_{s}, \cdots , l_q, l_0$, $l_{s+1}+V_s$, and $\ell_{\infty}$ have not been used, and neither have the points $(s), \cdots , (q), (0)$, and $O$. If we now eliminate the edge $l_{s+2}+P_s$ and, instead, connect $P_s$ with $P_{s+1}$ using \[ P_s \xrightarrow{l_{s+1}+V_s} (s+1) \xrightarrow{l_{s+1}} P_{s+1}, \] we get a path ${\mathcal G}_1$ on $(q^2+s-2)+1$ vertices (one more than $\tilde{{\mathcal P}}$). We may close this path into a cycle by \[ V_1 \xrightarrow{l_{0}} (0) \xrightarrow{\ell_{\infty}} (s) \xrightarrow{l_{s}} P_s \xrightarrow{l_{s+1}+V_s} (s+1) \xrightarrow{l_{s+1}} \underbrace{P_{s+1} \rightarrow \cdots \rightarrow V_1}_{in \ \tilde{{\mathcal P}}} \] which has length $(q^2+s-1)+2=q^2+s+1$. Now we eliminate the edge $l_{s+3 \mod q+1}+P_{s+1}$ from ${\mathcal G}_1$ and instead connect $P_{s+1}$ with $P_{s+2}$ using \[ P_{s+1} \xrightarrow{l_{s+2}+P_s} (s+2) \xrightarrow{l_{s+2}} P_{s+2}. \] This yields a path ${\mathcal G}_2$ on $(q^2+s-2)+2$ vertices (two more than $\tilde{{\mathcal P}}$).
We may close this path into a cycle by using $\tilde{{\mathcal P}}$ as above {\small \[ V_1 \xrightarrow{l_{0}} (0) \xrightarrow{\ell_{\infty}} (s) \xrightarrow{l_{s}} P_s \xrightarrow{l_{s+1}+V_s} (s+1) \xrightarrow{l_{s+1}} P_{s+1} \xrightarrow{l_{s+2}+P_s} (s+2) \xrightarrow{l_{s+2}} \underbrace{P_{s+2} \rightarrow \cdots \rightarrow V_1}_{in \ \tilde{{\mathcal P}}} \] } which has length $(q^2+s)+2=q^2+s+2$. In general, for $1\leq i< q-s$ (and thus $s+i +1< q+1$), given a path ${\mathcal G}_i$ on $(q^2+s-2)+i$ vertices constructed as above, we can eliminate the edge $l_{s+i+2 \mod q+1}+P_{s+i}$ from ${\mathcal G}_i$ and instead connect $P_{s+i}$ with $P_{s+i+1}$ using \[ P_{s+i} \xrightarrow{l_{s+i+1}+P_{s+i-1}} (s+i+1) \xrightarrow{l_{s+i+1}} P_{s+i+1} \] This yields a path ${\mathcal G}_{i+1}$ on $(q^2+s-2)+i+1$ vertices ($i+1$ more than $\tilde{{\mathcal P}}$). We may close this path into a cycle as we did above \[ V_1 \xrightarrow{l_{0}} (0) \xrightarrow{\ell_{\infty}} (s) \xrightarrow{l_{s}} P_s \xrightarrow{l_{s+1}+V_s} (s+1) \xrightarrow{l_{s+1}} P_{s+1}\xrightarrow{l_{s+2}+P_s} (s+2) \rightarrow \cdots \hspace{1in} \] \[ \hspace{1.3in} \cdots \rightarrow P_{s+i} \xrightarrow{l_{s+i+1}+P_{s+i-1}} (s+i+1) \xrightarrow{l_{s+i+1}} \underbrace{P_{s+i+1} \rightarrow \cdots \rightarrow V_1}_{in \ \tilde{{\mathcal P}}} \] which has length $q^2+s+i+1$. This will yield cycles of length up to $q^2+q$. The line not used in the $(q^2+q)$-cycle ${\mathcal Q}$ is $l_0+P_{q-1}$, and the point not used is $O$. Figure \ref{fig:Q} gives an idea of what ${\mathcal Q}$ looks like. \begin{figure} \caption{Cycle ${\mathcal Q}$} \label{fig:Q} \end{figure} \hspace{.1in} In order to construct a $(q^2+q+1)$-cycle we use ${\mathcal Q}$ and modify it as follows. \begin{itemize} \item eliminate $\ell_{\infty}$, which connected $(0)$ and $(s)$. \item eliminate $l_{q-1}$, which connected $(q-1)$ and $P_{q-1}$. \item eliminate $l_{q}$, which connected $(q)$ and $P_{q}$.
\item connect $(s)$ and $(q)$ using $\ell_{\infty}$. \item connect $(q-1)$ and $O$ using $l_{q-1}$. \item connect $P_q$ and $O$ using $l_{q}$. \item connect $P_{q-1}$ and $(0)$ using $l_0+P_{q-1}$. \end{itemize} We get the following hamiltonian cycle: \begin{figure} \caption{A hamiltonian cycle} \end{figure} \end{proof} \begin{proof}[Proof of Theorem \ref{main}] It follows from Lemmas \ref{lempathsq^2+1->q^2+s} and \ref{lempathsq^2+s+2<}. \end{proof} We wish to conclude this paper with a conjecture. Let $s\ge 1$ and $n\ge 2$. A finite partial plane $\mathcal{G} = ({{\mathcal P}}, {{\mathcal L}} ; {\mathcal I})$ is called a {\it generalized $n$-gon of order $s$} if its Levi graph is $(s+1)$-regular, has diameter $n$, and has girth $2n$. It is known that generalized $n$-gons of order $s\ge 2$ exist only for $n=2,3,4,6$; see Feit and Higman \cite{FH}. It is easy to argue that the number of points and the number of lines in a generalized $n$-gon are both $p_s^{(n)} := s^{n-1} + s^{n-2} + \cdots + s+1$. Note that a projective plane of order $q$ is a generalized $3$-gon (generalized triangle) of order $q$, and so $p_q^{(3)} = q^2 + q +1 = n_q$ -- the notation used in this paper earlier. \begin{conjecture} Let $s\ge 2$ and $n\ge 3$. Then $C_k \hookrightarrow \mathcal{G}$ for all $k$, $n\le k\le p_s^{(n)}$. \end{conjecture} \hspace{.2in} \noindent \textbf{Acknowledgement:} The authors are thankful to Benny Sudakov for the clarification of related results obtained by probabilistic methods. \end{document}
Novel link prediction for large-scale miRNA-lncRNA interaction network in a bipartite graph Zhi-An Huang1,2, Yu-An Huang3, Zhu-Hong You3, Zexuan Zhu1 & Yiwen Sun4 BMC Medical Genomics volume 11, Article number: 113 (2018) Current knowledge and data on miRNA-lncRNA interactions are still limited, and little effort has been made to predict the target lncRNAs of miRNAs. Accumulating evidence suggests that the interaction patterns between lncRNAs and miRNAs are closely related to relative expression levels, forming a titration mechanism; this could provide an effective approach for characteristic feature extraction. In addition, using the coding-non-coding co-expression network and sequence data could also help to measure the similarities among miRNAs and lncRNAs. By mathematically analyzing these types of similarities, we arrive at two findings: (i) lncRNAs/miRNAs tend to collaboratively interact with miRNAs/lncRNAs of similar expression profiles, and vice versa, and (ii) those miRNAs interacting with a cluster of common target genes tend to jointly target the common lncRNAs. In this work, we developed a novel group-preference Bayesian collaborative filtering model called GBCF for picking up a top-k probability ranking list for an individual miRNA or lncRNA based on the known miRNA-lncRNA interaction network. To evaluate the effectiveness of GBCF, leave-one-out and k-fold cross validations as well as a series of comparison experiments were carried out. GBCF achieved area under ROC curve (AUC) values of 0.9193, 0.8354±0.0079, 0.8615±0.0078, and 0.8928±0.0082 based on leave-one-out, 2-fold, 5-fold, and 10-fold cross validations respectively, demonstrating its reliability and robustness. GBCF could be used to select potential lncRNA targets of specific miRNAs and offers great insights for further research on the ceRNA regulation network.
The advent of next-generation sequencing has opened up new avenues to understand specific biomechanisms from genome-wide biomolecular interactions. The essential role of non-coding RNAs (ncRNAs) in biological processes reveals that the transcriptional landscape of humans and other organisms is far more complicated than previously thought [1]. As the majority of transcripts expressed in mammals, ncRNAs can range from around 22 nucleotides up to hundreds of kb. In particular, long non-coding RNA (lncRNA) is a loosely classified group of RNA transcripts (> 200 nucleotide bases) without apparent protein-coding function that can be discovered in any branch of life [2]. Increasing evidence has shown that lncRNAs can participate in various cellular processes, including mRNA splicing, protein translation, cell growth/death, chromatin modification, cell differentiation and transcriptional complex targeting. Even though more than 58,000 human lncRNA genes have been identified, apart from a few well-studied lncRNAs like XIST and HOTAIR, most of them are still poorly characterized due to their dynamic and complicated molecular mechanisms [3]. LncRNAs are involved in the regulation of protein expression patterns through a variety of biological interactions such as lncRNA-ncRNA, lncRNA-mRNA and lncRNA-protein interactions [4]. Therefore, the construction of an inferred biological interaction network mediated by lncRNAs is desirable to uncover the potential mechanisms and biological functions of lncRNAs. LncRNA, as a main type of competing endogenous RNA (ceRNA), can function as a miRNA sponge, lowering the regulatory effect of miRNAs on mRNAs; that is, miRNAs have an important influence on the molecular mechanisms of lncRNAs [1].
Previous work on human lncRNA function annotation was mainly based on the expression levels of lncRNAs and protein-coding genes in diverse tissues [5, 6], but few functional annotations were explained in terms of the ceRNA network. With the accumulation of knowledge on miRNA function over the past decade, miRNA-lncRNA interactions can provide new insights into the complex functions of lncRNAs. The important influence of miRNA on lncRNA function, and the converse, is now gaining widespread attention [3, 7]. Numerous studies have demonstrated that both miRNAs and lncRNAs are involved in pathological processes including diverse human disorders and diseases, and the regulatory role of miRNA-lncRNA interactions in some complex human diseases has been systematically investigated [8]. For example, the miRNA-lncRNA regulatory networks in vascular diseases and cancers (e.g. gastric cancer and prostate cancer) have been well constructed and studied in [9,10,11]. A detailed understanding of the effects of miRNA-lncRNA-mediated interactions in pathophysiology could pave the way for drug toxicology, biomarker discovery and therapeutic approaches. However, the current knowledge of miRNA-lncRNA interactions identified by biological experiments is still limited. In recent years, computational models have been extensively used for predicting bi-partite relationships (e.g. drug-target interactions [12,13,14,15], lncRNA-disease associations [16] and microbe-disease associations [17,18,19]). As an indispensable step in identifying miRNA-target interactions, it is common practice to use computational predictions to refine the candidate list before further experimental validation [20, 21]. However, most existing miRNA-target inference algorithms were initially proposed for mRNA targets, and the inferences are therefore based on the statistical rules and nature of miRNA-mRNA interactions [22].
The common rules on which most existing miRNA-target prediction tools are based mainly come from four aspects: conservation, seed match, free energy, and site accessibility; some of them could even contradict the nature of miRNA-lncRNA interactions [3]. For example, based on the observation that the miRNA seed regions of mRNAs tend to have apparently higher conservation than the non-seed ones, a few previously proposed prediction approaches for miRNA-target interactions conduct conservation analysis primarily concentrating on the regions in the 3' UTR and the 5' UTR of mRNA. However, lncRNAs have been found to demonstrate distinctly lower sequence conservation and faster evolution than mRNAs [3]. Moreover, the statistical rules on which the strategy of seed match is based were first derived from miRNA-mRNA interactions, and are therefore not suitable for miRNA-lncRNA interaction prediction. A number of computational prediction models have been proposed for lncRNA-RNA interaction via the simple calculation of the free energy of the potential binding sites [3]. For instance, LncTar was proposed to calculate the free energy, which serves as a measurement of the stability of complementarity between lncRNAs and target RNAs [22]. Such sequence-based inference methods have achieved success in various applications, but they can easily be plagued by high false positive rates [20]. In addition, there exist a few inherent characteristics distinguishing lncRNAs from mRNAs. For example, unlike mRNAs, lncRNAs are more enriched in the nucleus and expressed at lower levels. They are also shorter, with fewer exons, and have higher specificity of tissue distribution as well as reduced stability [3]. Most previously proposed miRNA-target inference tools fail to incorporate recent achievements in the understanding of miRNA-lncRNA interactions and could therefore be ineffective for miRNA-lncRNA interaction inference.
Recent studies have provided insights into modeling the crosstalk among diverse types of ceRNAs, including miRNAs and lncRNAs, within the cell [23]. On top of well-known factors such as miRNA response element (MRE) accessibility, related to RNA-binding proteins or secondary structure, as well as subcellular localization, the expression profiling of lncRNA and miRNA is an important way to decipher the principles of ceRNA regulation networks [24]. Previous research, including studies of small RNA (sRNA) regulation [25], protein-protein interactions [26,27,28], and miRNA-target threshold effects [29], suggests that miRNAs and lncRNAs serve as two key components of the ceRNA network, and that a titration mechanism helps to orchestrate their interaction with each other by forming threshold levels of effect. This titration mechanism is based on the basic postulate that a limited number of available miRNA molecules could contribute to the inactiveness of lncRNA regulation, whereas abundant miRNA molecules could result in completely repressed lncRNA; the optimal miRNA-lncRNA cross-regulation therefore emerges and sustains at a near-equimolar equilibrium [24]. In other words, RNA dosage is particularly critical for cross-regulation in the ceRNA network. It is worth noting that a kinetic mathematical model [24] under such considerations was proposed for the inference of ceRNA interactions mediated via phosphatase and tensin homolog (PTEN). However, all the factors used by this model, such as degradation and transcription rates for the association and dissociation of miRNA/ceRNA complexes [24], are too difficult to survey for most miRNAs and lncRNAs. Therefore, it is not feasible to extensively use this kinetic model for the inference of miRNA-lncRNA interactions. Increasing evidence [30, 31] has demonstrated that lncRNAs are also presumably co-regulated in expression networks, and that multiple lncRNAs can be involved in biological regulation processes by synergistically interacting with particular miRNA clusters.
Accordingly, the expression pattern of the lncRNA-lncRNA synergistic network has recently attracted increasing attention. In this work, we develop a group-preference Bayesian collaborative filtering model called GBCF to produce a top-k probability ranking list for an individual miRNA or lncRNA based on the known miRNA-lncRNA interaction network derived from the lncRNASNP database. Since the known miRNA-lncRNA interactions in the lncRNASNP database are all positive, negative samples are relatively hard to collect. The prediction task is therefore a semi-supervised one that treats only the known interactions as positive samples; such a semi-supervised task can exploit side information beneficial to prediction performance. In particular, we first propose a local scoring scheme to alleviate the prediction bias caused by the disproportion of the known miRNA-lncRNA interaction network. To evaluate the prediction performance of the proposed model, we implemented both leave-one-out cross validation (LOOCV) and k-fold cross validation. The experimental results demonstrate that GBCF delivers reliable prediction performance, achieving a higher AUC (area under the ROC curve) of 0.9193 than a few representative classical classifiers and the state-of-the-art model EPLMI [32]. GBCF obtained average AUCs of 0.8354+/− 0.0079, 0.8615+/− 0.0078 and 0.8928+/− 0.0082 under 2-fold, 5-fold and 10-fold cross validation, respectively. To better describe the similarities among miRNAs and lncRNAs, we leveraged three diverse types of biological information, i.e., expression profiles, coding-non-coding co-expression networks, and sequence data.
Using a series of 5-fold cross validations and correlation analysis of RNA clusters, the experimental comparison demonstrated that miRNA and lncRNA similarities are best measured by biological function-based and expression profile-based correlations, respectively. The experimental results of cross validation Using LOOCV, we compared GBCF with a few classical classifiers [33,34,35,36] as well as the state-of-the-art model EPLMI [30] as baselines. Note that all the compared models were built on the same information sources as GBCF. EPLMI is a two-way diffusion model first proposed for the prediction of large-scale miRNA-lncRNA interactions. Unlike GBCF, EPLMI adopts a global scoring scheme to rank the most promising novel miRNA-lncRNA interactions among all unobserved samples. We also explored the potential of the classical classifiers from different perspectives. For example, Katz can be categorized as a network-based measurement method that calculates node similarity in a bipartite graph. Singular-value decomposition (SVD) decomposes the known interaction network into three smaller matrices from which the probability matrix is constructed. The latent factor model (LFM) explains observed associations in terms of two latent factors (also called hidden variables), which are iteratively optimized so that their matrix product serves as the probability matrix. Since GBCF adopts a specific group-preference Bayesian collaborative filtering (CF) technique, we also compared it with typical lncRNA-based and miRNA-based CF models. The performance comparison via LOOCV is shown in Fig. 1. Among these models, GBCF achieves the best prediction performance with the highest AUC value of 0.9193. The miRNA-based CF, lncRNA-based CF, EPLMI, SVD-based model and basic LFM obtain AUC values of 0.9089, 0.8880, 0.8847, 0.8402 and 0.8680, respectively. It is noteworthy that the CF-based models seem to perform better than the others.
This phenomenon can be attributed to their capability of automatically collecting extrinsic preferences from other RNAs. Although the EPLMI model still maintains reasonable prediction accuracy, its global ranking scheme limits its performance to a certain extent. GBCF is developed from previous approaches in recommender systems and deals with sparse datasets more efficiently than EPLMI. In summary, the LOOCV results demonstrate the reliability of GBCF. - The comparison results between GBCF and four classical classifiers as well as the competitor EPLMI model in terms of LOOCV Insufficient training samples can greatly affect prediction accuracy (sparsity = 2.49%). To evaluate the performance of GBCF at different sparsity levels, 2-fold, 5-fold and 10-fold cross validations were conducted. As shown in Table 1, the GBCF model achieves an average AUC of 0.8354+/− 0.0079 even when the number of training samples drops by half. The results suggest that the GBCF model is strongly robust to different levels of training data sparsity. We also used 5-fold cross validation to assess the performance of GBCF with lncRNA-based group preference instead. The resulting average AUC of 0.8612+/− 0.0080 suggests that miRNA- and lncRNA-based group preferences contribute equally to the prediction performance of GBCF. Considering the complex competition mechanisms in the ceRNA network and the lack of investigation into the competition patterns for sequestering miRNAs, we provide top-50 ranking lists of candidate target lncRNAs, with the corresponding prediction scores, for each miRNA using miRNA-based group preference (publicly available in Additional file 1). It is anticipated that these prediction results can shed light on deciphering ceRNA regulation networks.
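The AUC values reported in these comparisons can be estimated directly from how positive candidates rank against negative ones. A minimal sketch of this rank-based estimator (pure Python, not the authors' implementation):

```python
def auc(pos_scores, neg_scores):
    # AUC equals the probability that a randomly chosen positive sample
    # outranks a randomly chosen negative one (ties counted as half).
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))
```

For a perfect ranking this returns 1.0, while a random ranking yields roughly 0.5; under LOOCV the held-out known interaction plays the role of the single positive among the unobserved candidates.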
Table 1 The experimental results of k-fold cross validation The performance evaluation with different types of RNA similarity In this subsection, we explore effective measurements of different RNA similarities, i.e., sequence-based similarity, expression profile-based similarity, and biological function-based similarity derived from RNA-target gene interactions. To evaluate the prediction performance with different types of RNA similarity, 5-fold cross validation was used in these comparison experiments (see Table 2). When evaluating the usefulness of a similarity for one type of RNA, the other type of RNA was assigned its best similarity, i.e., the expression profile-based similarity for lncRNAs and the biological function-based similarity for miRNAs. Table 2 To evaluate the usefulness of diverse types of RNA similarity, 5-fold cross validation was implemented on the GBCF model With regard to lncRNAs, the GBCF model yields the highest average AUC of 0.8615+/− 0.0078 using the expression profile-based similarity, and lower average AUCs of 0.8084+/− 0.0080 and 0.8219+/− 0.0081 based on the sequence- and biological function-based similarities, respectively. Since the lengths of the lncRNAs vary widely (73 to 59,462 bp in our investigation), pairwise global alignment tends to fail to measure sequence similarity among lncRNAs via their nucleotide bases. Moreover, unlike miRNAs, lncRNAs can play different biological roles in the ceRNA network; for example, miRNAs tend to be sequestered via small binding sites in lncRNAs. The known annotations based on the coding-non-coding co-expression network cannot comprehensively describe how biologically similar the regulation mechanisms of two lncRNAs are. In summary, this result demonstrates that expression profiling can be a promising marker to characterize lncRNA similarity.
As for miRNA similarities, the comparison results demonstrate that they make different contributions to the performance of GBCF. The results in Table 2 show that miRNA sequence-, expression profile- and biological function-based similarities yield average AUCs of 0.7729+/− 0.0078, 0.8382+/− 0.0081 and 0.8615+/− 0.0078, respectively. The AA index, as a local similarity-based method, can better exploit the implicit topological information among miRNAs in the network of miRNA-target gene interactions. Therefore, the biological function-based similarity, with the best average AUC, was chosen as the miRNA similarity measurement. We also investigated the prediction performance of GBCF without any similarity, using only the known miRNA-lncRNA interactions, as a baseline test. In this case, GBCF achieves an average AUC of 0.6840+/− 0.0116, also under 5-fold cross validation. Similarity analysis of miRNA and lncRNA clusters between observed and unobserved miRNA-lncRNA interactions To further analyze the correlation of the utilized RNA similarities between observed and unobserved miRNA-lncRNA interactions and to evaluate the effectiveness of GBCF, we compared the differences in miRNA/lncRNA clusters interacting with a single lncRNA/miRNA based on the known miRNA-lncRNA interaction network. For example, for each lncRNA interacting with more than two miRNAs, the miRNAs were divided into two groups: (i) the observed miRNA group and (ii) the unobserved miRNA group, depending on whether they were found to interact with that lncRNA. We then used the average Pearson correlation coefficient (PCC) to measure the similarity within each of the two miRNA groups; the average PCC of the unobserved group for each lncRNA served as the baseline of the comparison. LncRNA clusters underwent the same procedure. To give a clear description, the function-based similarity of miRNAs and the expression profile-based similarity of both miRNAs and lncRNAs are representatively illustrated in Fig. 2.
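The grouping procedure just described can be sketched as follows; the data structures (an interaction set and a pairwise-similarity dictionary) are hypothetical simplifications of the actual analysis:

```python
from itertools import combinations

def group_mean_similarity(members, sim):
    # Average pairwise similarity (e.g. PCC) within one group of RNAs;
    # sim maps a lexicographically sorted pair of RNA names to a value.
    pairs = list(combinations(sorted(members), 2))
    if not pairs:
        return 0.0
    return sum(sim[p] for p in pairs) / len(pairs)

def observed_vs_baseline(lnc, interactions, all_mirnas, sim):
    # Split miRNAs into the observed group (interacting with this lncRNA)
    # and the unobserved group; the latter's mean similarity is the baseline.
    observed = {m for (m, l) in interactions if l == lnc}
    unobserved = set(all_mirnas) - observed
    return (group_mean_similarity(observed, sim),
            group_mean_similarity(unobserved, sim))
```

The same routine applies in the other direction, splitting lncRNAs by whether they interact with a given miRNA.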
The comparison results are shown in Table 3. Remarkable samples, with an average PCC significantly higher or lower than the baseline (i.e., by more than 0.3 times the standard deviation of the observed RNA groups), are highlighted. 42.3% of lncRNA expression profiles were unavailable in our dataset, and the investigated miRNAs had more opportunities to interact with lncRNAs (approximately 19 types of lncRNA per miRNA). Under this condition, we analyzed the correlation of lncRNA clusters interacting with a single miRNA based on expression profiles, and focused on the 206 well-studied miRNAs identified to interact with more than 5 lncRNAs for more reliable conclusions. - Similarity correlation analysis Table 3 The data statistics of the comparison results With respect to miRNAs, we found that most miRNA clusters sharing more similarity (average PCC higher than the baseline) tend to interact with a single lncRNA, except under the sequence-based similarity, which is easily plagued by relatively high false positive rates. RNAs that cannot be mapped into the corresponding datasets were considered invalid. After excluding the invalid miRNA IDs, 72.29% (407/563) of miRNA clusters were found to exceed the baselines under the biological function-based similarity derived from the miRNA-target gene interaction network. For those 563 types of lncRNA, the observed miRNA groups yield an average PCC of 0.2787, significantly higher than the average baseline PCC of 0.0994. This result suggests that miRNAs interacting with a cluster of common target genes may jointly target common biological processes and therefore share more functional similarity. Apart from the miRNA biological function-based similarity, it is interesting to note that most correlations of the expression profile-based similarity also tend to exceed the baseline. In other words, miRNA clusters interacting with common lncRNAs are likely to have similar expression patterns.
Based on the miRNA expression profile-based similarity, 83.50% (435/521) of the miRNA clusters have an average PCC (0.4947) higher than the baseline (0.4551). This result demonstrates that, although the dataset contains a number of invalid miRNA IDs (16.4%), the miRNA expression profile-based similarity can indeed reflect the regulation mechanisms in the miRNA-lncRNA interaction network and therefore deserves further investigation. We can also see that the predictive power of GBCF is not affected for RNAs with low similarity to known miRNAs/lncRNAs. As shown in Fig. 2a, the average PCCs of the baselines are 0.4551 and 0.0994 for the miRNA expression profile-based similarity and the miRNA biological function-based similarity, respectively. Although the baseline of the expression profile-based similarity is significantly higher than that of the biological function-based similarity, GBCF achieves better prediction performance using the miRNA biological function-based similarity. Therefore, low RNA similarity to known miRNAs/lncRNAs does not interfere with the predictive power of GBCF. As for the lncRNA expression profile-based similarities, after excluding the invalid lncRNA IDs, 59.22% (122/206) of lncRNA clusters were shown to share more similarity on the observed miRNA-lncRNA network. For those 206 well-studied miRNAs, the average PCC of the observed lncRNA clusters is 0.5476, slightly higher than the average baseline PCC of 0.5378. Note that approximately 71.3% (87/122) of the remarkable samples obtain an average PCC higher than the baseline and above the threshold range. This result also reflects the fact that expression profiling can be a promising feature to measure the correlation of lncRNA clusters with their miRNA-mediated principles of regulation. However, the 22 attributes of lncRNA expression levels we collected may not be sufficient to effectively detect the expression patterns of an individual lncRNA.
There is thus considerable room to improve the lncRNA expression profile-based similarity. Finally, we evaluated the other two types of lncRNA similarity in the same way. As a result, 56.13 and 89.36% of the lncRNAs have an average PCC higher than the baselines. Sequence-based lncRNA similarity cannot be used to differentiate the types of lncRNA. Moreover, the common parts shared among lncRNAs are only a small portion of their total lengths, so the baselines of lncRNA sequence similarity are relatively low; pairwise global alignment fails to precisely measure the sequence similarities among lncRNAs via their nucleotide bases. The study leads to the following findings. First, the similarities among miRNAs/lncRNAs derived from expression profiles and coding-non-coding co-expression networks are effective representative measurements. Second, the group-preference Bayesian collaborative filtering technique shows a strong capability to synergistically incorporate extrinsic, implicit topological information in the ceRNA regulation network. Finally, the local scoring system proposed in this domain is useful for alleviating the prediction bias brought by the disproportionate learning samples in the known miRNA-lncRNA interaction network. However, we also noticed a few limitations that affect the prediction performance of GBCF. For example, the lncRNA expression levels collected from 16 different human tissues and 8 cell lines are insufficient; more informative features should be gathered to improve the reliability of the lncRNA expression profile-based similarity measurement. In addition, there are many parameters to tune, which makes it difficult to optimize the prediction performance in the short term. Based on GBCF, further research can proceed from two viewpoints. First, the indirect lncRNA-lncRNA interactions in the ceRNA network could be inferred.
It has been found that indirect lncRNA-lncRNA interactions in the ceRNA network can be regarded as third transcripts supporting the crosstalk between two ceRNAs. As seen in the correlation analysis of lncRNA similarity between observed and unobserved miRNA-lncRNA interactions, lncRNA clusters that interact with a single miRNA with high scores tend to frequently have indirect interactions. Second, GBCF can be used to measure how competitively different lncRNAs sequester a certain type of miRNA. As competing ceRNAs, target lncRNAs can coexist in the ceRNA network where the quantity and effect of their MREs may not be consistent. In LOOCV and k-fold cross validation, the known miRNA-lncRNA interactions ranked at the top of the list may play a more biologically significant role in the ceRNA interaction network than others; the lncRNAs in such interactions would have priority to interact with miRNAs to maintain biological stability in the ceRNA network. In other words, among the known miRNA-lncRNA interactions ranked at the top of the list, the lncRNAs assigned higher scores by GBCF are likely to interact with miRNAs more competitively. A growing body of evidence focuses on miRNA-lncRNA interactions to explore the potential regulation mechanisms in the ceRNA network, yet current knowledge and data on observed miRNA-lncRNA interactions remain insufficient to advance this domain. Little effort has been devoted to the large-scale prediction of miRNA-lncRNA interactions apart from some sequence-based prediction methods focusing mainly on predicting target genes/mRNAs for a given miRNA. We came up with three different measurements of RNA similarity from three diverse types of biological information, namely expression profiles, coding-non-coding co-expression networks, and sequence data.
Through a series of 5-fold cross validations and correlation analysis of RNA clusters in the observed samples, the experimental results suggest that (i) lncRNAs tend to collaboratively interact with miRNAs of similar expression profiles, and vice versa, and (ii) miRNAs interacting with a cluster of common target genes tend to jointly target common lncRNAs. We utilized the group-preference Bayesian collaborative filtering technique for large-scale prediction of miRNA-lncRNA interactions. LOOCV and 5-fold cross validation were used to demonstrate the usefulness of the proposed model through comparison with a few classical classifiers and the state-of-the-art model EPLMI. Data used for the construction of the known miRNA-lncRNA interaction network were taken from the lncRNASNP database (the February 2017 version), publicly available at http://bioinfo.life.hust.edu.cn/lncRNASNP [37]. All curated records were confirmed by laboratory experiments reported in the research literature. Based on 108 CLIP-Seq datasets, lncRNASNP provides 8091 pairwise interactions. After excluding repetitive entries, we collected a total of 5348 pairs of interactions (denoted as Pml). These interactions involve 275 (denoted as nm) diverse types of miRNA and 780 (denoted as nl) diverse types of lncRNA. To calculate the similarities among lncRNAs from different perspectives, three types of biological information were gathered from various databases. First, the expression profile data and inferred functional annotations of lncRNAs were obtained from the NONCODE database (http://www.noncode.org/) [38]. We obtained the expression profiles for 450 of the lncRNAs and the functional annotations for 264 of the lncRNAs after mapping the NONCODE IDs to the names of the investigated lncRNAs. Second, the gathered expression profile of each lncRNA has 22 attributes, representing the expression levels in 16 different human tissues and 8 cell lines.
The putative functional annotations for each lncRNA gene refer to the top-10 most probable biological functions, inferred by the lnc-GFP method [39] based on a coding-non-coding co-expression network. Finally, the sequence data of each lncRNA were downloaded from the LNCipedia database (https://lncipedia.org/) [2]. Similarly, the same three types of biological information were collected for measuring the similarities among miRNAs. miRTarBase (http://miRTarBase.mbc.nctu.edu.tw) [40] curates a large number of miRNA-gene interactions. We successfully converted the miRTarBase IDs into the names of 272 investigated miRNAs. The microRNA.org database [41] provides the expression profile data of miRNAs, 230 of which were matched. The expression profile of each miRNA has 172 attributes describing the expression levels in 172 various tissues and cell lines in the human body. The miRBase database (http://www.mirbase.org/index.shtml) [42, 43] provides the sequence data of mature miRNAs. The sequence-based similarity of RNAs Based on the obtained lncRNA/miRNA sequence data, Needleman-Wunsch pairwise sequence alignment was implemented to measure the sequence similarity of lncRNAs and miRNAs by leveraging the pairwise2 package in Biopython [44]. In this work, the match score, gap-open penalty and gap-extension penalty were set to 2, −0.5 and −0.1, respectively. Note that it is unnecessary to compare the miRNA sequence-based similarity with the lncRNA sequence-based similarity, since the sequence-based similarity is calculated among RNAs of the same type and then normalized to a weight between 0 and 1; it therefore has no influence on the final prediction score. The expression profile-based similarity of RNAs The expression pattern can be an important ingredient for RNA similarity measurement: biologically related lncRNAs/miRNAs are expected to have more consistent expression levels across human tissues and cell lines.
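As a sketch of the alignment scoring used for the sequence-based similarity, the following is a minimal Needleman-Wunsch scorer in pure Python. Unlike the Biopython pairwise2 call used in this work, it applies a single linear gap penalty rather than separate open/extension penalties, and the mismatch score of −1 is our assumption:

```python
def needleman_wunsch_score(a, b, match=2.0, mismatch=-1.0, gap=-0.5):
    # Global alignment score by dynamic programming (linear gap penalty).
    n, m = len(a), len(b)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # align a[i-1] with b[j-1]
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]
```

The raw alignment score is then normalized within each RNA type to a weight between 0 and 1 before use as a similarity.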
Therefore, we simply used the PCC to calculate this kind of RNA similarity based on the collected expression profiles as follows: $$ \mathrm{ES}\left(\mathrm{i},\mathrm{j}\right)=\frac{\sum_{k=1}^N\left({e}_{ik}-\overline{e_i}\right)\left({e}_{jk}-\overline{e_j}\right)}{\sqrt{\sum_{k=1}^N{\left({e}_{ik}-\overline{e_i}\right)}^2{\sum}_{k=1}^N{\left({e}_{jk}-\overline{e_j}\right)}^2}} $$ where i and j refer to two RNAs of the same type, eik represents the kth attribute of the expression profile of RNA i, and N is the number of attributes of the expression profiles (i.e., N = 22 for lncRNAs and N = 172 for miRNAs). The higher ES(i,j) is, the more similarly RNAs i and j are expressed in general. The biological function-based similarity of RNAs Based on the hypothesis that lncRNAs/miRNAs sharing more similar regulation mechanisms and features tend to interact with a common cluster of target genes, we compute how functionally similar a pair of RNAs is based on the data of RNA-target gene interactions. According to Cubero's work [45], local similarity-based methods have been extensively applied and have shown very competitive prediction accuracy against more complex approaches. To better exploit the implicit information in the topological network structure, four typical methods were chosen for the functional similarity measurement, i.e., Common Neighbors (CN), the Adamic-Adar (AA) Index, the Jaccard (JA) Index and the Salton (SA) Index [45].
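The ES(i, j) formula above is the standard Pearson correlation coefficient over two expression profiles of equal length; a minimal sketch:

```python
import math

def expression_similarity(e_i, e_j):
    # Pearson correlation coefficient between two expression profiles
    # of equal length N (N = 22 for lncRNAs, N = 172 for miRNAs).
    n = len(e_i)
    mean_i = sum(e_i) / n
    mean_j = sum(e_j) / n
    num = sum((a - mean_i) * (b - mean_j) for a, b in zip(e_i, e_j))
    den = math.sqrt(sum((a - mean_i) ** 2 for a in e_i)
                    * sum((b - mean_j) ** 2 for b in e_j))
    return num / den
```

Values close to 1 indicate near-identical expression patterns; values close to −1 indicate opposite patterns.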
Given two RNAs i and j within the same type, these four methods can be described as follows: $$ \mathrm{CN}\left(\mathrm{i},\mathrm{j}\right)=\left|{\Gamma}_i\cap {\Gamma}_j\right| $$ $$ \mathrm{AA}\left(\mathrm{i},\mathrm{j}\right)=\sum \limits_{z\in {\Gamma}_i\cap {\Gamma}_j}\frac{1}{\mathit{\log}\left|{\Gamma}_z\right|} $$ $$ \mathrm{JA}\left(\mathrm{i},\mathrm{j}\right)=\frac{\left|{\Gamma}_i\cap {\Gamma}_j\right|}{\left|{\Gamma}_i\cup {\Gamma}_j\right|} $$ $$ \mathrm{SA}\left(\mathrm{i},\mathrm{j}\right)=\frac{\left|{\Gamma}_i\cap {\Gamma}_j\right|}{\sqrt{\left|{\Gamma}_i\right|\left|{\Gamma}_j\right|}} $$ Here, the set of nodes (target genes) connected through an edge to an RNA i is called the neighborhood of i and is denoted Γi. After 5-fold cross validation, the AA Index and the SA Index achieved the best prediction accuracy for miRNAs and lncRNAs, respectively, and were therefore used as their functional similarities. Group-based Bayesian collaborative filtering computational model Inspired by Pan's work [46], especially the injection of richer interactions via group preference, we explored a novel computational model called GBCF for ceRNA interaction inference based on the lncRNA-lncRNA similarity (denoted as Sl), the miRNA-miRNA similarity (denoted as Sm) and the known miRNA-lncRNA interaction network (see Fig. 3). Due to the absence of negative miRNA-lncRNA interactions, i.e., miRNA-lncRNA pairs experimentally confirmed to have no interaction, potential candidates are prioritized on the basis of Bayesian inference by treating the unobserved interactions (i, j) as less likely to exist than the observed ones (i, k). Here we use (i, k) ≻ (i, j) to denote that miRNA i is more likely to interact with lncRNA k than with lncRNA j. The result of 5-fold cross validation suggests that Sl should be the expression profile-based similarity while Sm should be the biological function-based similarity.
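Treating each neighborhood Γ as a Python set, the four indices defined above (CN, AA, JA, SA) can be sketched as follows; the guard against degree-one neighbors in AA, where log|Γz| = 0, is our addition:

```python
import math

def cn(gi, gj):
    # Common Neighbors: number of shared target genes
    return len(gi & gj)

def aa(gi, gj, neighborhood):
    # Adamic-Adar: shared neighbors weighted by inverse log-degree
    return sum(1.0 / math.log(len(neighborhood[z]))
               for z in gi & gj if len(neighborhood[z]) > 1)

def ja(gi, gj):
    # Jaccard index: intersection over union
    return len(gi & gj) / len(gi | gj)

def sa(gi, gj):
    # Salton (cosine) index
    return len(gi & gj) / math.sqrt(len(gi) * len(gj))
```

Each index is computed over the miRNA-target or lncRNA-target gene interaction network and then used as the functional similarity between the two RNAs.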
- The flowchart of GBCF At the beginning of the prediction process of GBCF, Sm and Sl are fed into the information source for the construction of the latent feature vectors U (miRNA) and V (lncRNA) as initialization parameters, respectively, i.e., U ∈ ℝ1 ∗ nm, V ∈ ℝ1 ∗ nl. To describe the method more clearly, we here impose the group preference on miRNAs uniformly. In this way, the group preference can be considered an overall preference score of a group of miRNAs on a lncRNA. For example, given a group of miRNAs \( \mathcal{G} \) and a lncRNA j, the overall group preference score of \( \mathcal{G} \) on j can be calculated from the individual preferences as \( {\mathrm{Score}}_{\mathcal{G}j}=\frac{1}{\left|\mathcal{G}\right|}{\sum}_{i\in \mathcal{G}}{Score}_{ij} \). \( {\mathcal{M}}^{tr}={\left\{m\right\}}_{m=1}^{nm} \) and \( {\mathcal{L}}^{tr}={\left\{l\right\}}_{l=1}^{nl} \) denote the training sets of miRNAs and lncRNAs, respectively. \( \mathrm{j}\in {\mathcal{L}}_i^{tr} \) means the miRNA-lncRNA pair (i, j) is observed, while \( \mathrm{k}\in {\mathcal{L}}^{tr}\backslash {\mathcal{L}}_i^{tr} \) means (i, k) is not observed. Empirically, if \( \mathrm{j}\in {\mathcal{L}}_i^{tr} \) and \( \mathrm{k}\in {\mathcal{L}}^{tr}\backslash {\mathcal{L}}_i^{tr} \), the group pairwise preference can be estimated conceptually as \( \left(\mathcal{G},\mathrm{j}\right)\succ \left(\mathcal{G},\mathrm{k}\right) \), where \( \mathrm{i}\in \mathcal{G} \) and \( \mathcal{G}\subseteq {\mathcal{M}}_j^{tr} \).
To precisely learn the unified effect of individual preference and group preference, we linearly combine them as follows: $$ \left(\mathcal{G},\mathrm{j}\right)+\left(\mathrm{i},\mathrm{j}\right)\succ \left(\mathrm{i},\mathrm{k}\right)\ or\ {Score}_{\mathcal{G} ij}>{Score}_{ik} $$ where \( {Score}_{\mathcal{G} ij}=\rho {Score}_{\mathcal{G}j}+\left(1-\rho \right){Score}_{ij} \), and ρ is a tradeoff parameter fusing the two kinds of preferences, ranging from 0 to 1 (ρ=0.5 in this study). In this way, a novel index called the group Bayesian collaborative filtering (GBCF) ranking for miRNA i is defined as follows: $$ \mathrm{GBCF}\left(\mathrm{i}\right)={\prod}_{j\in {\mathcal{L}}_i^{tr}}{\prod}_{k\in {\mathcal{L}}^{tr}\setminus {\mathcal{L}}_i^{tr}}\Pr \left({Score}_{\mathcal{G} ij}>{Score}_{ik}\right)\left[1-\Pr \left({Score}_{ik}>{Score}_{\mathcal{G} ij}\right)\right] $$ Given two miRNAs i and t, the joint likelihood can be simply approximated by multiplication, i.e., GBCF(i, t) ≈ GBCF(i)GBCF(t). As such, the correlation between i and t is introduced via the miRNA group \( \mathcal{G} \). Specifically, the two miRNA groups \( \mathcal{G}\left(\mathrm{i},\mathrm{j}\right)\subseteq {\mathcal{M}}_j^{tr} \) and \( \mathcal{G}\left(\mathrm{t},\mathrm{j}\right)\subseteq {\mathcal{M}}_j^{tr} \) may overlap, namely \( \mathcal{G}\left(\mathrm{i},\mathrm{j}\right)\cap \mathcal{G}\left(\mathrm{t},\mathrm{j}\right)\ne \varnothing \). The overall likelihood is estimated over all miRNAs and all lncRNAs as follows: $$ \mathrm{GBCF}={\prod}_{i\in {\mathcal{M}}^{tr}}{\prod}_{\mathrm{j}\in {\mathcal{L}}_i^{tr}}{\prod}_{k\in {\mathcal{L}}^{tr}\setminus {\mathcal{L}}_i^{tr}}\Pr \left({Score}_{\mathcal{G} ij}>{Score}_{ik}\right)\left[1-\Pr \left({Score}_{ik}>{Score}_{\mathcal{G} ij}\right)\right] $$ where \( \mathcal{G}\subseteq {\mathcal{M}}_j^{tr} \).
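The group preference score and its linear combination with the individual preference can be sketched as follows, with scores stored in a hypothetical dictionary keyed by (miRNA, lncRNA) pairs:

```python
def group_score(scores, group, j):
    # Score_{G,j} = (1/|G|) * sum_{i in G} Score_{i,j}
    return sum(scores[(i, j)] for i in group) / len(group)

def combined_score(scores, group, i, j, rho=0.5):
    # Score_{G,i,j} = rho * Score_{G,j} + (1 - rho) * Score_{i,j}
    return rho * group_score(scores, group, j) + (1 - rho) * scores[(i, j)]
```

With ρ = 0.5, as in this study, the group and individual preferences are weighted equally.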
Based on the previous work [47], \( \upsigma \left({Score}_{\mathcal{G} ij}-{Score}_{ik}\right)=\frac{1}{1+\exp \left(-{Score}_{\mathcal{G} ij}+{Score}_{ik}\right)} \) is used to approximate the probability \( \Pr \left({Score}_{\mathcal{G} ij}>{Score}_{ik}\right) \), and we finally have \( \Pr \left({Score}_{\mathcal{G} ij}>{Score}_{ik}\right)\left[1-\Pr \left({Score}_{ik}>{Score}_{\mathcal{G} ij}\right)\right]={\sigma}^2\left({Score}_{\mathcal{G} ij}-{Score}_{ik}\right) \). The objective function of GBCF is then as follows: $$ \underset{\Theta}{\min }-\frac{1}{2}\ln GBCF+\frac{1}{2}\mathcal{R}\left(\Theta \right) $$ where Θ = {U, V, bj ϵ ℝ} is the set of model parameters to be learned. \( \mathcal{R}\left(\Theta \right)={\prod}_{i\in {\mathcal{M}}^{tr}}{\prod}_{j\in {\mathcal{L}}_i^{tr}}{\prod}_{k\in {\mathcal{L}}^{tr}\setminus {\mathcal{L}}_i^{tr}}\left[{\alpha}_m{\sum}_{t\in \mathcal{G}}{\left\Vert {U}_t\right\Vert}^2+{\alpha}_l{\left\Vert {V}_j\right\Vert}^2+{\alpha}_l{\left\Vert {V}_k\right\Vert}^2+{\beta}_l{\left\Vert {b}_j\right\Vert}^2+{\beta}_l{\left\Vert {b}_k\right\Vert}^2\right] \) is the regularization term to avoid overfitting, where αm, αl and βl are regularization weights ranging from 0.001 to 0.1. The objective function in Eq.
(9) can be rewritten as: $$ \mathrm{f}\left(\mathcal{G},\mathrm{i},\mathrm{j},\mathrm{k}\right)=-\ln \sigma \left({Score}_{\mathcal{G} ij}-{Score}_{ik}\right)+\frac{\alpha_m}{2}{\sum}_{t\in \mathcal{G}}{\left\Vert {U}_t\right\Vert}^2+\frac{\alpha_l}{2}{\left\Vert {V}_j\right\Vert}^2+\frac{\alpha_l}{2}{\left\Vert {V}_k\right\Vert}^2+\frac{\beta_l}{2}{\left\Vert {b}_j\right\Vert}^2+\frac{\beta_l}{2}{\left\Vert {b}_k\right\Vert}^2=\ln \left[1+\exp \left(-{Score}_{\mathcal{G} ij; ik}\right)\right]+\frac{\alpha_m}{2}{\sum}_{t\in \mathcal{G}}{\left\Vert {U}_t\right\Vert}^2+\frac{\alpha_l}{2}{\left\Vert {V}_j\right\Vert}^2+\frac{\alpha_l}{2}{\left\Vert {V}_k\right\Vert}^2+\frac{\beta_l}{2}{\left\Vert {b}_j\right\Vert}^2+\frac{\beta_l}{2}{\left\Vert {b}_k\right\Vert}^2 $$ where \( {Score}_{\mathcal{G} ij; ik}={Score}_{\mathcal{G} ij}-{Score}_{ik} \). We use the stochastic gradient descent (SGD) algorithm to solve this optimization problem. The model parameters Θ are updated as follows: $$ \Theta =\Theta -\upgamma \frac{\partial f\left(\mathcal{G},i,j,k\right)}{\mathrm{\partial \Theta }} $$ where γ denotes the learning rate and is set to 0.1 in this study. The prediction score of miRNA i on lncRNA j is computed as \( {Score}_{ij}={U}_i\bullet {V}_j^T+{b}_j \) at each iteration until the model reaches the maximum number of iterations (default: 500). Using 5-fold CV, we tested the performance of GBCF with increasing maximum iterations (100, 300, 500 and 700). The results are tabulated in Table 4. GBCF achieved the highest average AUC of 0.8615+/− 0.0078 with 500 iterations; with 700 iterations the proposed model suffers from overfitting and performance degradation. As such, the maximum number of iterations is empirically set to 500 by default. Note that a subset of miRNAs is randomly sampled as a miRNA group \( \mathcal{G} \) before carrying out the SGD algorithm.
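The pairwise term optimized by SGD, and its gradient with respect to the score difference (regularization terms omitted), can be sketched as:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pairwise_loss(score_gij, score_ik):
    # -ln sigma(Score_{G,i,j} - Score_{i,k})
    #   = ln(1 + exp(-(Score_{G,i,j} - Score_{i,k})))
    return math.log(1.0 + math.exp(-(score_gij - score_ik)))

def loss_grad_wrt_diff(score_gij, score_ik):
    # Derivative of the loss w.r.t. the score difference,
    # plugged into the SGD update Theta -= gamma * gradient.
    return -(1.0 - sigmoid(score_gij - score_ik))
```

The loss shrinks as the observed pair's combined score pulls ahead of the unobserved pair's score, which is exactly the ranking behavior the GBCF likelihood encodes.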
To further enhance the prediction accuracy, for an unobserved pair of miRNA i and lncRNA j, we aggregate Scoreij with the weighted means of Sm(i, i′) and Sl(j, j′), where \( {i}^{\prime}\in {\mathcal{M}}_j^{tr} \) and \( {j}^{\prime}\in {\mathcal{L}}_i^{tr} \), as follows: $$ {Score}_{ij}+=\frac{\delta_m}{\left|{i}^{\prime}\right|}\sum \limits_{i^{\prime}\in {\mathcal{M}}_j^{tr}}{S}_m\left(i,{i}^{\prime}\right)+\frac{\delta_l}{\left|{j}^{\prime}\right|}\sum \limits_{j^{\prime}\in {\mathcal{L}}_i^{tr}}{S}_l\left(j,{j}^{\prime}\right) $$ where the parameters δm and δl regulate the tradeoff between Sm and Sl, respectively (δm=δl=1). The final Scoreij represents the existence probability of the unobserved miRNA-lncRNA pair. The pseudo-code of the proposed model is described in Algorithm 1. The GBCF model is computationally efficient: the complexity of updating the objective function is \( O\left(\left|\mathcal{G}\right|d\right) \), and the total time complexity of GBCF is \( O\left( Tn\left|\mathcal{G}\right|d\right) \), where T is the maximum number of iterations, n is the number of miRNAs, \( \left|\mathcal{G}\right| \) is the size of the miRNA group and d is the total dimension of the latent feature vectors U and V. Table 4 We used 5-fold cross validation to fine-tune the maximum number of iterations T ceRNAs: competing endogenous RNAs CF: Collaborative filtering CN: Common neighbors LFM: Latent factor model lncRNA: Long non-coding RNA MRE: miRNA response element ncRNAs: Non-coding RNAs PCC: Pearson correlation coefficient PTEN: Phosphatase and tensin homolog SGD: Stochastic gradient descent sRNA: Small RNA SVD: Singular-value decomposition Salmena L, Poliseno L, Tay Y, Kats L, Pandolfi PP. A ceRNA hypothesis: the Rosetta stone of a hidden RNA language? Cell. 2011;146(3):353–8. Volders PJ, Helsens K, Wang X, Menten B, Martens L, Gevaert K, Vandesompele J, Mestdagh P. LNCipedia: a database for annotated human lncRNA transcript sequences and structures. Nucleic Acids Res.
2013;41(Database issue):D246–51. Quinn JJ, Chang HY. Unique features of long non-coding RNA biogenesis and function. Nat Rev Genet. 2016;17(1):47–62. Li JH, Liu S, Zhou H, Qu LH, Yang JH. starBase v2.0: decoding miRNA-ceRNA, miRNA-ncRNA and protein-RNA interaction networks from large-scale CLIP-Seq data. Nucleic Acids Res. 2014;42(Database issue):D92–7. Cabili MN, Trapnell C, Goff L, Koziol M, Tazon-Vega B, Regev A, Rinn JL. Integrative annotation of human large intergenic noncoding RNAs reveals global properties and specific subclasses. Genes Dev. 2011;25(18):1915–27. Derrien T, Johnson R, Bussotti G, Tanzer A, Djebali S, Tilgner H, Guernec G, Martin D, Merkel A, Knowles DG, et al. The GENCODE v7 catalog of human long noncoding RNAs: analysis of their gene structure, evolution, and expression. Genome Res. 2012;22(9):1775–89. Yoon JH, Abdelmohsen K, Gorospe M. Functional interactions among microRNAs and long noncoding RNAs. Semin Cell Dev Biol. 2014;34:9–14. Yang G, Lu X, Yuan L. LncRNA: a link between RNA and cancer. Biochim Biophys Acta. 2014;1839(11):1097–109. Xia T, Liao Q, Jiang X, Shao Y, Xiao B, Xi Y, Guo J. Long noncoding RNA associated-competing endogenous RNAs in gastric cancer. Sci Rep. 2014;4:6088. Ballantyne MD, McDonald RA, Baker AH. lncRNA/MicroRNA interactions in the vasculature. Clin Pharmacol Ther. 2016;99(5):494–501. Du Z, Sun T, Hacisuleyman E, Fei T, Wang X, Brown M, Rinn JL, Lee MG, Chen Y, Kantoff PW, et al. Integrative analyses reveal a long noncoding RNA-mediated sponge regulatory network in prostate cancer. Nat Commun. 2016;7:10982. Shi JY, Li JX, Chen BL, Zhang Y. Inferring interactions between novel drugs and novel targets via instance-neighborhood-based models. Curr Protein Pept Sci. 2018;19(5):488–97. Shi JY, Li JX, Lu HM. Predicting existing targets for new drugs base on strategies for missing interactions. BMC bioinformatics. 2016;17(Suppl 8):282. Shi JY, Yiu SM, Li Y, Leung HC, Chin FY. 
Predicting drug-target interaction for new drugs using enhanced similarity measures and super-target clustering. Methods (San Diego, Calif). 2015;83:98–104. Shi JY, Liu Z, Yu H, Li YJ. Predicting drug-target interactions via within-score and between-score. Biomed Res Int. 2015;2015:350983. Shi JY, Huang H, Zhang YN, Long YX, Yiu SM. Predicting binary, discrete and continued lncRNA-disease associations via a unified framework based on graph regression. BMC Med Genet. 2017;10(Suppl 4):65. Shi J-Y, Huang H, Zhang Y-N, Cao J-B, Yiu S-M. BMCMDA: a novel model for predicting human microbe-disease associations via binary matrix completion. BMC bioinformatics. 2018;19(9):169. Huang YA, You ZH, Chen X, Huang ZA, Zhang S, Yan GY. Prediction of microbe-disease association from the integration of neighbor and graph with collaborative recommendation model. J Transl Med. 2017;15(1):209. Wang F, Huang ZA, Chen X, Zhu Z, Wen Z, Zhao J, Yan GY. LRLSHMDA: Laplacian regularized least squares for human microbe-disease association prediction. Sci Rep. 2017;7(1):7601. Poliseno L, Pandolfi PP. PTEN ceRNA networks in human cancer. Methods, (San Diego Calif). 2015;77-78:41–50. Huang YA, You ZH, Chen X. A systematic prediction of drug-target interactions using molecular fingerprints and protein sequences. Curr Protein Pept Sci. 2018;19(5):468–78. Li J, Ma W, Zeng P, Wang J, Geng B, Yang J, Cui Q. LncTar: a tool for predicting the RNA targets of long noncoding RNAs. Brief Bioinform. 2015;16(5):806–12. Cesana M, Daley GQ. Deciphering the rules of ceRNA networks. Proc Natl Acad Sci U S A. 2013;110(18):7112–3. Ala U, Karreth FA, Bosia C, Pagnani A, Taulli R, Leopold V, Tay Y, Provero P, Zecchina R, Pandolfi PP. Integrated transcriptional and competitive endogenous RNA networks are cross-regulated in permissive molecular environments. Proc Natl Acad Sci U S A. 2013;110(18):7154–9. Levine E, Hwa T. Small RNAs establish gene expression thresholds. Curr Opin Microbiol. 2008;11(6):574–9. 
Buchler NE, Louis M. Molecular titration and ultrasensitivity in regulatory networks. J Mol Biol. 2008;384(5):1106–19. Huang Y-A, You Z-H, Li X, Chen X, Hu P, Li S, Luo X. Construction of reliable protein–protein interaction networks using weighted sparse representation based classifier with pseudo substitution matrix representation features. Neurocomputing. 2016;218:131–8. Huang YA, You ZH, Chen X, Yan GY. Improved protein-protein interactions prediction via weighted sparse representation model combining continuous wavelet descriptor and PseAA composition. BMC Syst Biol. 2016;10(Suppl 4):120. Mukherji S, Ebert MS, Zheng GX, Tsang JS, Sharp PA, van Oudenaarden A. MicroRNAs can generate thresholds in target gene expression. Nat Genet. 2011;43(9):854–9. Yang S, Ning Q, Zhang G, Sun H, Wang Z, Li Y. Construction of differential mRNA-lncRNA crosstalk networks based on ceRNA hypothesis uncover key roles of lncRNAs implicated in esophageal squamous cell carcinoma. Oncotarget. 2016;7(52):85728–40. Li Y, Chen J, Zhang J, Wang Z, Shao T, Jiang C, Xu J, Li X. Construction and analysis of lncRNA-lncRNA synergistic networks to reveal clinically relevant lncRNAs in cancer. Oncotarget. 2015;6(28):25003–16. Huang YA, Chan KCC, You ZH. Constructing prediction models from expression profiles for large scale lncRNA-miRNA interaction profiling. Bioinformatics (Oxford, England). 2018;34(5):812–9. Katz L. A new status index derived from sociometric analysis. Psychometrika. 1953;18(1):39–43. Mees AI, Rapp PE, Jennings LS. Singular-value decomposition and embedding dimension. Phys Rev A. 1987;36(1):340. Jenatton R, Roux NL, Bordes A, Obozinski G. A latent factor model for highly multi-relational data. In: International conference on neural information processing systems; 2012. p. 3167–75. Herlocker JL, Konstan JA, Terveen LG, Riedl JT. Evaluating collaborative filtering recommender systems. ACM Trans Inf Syst. 2004;22(1):5–53. Gong J, Liu W, Zhang J, Miao X, Guo AY. 
lncRNASNP: a database of SNPs in lncRNAs and their potential functions in human and mouse. Nucleic Acids Res. 2015;43(Database issue):D181–6. Bu D, Yu K, Sun S, Xie C, Skogerbo G, Miao R, Xiao H, Liao Q, Luo H, Zhao G, et al. NONCODE v3.0: integrative annotation of long noncoding RNAs. Nucleic Acids Res. 2012;40(Database issue):D210–5. Guo X, Gao L, Liao Q, Xiao H, Ma X, Yang X, Luo H, Zhao G, Bu D, Jiao F, et al. Long non-coding RNAs function annotation: a global prediction method based on bi-colored networks. Nucleic Acids Res. 2013;41(2):e35. Chou CH, Chang NW, Shrestha S, Hsu SD, Lin YL, Lee WH, Yang CD, Hong HC, Wei TY, Tu SJ, et al. miRTarBase 2016: updates to the experimentally validated miRNA-target interactions database. Nucleic Acids Res. 2016;44(D1):D239–47. Betel D, Wilson M, Gabow A, Marks DS, Sander C. The microRNA.org resource: targets and expression. Nucleic Acids Res. 2008;36(Database issue):D149–53. Kozomara A, Griffiths-Jones S. miRBase: annotating high confidence microRNAs using deep sequencing data. Nucleic Acids Res. 2014;42(Database issue):D68–73. Huang ZA, Wen Z, Deng Q, Chu Y, Sun Y, Zhu Z. LW-FQZip 2: a parallelized reference-based compression of FASTQ files. BMC bioinformatics. 2017;18(1):179. Cock PJ, Antao T, Chang JT, Chapman BA, Cox CJ, Dalke A, Friedberg I, Hamelryck T, Kauff F, Wilczynski B, et al. Biopython: freely available Python tools for computational molecular biology and bioinformatics. Bioinformatics (Oxford, England). 2009;25(11):1422–3. Martínez V, Berzal F, Cubero J-C. A survey of link prediction in complex networks. ACM Computing Surveys (CSUR). 2017;49(4):69. Pan W, Chen L. GBPR: group preference based Bayesian personalized ranking for one-class collaborative filtering. In: IJCAI; 2013. p. 2691–7. Rendle S, Freudenthaler C, Gantner Z, Schmidt-Thieme L. BPR: Bayesian personalized ranking from implicit feedback. In: Conference on uncertainty in artificial intelligence; 2009. p. 452–61. 
Publication of this article was sponsored by the National Natural Science Foundation of China, under grants No. 61702424, 61572506, 61871272, 61471246, and 61575125; the Guangdong Special Support Program of Top-notch Young Professionals, under grants 2014TQ01X273 and 2015TQ01R453; the Guangdong Foundation of Outstanding Young Teachers in Higher Education Institutions, under grant Yq2015141; and the Shenzhen Fundamental Research Program, under grant JCYJ20170302154328155. The datasets used in this study are publicly available from lncRNASNP, NONCODE, LNCipedia, miRTarBase and miRBase, as cited in the paper. An executable routine is available at https://github.com/yahuang1991polyu/GBCF/.
About this supplement
This article has been published as part of BMC Medical Genomics Volume 11 Supplement 6, 2018: Proceedings of the 29th International Conference on Genome Informatics (GIW 2018): medical genomics. The full contents of the supplement are available online at https://bmcmedgenomics.biomedcentral.com/articles/supplements/volume-11-supplement-6.
Zhi-An Huang and Yu-An Huang contributed equally to this work.
College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, 518060, China
Zhi-An Huang & Zexuan Zhu
Department of Computer Science, City University of Hong Kong, Hong Kong, 999077, China
Department of Computing, Hong Kong Polytechnic University, Hong Kong, 999077, China
Yu-An Huang & Zhu-Hong You
School of Medicine, Shenzhen University, Shenzhen, 518060, China
Yiwen Sun
YWS, YAH & ZAH conceived the algorithm, carried out analyses, prepared the data sets, carried out experiments, and wrote the manuscript. ZHY designed, performed and analyzed experiments. ZXZ helped with manuscript editing and program design. All authors read and approved the final manuscript.
Correspondence to Yiwen Sun.
The top-50 ranking lists of candidate target lncRNAs for each type of miRNA with the corresponding prediction scores by using miRNA-based group preference. (XLSX 410 kb)
Huang, Z., Huang, Y., You, Z. et al. Novel link prediction for large-scale miRNA-lncRNA interaction network in a bipartite graph. BMC Med Genomics 11, 113 (2018). doi:10.1186/s12920-018-0429-8
Keywords: miRNA-lncRNA interaction; ceRNA network; Expression profile; Computational prediction
A double hurdle estimation of crop diversification decisions by smallholder wheat farmers in Sinana District, Bale Zone, Ethiopia
Dereje Derso1, Degefa Tolossa2 & Abrham Seyoum2
CABI Agriculture and Bioscience volume 3, Article number: 25 (2022)
Ethiopia is heterogeneous in agro-ecological, social, and economic conditions. In such a heterogeneous environment, crop production needs to be diversified to meet household consumption and market needs. This study analyzed the determinants of crop diversification among rural households dominated by wheat production in Ethiopia's Sinana District of the Oromia Regional State. The study utilized a structured survey of 384 households, and both inferential and descriptive statistics were employed to analyze the data. Cragg's double hurdle model was applied to identify factors influencing the decision to diversify and the extent of crop diversification. We found that the decision to diversify crops was positively associated with household size, access to fertile farm plots, and access to extension services, and negatively associated with the age of the household head and participation in off/non-farm activities. The extent of crop diversification is positively associated with access to extension services, labor availability, membership in farmers' cooperatives, and distance to market. These findings support the need for resources to strengthen available extension packages, support existing farmers' cooperatives, and develop rural infrastructure in order to improve smallholder farmers' extent of crop diversification.
In sub-Saharan Africa, the agricultural sector is key for spurring economic growth, overcoming poverty, and enhancing food security (Michler and Josephson 2017). However, in this region, agriculture is often characterized by low productivity (Kassie et al. 2011).
This is due to several factors, including limited natural resources and socio-economic, technological, and infrastructure problems (FAO et al. 2017). Block and Timmer (1994) and Pellegrini and Tasciotti (2014) noted that crop diversification increases farm income, creates employment opportunities, reduces poverty, and enhances soil health and water retention. Through crop diversification, farm families can spread production and economic risk over a wider range of crops, thereby reducing financial risks associated with adverse weather conditions or market shocks. Growing diverse products can also help financially by expanding market potential (Truscott et al. 2009). Further, diverse cropping systems generally provide more varied and healthier food for humans and livestock. As noted by FAO (2017) and Acharya et al. (2011), crop diversification is one of the fundamental instruments for ensuring food security, poverty reduction, and nutritional adequacy, and is a source of overall agricultural development in most developing countries. Theoretically, there are two main pathways through which crop diversification affects farm household poverty: (i) it can enhance access to the wide variety of food products necessary for a balanced household diet; as diversified production improves dietary diversity, it can enhance the nutritional balance of people's diet and, in doing so, help improve their health and earning capacity; (ii) farming diversified crops, including marketable higher-value crops, can lead to increased income for farm families (Mazunda et al. 2018). The increasing risks of crop failure due to erratic rainfall and crop disease usually force farmers to diversify their portfolio as a hedge against these risks (Asante et al. 2017; Khanal and Mishra 2017). Concurrently, Winters et al.
(2006) have identified three key factors that drive farmers' demand for crop diversity: managing risk, adapting to heterogeneous agro-ecological production conditions, and meeting market demands and food and nutritional security. Likewise, studies (FAO 2012; Feliciano 2019) indicate that crop diversification has beneficial effects on the soil. This is mainly because crop diversification is usually associated with reduced weed and insect pressures, a reduced need for nitrogen fertilizers (especially if the crop mix includes leguminous crops), and reduced erosion (because of the inclusion of cover crops). It also potentially reduces pests and diseases, and increases food security by offering farmers access to sufficient, nutritious and diverse foods in areas where markets are not available (Mukherjee 2015; Van den Broeck and Maertens 2016). Furthermore, it has also been argued that diversifying by growing more enterprises may lead to farm income stability (Mazunda et al. 2018). Indeed, Habte and Krawinkel (2012) found that pulses are more profitable than most cereals. In addition, FAO (2016) reported that the economic benefit of faba bean production is 70% greater than that of wheat. In Ethiopia, agriculture continues to be the dominant economic sector in terms of its share of gross domestic product (GDP) (34.9%), employment generation (80%), and share of exports (70%), and it provided about 70% of raw materials for the country's industries during the 2015/2016 fiscal year (UNDP 2016). Major Ethiopian policy documents related to economic growth and poverty reduction strategies place emphasis on the importance of agricultural diversification. Specifically, the Agricultural Development Led Industrialization (ADLI) strategy and the Growth and Transformation Plans (GTPs) I and II embodied all aspects of diversification (Ministry of Finance and Economic Development [MoFED] 2015).
Even though it has been emphasized in public policy, crop diversification in Ethiopia is not well practiced (Mesfin et al. 2011; Sibhatu et al. 2015; Mussema et al. 2015). In the Sinana District, agriculture is traditional and dominated by a monoculture cropping system. Productivity is constrained by several factors such as technology, resource availability, environmental factors, socio-economic conditions, poor infrastructure, low soil fertility, and crop pests and insects (Bale Zone Finance and Economic Development [BZFED] 2018). Farmers in this monocropping region are therefore vulnerable to marketing risks, income instability, and hunger. Crop diversification is believed to be a solution and an effective strategy for improving subsistence farming practice. Nevertheless, the importance and potential of crop diversification have not yet been fully exploited in the study area, and its determinants have not been studied. Thus, this study was designed to examine the determinants of crop diversification in the study area. Different studies (including Fetien et al. 2009; Degye et al. 2012; Mandal and Bezbaruah 2013; Kanyua et al. 2013; Veljanoska 2014; Ainembabazi and Mugisha 2014; Rehima et al. 2013, 2015; Hitayezu et al. 2016; Eisenhauer 2016; World Bank 2018) have identified that crop diversification is shaped by various factors within farming households. These include available inputs such as labor, farm experience, availability of seed, prices, government policy, land availability, market access, extension services, and household characteristics, as well as environmental factors such as climatic and soil conditions. However, there has been limited research examining the determinants of crop diversification at the household level in the study area.
As part of our contribution to poverty reduction in the study area, this study was intended to identify and analyze, for the first time, the factors affecting the decision and extent of crop diversification by rural households in the Sinana District. The findings of this study contribute to the growing body of crop diversification literature. They can also be used as an input to improve production income, food and nutrition security, and poverty reduction for rural households elsewhere in monocropping areas of Ethiopia.
Theoretical framework of the study
Agricultural production is subject to complex socioeconomic and environmental constraints. As stated by Singh et al. (1986), a household's decisions aim to ensure a balance between production, consumption and labor input. Farmers' production decision objectives go beyond profit maximization, comprising multiple objectives, namely profit, risk and crop complexity (Van Dusen and Taylor 2005). The objective of smallholder agricultural households is to maximize utility as consumers, unlike the traditional theory of profit maximization (Singh et al. 1986). The analytical model used for this study was drawn from the theory of the farm household model (De Janvry et al. 1991). The household combines farm resources and family labor to maximize utility over consumption goods produced on the farm or purchased on the market. Household decisions are constrained by production technology, the farm's physical environment and land area, family labor time allocated to work and leisure, and an income constraint. A farm household's expected utility is dependent on its attitude toward risk. Even in a one-season model, a farm household's utility is subject to uncertainty in levels of rainfall, output prices, and consumption prices.
A farmer's production decisions, including optimal crop allocation, are therefore dependent on that farmer's attitude toward risk (Fafchamps 1992; Van Dusen and Taylor 2005) as well as on the presence of market imperfections (De Janvry et al. 1991; Hitayezu et al. 2016). For example, markets for price information or crop insurance would decrease a farmer's perceived level of risk, affecting crop allocation decisions. The literature (Hitayezu et al. 2016) suggests that farmers in developing countries tend to be risk averse, and crop diversification may be a strategy to insure against production and price risk. These conditions pose risks to farm production and make farmers cautious in their farming decisions. Hence, farmers are assumed to use a risk-aversion strategy and maximize utility in their decision-making process. As a mechanism for incorporating risk aversion into a farmer's decision-making process, crop diversification plays a vital role (Davis et al. 1987). The analytical framework used for this study was drawn from both risk-aversion and utility-maximization theories of farm household production behavior. The fundamental assumption is that the farmer's decision on whether or not to minimize risks (diversify crops) is based upon utility-maximization theory (Ellis 1993; Rahm and Huffman 1984). The expression \(U\left({CD}_{ji},{P}_{ji}\right)\) is a non-observable underlying utility function, which ranks the preference of the ith farmer for the jth diversification status (j = 0, 1; where 0 = no diversification and 1 = diversification). Thus, the utility derived from crop diversification depends on CD, a vector of demographic, socio-economic, farm-specific, marketing and institutional attributes of the diversifier, and P, a vector of the attributes associated with crop diversification.
Although the utility function is unobserved, the utility derivable from the jth diversification process is postulated to be a function of the vector of explanatory variables and a disturbance term having a zero mean:
$${U}_{ji}={\alpha }_{j}F\left({CD}_{i},{P}_{i}\right)+{\varepsilon }_{ji}$$
Since the utilities \({U}_{ji}\) are random, the ith farmer will select the alternative j = 1 if \({U}_{1i}>{U}_{0i}\), or if the non-observable (latent) random variable \({Y}^{*}={U}_{1i}-{U}_{0i}>0\). The probability that \({Y}_{i}\) equals one (i.e., that the farmer practices crop diversification) is a function of the explanatory variables:
$$\begin{aligned} {P}_{i}&=\mathrm{Pr}\left({Y}_{i}=1\right)=\mathrm{Pr}\left({U}_{1i}>{U}_{0i}\right)\\ &=\mathrm{Pr}\left({\alpha }_{1}F\left({CD}_{i},{P}_{i}\right)+{\varepsilon }_{1i}>{\alpha }_{0}F\left({CD}_{i},{P}_{i}\right)+{\varepsilon }_{0i}\right)\\ &=\mathrm{Pr}\left[\left({\varepsilon }_{1i}-{\varepsilon }_{0i}\right)>-F\left({CD}_{i},{P}_{i}\right)\left({\alpha }_{1}-{\alpha }_{0}\right)\right]\\ &=\mathrm{Pr}\left({\upsilon }_{i}>-F\left({CD}_{i},{P}_{i}\right)\beta \right)\\ &={F}_{i}\left({X}_{i}\beta \right)\end{aligned}$$
where X is the n × k matrix of explanatory variables, β is a k × 1 vector of parameters to be estimated, Pr(.) is the probability function, \({\upsilon }_{i}\) is the random error term, and \({F}_{i}\left({X}_{i}\beta \right)\) is the cumulative distribution function for \({\upsilon }_{i}\) evaluated at \({X}_{i}\beta \). The probability that a farmer will diversify in crop production is thus a function of the vector of explanatory variables, the unknown parameters, and the error term. Equation 2 cannot be estimated directly without knowing the form of F, and it is the distribution of \({\upsilon }_{i}\) that determines the distribution of F. The functional form of F is specified with the double hurdle model.
It is used to assess the determinants of crop diversification as well as the factors influencing the extent of crop diversification by rural households.
Material and methods
Description of the study area
The study was conducted in Sinana District, Bale Zone, Oromia Regional State of Ethiopia. The District is located in the southwest of Ethiopia, 412 km from Addis Ababa, the capital city of the country. Robe Town is the major town of the District and the center of the Zone. In the District, agriculture (crop and livestock production) is the dominant livelihood strategy of rural households. Farmers in the District practice a mixed farming system of both crops and livestock. The major crops produced in the District are cereals (wheat, barley, maize, and teff), pulses (bean, field pea), and oil crops (Bale Zone Agriculture Development Organization [BZADO] 2017). In addition to farming, off-farm and non-farm activities are also practiced on a small scale. Off-farm activities such as wage employment and participation in crop production on someone else's land for a harvest share (especially by holders of very small land sizes) are commonly practiced. In the study site, two Kebeles (the lowest administrative unit) out of six were identified as food insecure (BZADO 2017). In response to this level of food insecurity in the District, the Productive Safety Net Program (a livelihoods-supporting scheme) was the major food insecurity intervention (BZADO 2017) (Fig. 1).
Fig. 1 Map of study area
A combination of quantitative and qualitative data was collected from primary and secondary sources. Primary data were collected from rural households through structured and semi-structured surveys. To complement these primary data, secondary data were collected from records of government officials, published and unpublished reports, journals, books, and websites.
The survey interview guide, which consists of structured and semi-structured questions, was prepared in English and translated into the local language (Afan Oromo) to collect information on the socio-economic and demographic characteristics of households. Furthermore, it was pretested in a Kebele different from the sampled Kebeles, and necessary adjustments were made before the actual survey.
Sampling design
To select the sampled respondents, a multistage sampling procedure was employed. In the first stage, the Sinana District was purposely selected from the ten districts of the Bale Zone due to its dominance in wheat production. In the second stage, six kebeles, namely Ilu-sanbitu, Hamida, Hisu, Salka, Robe-Akababi and Weltei-berisa, were selected using simple random sampling. In the third stage, 384 sampled households were selected by simple random sampling, following the sample size determination formula developed by Yamane (1967):
$$n=\frac{N}{1+N{e}^{2}}=\frac{9768}{1+9768\times {0.05}^{2}}\approx 384$$
where n is the sample size to be included in this study, N is the population size, and e is the level of precision; the size of our population (wheat producers) is 9768 (BZSP 2015).
In this study, STATA software version 14.2 was used to analyze the data. The t-test was used to assess mean differences between crop diversifiers and non-diversifiers on continuous explanatory variables. The chi-square test was used to assess the association of household and farm-related characteristics between the groups (diversifiers vs. non-diversifiers). Moreover, to investigate the determinants of rural households' decision and extent of crop diversification, the double hurdle model was used.
Empirical model specification
There are various methods to measure crop diversification (Magurran 1988; Malik and Singh 2002).
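Yamane's sample-size formula can be checked numerically. This is a simple illustration of the calculation, using the population size reported in the text:

```python
def yamane_sample_size(N, e=0.05):
    """Yamane (1967): n = N / (1 + N * e**2), rounded to whole households."""
    return round(N / (1 + N * e ** 2))

# 9768 wheat-producing households at 5% precision:
print(yamane_sample_size(9768))  # 384
```

At a coarser precision of e = 0.1, the same population would require only about a quarter as many respondents, which is why the choice of e matters for survey budgeting.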
The current study used the Herfindahl Index (HI) as a measure of crop diversification, representing the relative land shares of the farming activities operated by a given farm. The index is widely used in the crop diversification literature (Magurran 1988; Malik and Singh 2002; Kanyua et al. 2013; Ojo et al. 2014; Sichoongwe et al. 2014; Asante et al. 2017) and is easy to compute. The Crop Diversification Index (CDI) has a direct relationship with crop diversification, such that a zero value implies specialization and a value greater than zero means crop diversification. The CDI is obtained by subtracting the Herfindahl index (HI) from one (1 − HI). Precisely, the CDI is calculated as follows:
$${p}_{i}=\frac{{A}_{i}}{{\sum }_{i=1}^{n}{A}_{i}}$$
where \({p}_{i}\) is the proportion of the ith crop, \({A}_{i}\) is the area under the ith crop (ha), \({\sum }_{i=1}^{n}{A}_{i}\) is the total crop land (ha), and i = 1, 2, 3, …, n (the number of crops).
$$\mathrm{Herfindahl\ Index}=HI={\sum }_{i=1}^{n}{{p}_{i}}^{2}$$
$$\mathrm{Crop\ diversification\ index}=CDI=1-HI$$
The analysis of crop diversification entails a situation where, at each observation, the event may or may not occur. An occurrence (crop diversification) is associated with a continuous non-negative random variable, while a non-occurrence (not diversifying) yields a variable with zero value (Cragg 1971). Such a scenario presents a limited dependent-variable (Engel et al. 2014) modeling problem where the lower bound of the variable, the zero value, occurs in a considerable number of observations.
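The CDI computation defined above can be sketched as follows. The crop areas here are illustrative values, not the study's data:

```python
def crop_diversification_index(areas):
    """CDI = 1 - HI, where HI is the sum of squared area shares.
    `areas` holds the land area (ha) allocated to each crop."""
    total = sum(areas)
    shares = [a / total for a in areas]
    hi = sum(p ** 2 for p in shares)
    return 1 - hi

# A farm splitting 2 ha equally between wheat and barley:
print(crop_diversification_index([1.0, 1.0]))  # 0.5
# Full specialization in wheat:
print(crop_diversification_index([2.0]))       # 0.0
```

The index rises toward 1 − 1/n as land is spread evenly over more crops, and collapses to zero under complete specialization, which is exactly the corner solution discussed in the text.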
The occurrence of the event allows a continuous distribution over positive values, but an "accumulation" at zero exists (due to non-occurrence), which is a corner solution for the diversification problem (García 2013). Such common cases in the social sciences invalidate the use of the usual regression model and require models capable of handling limited dependent variables. The common approaches to modeling such situations include the Tobit, Heckman and double-hurdle models (Komarek 2010). Several studies have applied the Tobit model to this type of study (Bellemare and Barrett 2006; Gebremedhin and Jaleta 2010; Martey et al. 2012; Gani and Adeoti 2011), but the major drawback of this approach is that it imposes the restriction that both diversification decisions are simultaneously influenced by the same set of explanatory variables (Ground and Koch 2008). Since we assume, in this study, that the decisions on crop diversification and the level of diversification are influenced by different sets of independent variables, the Tobit model is not appropriate. It has also been argued that the model yields biased parameter estimates (Wanyoike et al. 2015), and recent studies have stressed the inadequacy of the Tobit, proposing less restrictive alternatives such as Heckman's model (Heckman 1979) and Cragg's double hurdle model (Cragg 1971). These two-step alternative models are relevant for our study because separate vectors of independent variables influence the farmer's crop diversification decisions. The double hurdle is a less restrictive variant of the Heckman model and is best suited for samples drawn through random probabilistic sampling procedures (Komarek 2010). Therefore, the double hurdle was adopted for the analysis of our randomly selected sample data. The model is a generalization of the Tobit, in which two distinct stochastic processes determine the participation and quantity decisions.
The key difference between the two-step models lies in Heckman's assumption that non-participants will never participate under any circumstance (Kiwanuka and Machethe 2016). In contrast, the double hurdle assumes that the decision not to participate is a deliberate choice (Tura et al. 2016); the zeros of non-participants are then treated as corner solutions in the utility maximization model (Yami et al. 2013). The model is also flexible, imposing no restrictions on the components of the independent variables in each phase of estimation. The double hurdle model is more flexible than the Tobit and allows the participation and extent of crop diversification to be determined separately (Burke 2009). The model requires a joint application of the probit and truncated regression models, applied sequentially or simultaneously (Yami et al. 2013). The theoretical basis of the double-hurdle estimation framework by Cragg (1971) is grounded in the probit model, where the probability of crop diversification at observation t, p(Et), is given by:
$$P\left({E}_{t}\right)={\int }_{-\infty }^{{x}_{t}^{\prime}\beta }{\left(2\pi \right)}^{-\frac{1}{2}}\mathrm{exp}\left\{-{z}^{2}/2\right\}dz$$
where \({x}_{t}\) is a K × 1 vector of exogenous variables at observation t and \(\beta \) represents a vector of parameter estimates. The cumulative unit normal distribution is designated as
$$C\left(z\right)={\int }_{-\infty }^{z}{\left(2\pi \right)}^{-\frac{1}{2}}\mathrm{exp}\left\{-{t}^{2}/2\right\}dt$$
The probit model estimates the probability that a farmer participates in crop diversification (the first diversification decision). The second, quantity-of-diversification decision occurs when favorable circumstances (search, information and transaction costs) prevail to allow the diversification to be completed (Moffatt 2005). This non-negative quantity decision can only be measured for non-zero values in the first decision, and is thus estimated by truncated regression (Ground and Koch 2008).
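A minimal two-stage sketch of this estimation on simulated data (probit for the participation hurdle, truncated normal regression for the extent hurdle). The data-generating process, the parameter values, and the use of SciPy are illustrative assumptions; they are not the study's Stata estimation:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(42)
n = 5000
x = np.column_stack([np.ones(n), rng.normal(size=n)])

# Assumed DGP: hurdle 1 (participation): diversify if x'alpha + eps > 0;
# hurdle 2 (extent): for participants, y ~ N(x'beta, sigma^2) truncated at 0.
alpha_true = np.array([0.3, 0.8])
beta_true = np.array([1.0, 0.5])
sigma_true = 1.0
d = (x @ alpha_true + rng.normal(size=n)) > 0
y = np.zeros(n)
xb = x[d] @ beta_true
lo = stats.norm.cdf(0.0, loc=xb, scale=sigma_true)   # mass below truncation
u = rng.uniform(size=xb.size)
y[d] = stats.norm.ppf(lo + u * (1.0 - lo), loc=xb, scale=sigma_true)

# Hurdle 1: probit log-likelihood on the participation indicator (y > 0).
def probit_nll(a):
    p = np.clip(stats.norm.cdf(x @ a), 1e-10, 1 - 1e-10)
    return -np.sum(np.where(y > 0, np.log(p), np.log(1 - p)))

# Hurdle 2: truncated-at-zero normal regression on positive observations only.
pos = y > 0
def trunc_nll(theta):
    b, s = theta[:-1], np.exp(theta[-1])             # log-sigma keeps s > 0
    m = x[pos] @ b
    return -(stats.norm.logpdf(y[pos], m, s) - stats.norm.logcdf(m / s)).sum()

alpha_hat = optimize.minimize(probit_nll, np.zeros(2), method="BFGS").x
res = optimize.minimize(trunc_nll, np.zeros(3), method="BFGS")
beta_hat, sigma_hat = res.x[:-1], np.exp(res.x[-1])
```

Because the two hurdles are driven by independent error terms, the two likelihoods can be maximized separately, which is why the probit and the truncated regression can be run sequentially as the text describes.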
Therefore, the double-hurdle two-equation framework (Matshe and Young 2004; Kefyalew 2012) is presented as: $${CD}_{i}^{*}={z}_{i}^{\prime}\alpha +{\varepsilon }_{i}\quad \mathrm{(diversification\, decision)}$$ $${Q}_{i}^{CD**}={X}_{i}^{\prime}\beta +{\mu }_{i}\quad \mathrm{(quantity\, diversified)}$$ $$\left(\begin{array}{c}{\varepsilon }_{i}\\ {\mu }_{i}\end{array}\right)\sim N\left[\left(\begin{array}{c}0\\ 0\end{array}\right),\left(\begin{array}{cc}1& 0\\ 0& {\sigma }^{2}\end{array}\right)\right]$$ where \({CD}_{i}^{*}\) is the latent variable for the binary dependent variable, taking a value of one for crop diversification and zero otherwise. \({Q}_{i}^{CD**}\) is the latent variable reflecting the number of crops diversified. \({z}_{i}^{\prime}\), \(\alpha\) and \({\varepsilon }_{i}\) represent the vector of explanatory variables, the parameter estimates and the error term for the crop diversification decision. Likewise, \({X}_{i}^{\prime}\), \(\beta\) and \({\mu }_{i}\) represent the vector of explanatory variables, the parameter estimates and the error term for the level of crop diversification. Since an individual farmer is involved in both diversification decisions, the error terms are assumed to be independently and normally distributed, so the first hurdle corresponds to a probit model (Kefyalew 2012). The binary dependent variable of the diversification decision in Eq.
(9) is defined by $${CD}_{i}=1\, \mathrm{if}\, {CD}_{i}^{*}>0;\quad {CD}_{i}=0\, \mathrm{if}\, {CD}_{i}^{*}\le 0$$ and the decision on the level of crop diversification is defined by $${Q}_{i}^{CD*}=\mathrm{max}\left({Q}_{i}^{CD**},0\right)$$ The observed variable, \({Q}_{i}^{CD*}\) (normally presented as \(y_{i}\) in the literature), is determined as $${Q}_{i}^{CD*}={CD}_{i}\,{Q}_{i}^{CD**}$$ and the log-likelihood function for the double hurdle is: $$LogL=\sum_{0}\mathrm{ln}\left[1-\Phi \left({z}_{i}^{\prime}\alpha \right)\Phi \left(\frac{{X}_{i}^{\prime}\beta }{\sigma }\right)\right]+\sum_{+}\mathrm{ln}\left[\Phi \left({z}_{i}^{\prime}\alpha \right)\frac{1}{\sigma }\phi \left(\frac{{y}_{i}-{X}_{i}^{\prime}\beta }{\sigma }\right)\right]$$ Variables definition Many studies identify the level and determinants of crop diversification at different levels. The reviewed literature indicates that the decision on and extent of crop diversification depend on demographic, socioeconomic, farm-attribute, and institutional factors. These factors include sex, age, family size, education level, size of livestock holding, amount of off/non-farm income, farm experience, farm size, land fragmentation, plot fertility, access to credit, extension services, distance to nearest market and cooperative membership. The list of explanatory variables used in Cragg's double hurdle model and their expected signs are summarized in Table 1. Table 1 Summary of the explanatory variables used in double hurdle model Result and discussions Household characteristics and status of crop diversification The findings indicate that 58.72% of rural households in the study area did not diversify their cropping systems. Further, using the Hirschman (1964) formula, the study found that the average CDI of sampled households is 41.28%. This implies that the cropping system is not very diverse. The mean index in this study was less than the national crop diversification index of 0.83 (CSA 2020). Further, it was less than the findings of Dessie et al.
(2019) and Mekuria and Mekonnen (2018), who found indices of 0.77 and 0.57 in other rural areas of Ethiopia. Table 2 shows descriptive analyses that give a picture of the demographic and socio-economic characteristics of the diversifier and non-diversifier farmers in the study area. Table 2 Descriptive result of factors affecting crop diversification The survey result shows that two-thirds of the respondents were male-headed households while the remaining one-third were female-headed households. The average family size was 7.06, with a standard deviation of 2.24. A closer look at the economically active family members (15 to 64 years old) shows that crop diversifiers had more economically active members (3.6 persons) than non-diversifiers (2.1 persons). This implies that crop diversifiers had relatively more labor resources than non-diversifiers. However, non-diversifiers had a higher average dependency ratio (2.34) than diversifiers (1.4). The dependency ratios indicate that every economically active person had to support more than one economically inactive person in both groups. The survey results showed that 82.77% of household heads have no education. The remaining 17.23% attended different educational levels: primary school (15.44%), secondary school (1.57%), and university or college (0.52%). The average farm experience of household heads is 24.74 years, with a standard deviation of 8.68. Education level and farm experience of the household head influence crop diversification because educated farmers and those with more farm experience easily understand agricultural instructions provided by extension workers (Gauchan et al. 2005; Ibrahim et al. 2009). The findings also indicate that most rural households (66.67% of crop diversifiers and 74.29% of non-diversifiers) were members of farmers' cooperatives.
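The CDI reported in this study is computed from the Herfindahl–Hirschman index over crop area shares. A minimal sketch, assuming the common formulation CDI = 1 − HHI (the study's exact share data are not reproduced here):

```python
def crop_diversification_index(areas):
    """CDI = 1 - HHI, where HHI is the sum of squared crop area shares.

    0 indicates complete specialization (a single crop); the index
    approaches 1 as the cropping pattern becomes more diverse.
    """
    total = sum(areas)
    if total == 0:
        return 0.0
    shares = [a / total for a in areas]
    hhi = sum(s * s for s in shares)
    return 1.0 - hhi

# Hypothetical household: 2 ha wheat, 1 ha barley, 1 ha pulses
# (shares 0.5, 0.25, 0.25 -> HHI = 0.375 -> CDI = 0.625).
print(round(crop_diversification_index([2.0, 1.0, 1.0]), 3))  # → 0.625
```

A household growing only wheat would score exactly 0, which is how the 58.72% of non-diversifying households enter the sample average.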
Findings of the study showed that almost all sampled households own livestock, though the number varies. The mean livestock holding in Tropical Livestock Units (TLU) for the sample households is 7.48. Crop diversifier households have a slightly larger mean livestock holding (7.57 TLU) than non-diversifiers (7.24 TLU). All respondents own the land they operate, though holdings vary in size. Landholdings range from 0.5 to 9 ha, with an average of 2.99 ha. The mean landholding for crop diversifiers is 3.06 ha; the corresponding figure for non-diversifiers is 2.67 ha. The mean number of farm plots owned is 3.13: 3.22 plots for crop diversifiers and 2.64 for non-diversifiers. The survey results indicate that 49.87% of households have fertile plots. The comparison showed that 97.14% of crop diversifiers and 61.22% of non-diversifiers perceived that they have infertile land (Table 2). Factors influencing smallholder farmers' decision to crop diversification The results presented in Table 3 show that, overall, the model is statistically significant at p < 0.1, with Wald χ2(18) = 231.19, Pseudo R2 = 1.0294 and log likelihood = 3.3021182, indicating that the model fulfills the condition of good fit. Multicollinearity was checked using the variance inflation factor (VIF); the calculated VIF values are all less than ten (the cut-off point), indicating that multicollinearity is not a problem. Normality of the CDI was tested using a kernel density plot of the residuals.
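The reported model fit comes from maximizing Cragg's double-hurdle log-likelihood presented in the methodology (a zero part and a positive part). A minimal pure-Python sketch of evaluating that likelihood, assuming independent errors and purely hypothetical parameter values:

```python
import math

SQRT2 = math.sqrt(2.0)
SQRT2PI = math.sqrt(2.0 * math.pi)

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / SQRT2))

def phi(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / SQRT2PI

def double_hurdle_loglik(y, Z, X, alpha, beta, sigma):
    """Cragg's log-likelihood: zeros contribute ln[1 - Phi(z'a)Phi(x'b/s)];
    positives contribute ln Phi(z'a) + ln[(1/s) phi((y - x'b)/s)]."""
    ll = 0.0
    for yi, zi, xi in zip(y, Z, X):
        za = sum(a * b for a, b in zip(zi, alpha))
        xb = sum(a * b for a, b in zip(xi, beta))
        if yi == 0:
            ll += math.log(1.0 - Phi(za) * Phi(xb / sigma))
        else:
            ll += math.log(Phi(za)) + math.log(phi((yi - xb) / sigma) / sigma)
    return ll

# Tiny hypothetical sample with intercept-only hurdles: two zeros
# (no diversification) and two positive CDI outcomes.
y = [0.0, 0.0, 0.4, 0.6]
Z = X = [[1.0]] * 4
ll = double_hurdle_loglik(y, Z, X, alpha=[0.0], beta=[0.5], sigma=0.3)
print(ll)
```

An estimation routine would maximize this function over (alpha, beta, sigma), for example with a numerical optimizer; statistics such as the Wald χ2 are then derived from the maximized likelihood.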
The kernel density plot produced a reasonably smooth curve that closely resembles a normal distribution, indicating that the normality assumption was not violated (Fig. 2). Table 3 Probit regression estimates for determinants of crop diversifications Fig. 2 Kernel density estimate for crop diversification index The coefficient of age of household head is negative and significant at 1%, indicating an inverse relationship between the age of the household head and the decision to diversify crops. The result indicates that a one-year increase in the age of the household head reduces the probability of crop diversification by 0.3%. Elderly farmers were less likely to participate in crop diversification and probably engage only in food crop production, since older farmers cannot manage the farm properly and usually rely on old farming systems. This agrees with the findings of Ojo et al. (2014) and Lighton et al. (2016), who also found that a farmer's risk-bearing ability declines as his/her age increases. Farm size has a negative and significant effect on the probability of crop diversification at the 10% level of significance. The negative effect suggests that farmers with relatively small farms practice crop diversification more than those with large farms. This agrees with the hypothesis we formulated regarding the relationship between crop diversification and the landholding size of the household: because sizable farmland demands more management skill, inputs and draft power, households may not be able to produce multiple crops. On average, each additional hectare of land decreases the probability of crop diversification by 25.4%. Assefa and Gezahegn (2010) found a similar result. Plot fertility is negative and significant at p < 0.1, reflecting that holding a fertile plot decreases the probability of crop diversification.
The study further revealed that farmers with access to fertile farm plots are 4.60% less likely to diversify their agricultural production than farmers who do not have access to fertile agricultural land. The negative coefficient for the fertile plots owned and operated by a household indicates that households with fertile farm plots are less likely to diversify by growing different crops. One might surmise that productive soils give the farmer more cropping options and make participation in more than one crop enterprise more likely; however, farmers with low soil fertility are more likely to adopt diversified crop rotations, as these have been shown to contribute to higher and more stable net farm income than traditional monoculture, which over extended periods degrades soil quality and reduces crop productivity (Clark 2004). There is a positive and significant relationship between the frequency of extension contacts per year and crop diversification, with the coefficient significant at p < 0.05. This might be associated with the extension system's focus on enhancing farmers' productivity and profitability. Extension service providers favor crop diversification at the micro level and are generally aware of its role in risk minimization. We found that access to extension services increases the probability of a farmer's participation in crop diversification by 50.36%. The result is consistent with the findings of Mesfin et al. (2011), Rehima et al. (2013), Sisay (2016) and Asante et al. (2017), who found a positive relationship between the probability of crop diversification and access to extension services. Participation in off/non-farm activities negatively and significantly affects the probability of crop diversification at the 5% probability level.
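The percentage-point changes reported in this section correspond to marginal effects of the probit hurdle. For a continuous regressor \(x_k\), the probit marginal effect at a covariate point is \(\phi(x'\beta)\,\beta_k\); the sketch below uses purely hypothetical coefficients, not the study's estimates:

```python
import math

def phi(z):
    """Standard normal PDF."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def probit_marginal_effect(x, beta, k):
    """dP/dx_k = phi(x'beta) * beta_k, evaluated at covariate vector x."""
    xb = sum(a * b for a, b in zip(x, beta))
    return phi(xb) * beta[k]

# Hypothetical covariates [constant, extension contacts per year] with an
# illustrative positive coefficient on extension contact, as in the study.
x = [1.0, 3.0]
beta = [-0.5, 0.4]
me = probit_marginal_effect(x, beta, k=1)
print(round(me, 3))
```

Because \(\phi(x'\beta)\) varies with the covariates, marginal effects are usually reported as averages over the sample rather than at a single point.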
For households that participated in off/non-farm activities, the likelihood of participating in crop diversification decreases by 200.70%. The plausible explanation is that if a household receives income from off-farm work, it is less likely to pursue crop diversification as a method of reducing the financial risk associated with farming (Sandretto et al. 2004). This finding is similar to those of Lighton and Emmanuel (2016) and Dessie et al. (2019), who also found that off-farm income had a significant negative effect on crop diversification. Factors influencing the extent of crop diversification Ceteris paribus, the statistically significant variables imply an increase or decrease in the extent of crop diversification, depending on the sign of the relevant parameter estimate (see Table 4). The key factors affecting the level of crop diversification at p < 0.01 include farm land size, plot fertility, extension visits, distance to nearest market, distance to farm, and total annual income. Table 4 Truncated regression estimates for determinants of level of crop diversification As expected, the gender of the household head affected the level of crop diversification positively but insignificantly. The finding indicates that households headed by women grow more diverse varieties of crops than households headed by men. This result agrees with the findings of Assefa et al. (2022), who reported higher crop diversity on women-managed farmlands than on men-managed farmlands. The coefficient of livestock ownership is negative and significant at 10%, indicating an inverse relationship between livestock ownership and the extent of crop diversification. The results indicate that a one-unit increase in TLU among rural households decreased the level of crop diversification by 0.4%.
The explanation is that livestock, as a measure of wealth, may act as insurance against crop production risk, bearing a negative relationship with crop diversification; households with large numbers of livestock are thus less likely to grow more crops. The result is consistent with the findings of Benin et al. (2004), but in contrast to those of Fetien et al. (2009). The amount of land owned by the farmer has a positive and significant effect on the extent of crop diversification at the 1% level of significance: an additional hectare of land increases the extent of diversification by 20.9%. This implies that a large farm size may enable households to allot their land to multiple crops and thereby minimize income, production and price risks more than small landholders can. The result supports the findings of Benin et al. (2004), Fetien et al. (2009) and Rehima et al. (2013), who found a positive relationship between the level of crop diversification and total farm size. The coefficient of plot fertility significantly and negatively affected the extent of crop diversification at the 5% level of significance. Households with access to fertile farm plots decreased their levels of diversification by 7.30%. This implies that fertile land promises increased production and yield, so households may be motivated to concentrate on a more profitable crop because they can easily raise production and yield levels. This is consistent with the findings of Rehima et al. (2013) and Lighton and Emmanuel (2016), who also found that a fertile plot had a significant negative effect on crop diversification. Extension service (frequency of contacts) positively and significantly affected the extent of crop diversification at the 5% level of significance.
This implies that extension workers have an important role to play in creating awareness among farmers and educating them on the importance of diversification. A household with more frequent extension contact during the cropping period increased its extent of crop diversification by 4.80%. This finding is consistent with the research results of Ibrahim et al. (2009) and Rehima et al. (2013). The coefficient of walking distance from the residence to the nearest market significantly and positively affected the extent of crop diversification at the 5% significance level. A one-minute increase in walking time to the nearest market increased the extent of crop diversification by 0.2%. The possible explanation is that households with poor market access are more likely to rely on diversification to meet their consumption needs and to avoid transaction costs. The finding concurs with Alpízar (2007), Rehima et al. (2013) and Dessie et al. (2019), who indicated that distance from a market was positively related to crop and variety diversification. Walking distance from the residence to the farm plot significantly and negatively affected the extent of crop diversification at the 5% level of significance: as walking time to the farm plot increases by one minute, the extent of crop diversification decreases by 0.50%. This suggests that, in terms of time, labor, safety and management, households prefer to diversify their crops on the nearest farm. This finding is similar to those of Benin et al. (2004) and Sichoongwe et al. (2014), who indicated that households living farther from their farms manage less crop diversity. The coefficient of total annual income positively and significantly affected the extent of crop diversification at the 10% level of significance. We found that a 1 Birr increase in income increased the extent of crop diversification by 1.00%.
This implies that higher incomes give farmers access to critical productive resources such as farm assets, inputs, and land, which increase the extent of crop diversification. The extra income earned from one crop also provides financial resources that are used for diversification into other crops. Similar studies by Bonham et al. (2012), Rehima et al. (2015) and Basantaray and Nancharaiah (2017) indicated that crop diversification is strongly associated with higher farm income. Conclusion and recommendations The objective of this study was to analyze crop diversification among wheat-dominant producer rural households. Crop diversification was measured by the Herfindahl–Hirschman Index, while the double hurdle model was used to identify the probability and extent of crop diversification in the study area. The study found that the average CDI of sampled households was 41.28%. The results also revealed that the age of the household head and participation in off/non-farm activities negatively influence the probability of crop diversification, while household size, access to fertile plots of land, and access to extension services positively influence it. The results further showed that farm land size, extension visits, distance to the nearest market, total annual income, and access to remittance positively influence the extent of crop diversification, whereas distance to farm plots, access to fertile plots of land, and TLU negatively affect it. It can be synthesized that wheat-farming households hope to procure pulses and other food items with the income from wheat. However, one critical factor that needs further reflection is the overexploitation of the environment to sustain wheat farming and market access in such a remote area.
Mitigating this risk requires improving markets and other infrastructure, among other strategies, to enhance crop diversification. Access to extension services significantly and positively affects the likelihood of crop diversification. Given this positive effect, there is a strong need to strengthen the available extension packages to help smallholder farmers improve the probability of crop diversification. Hence, the local government should arrange experience-sharing and short-term training programs to share rich knowledge with inexperienced farmers. Membership in farmers' cooperatives significantly and positively affects the extent of crop diversification. Thus, we recommend efforts to strengthen the role of farmers' cooperatives in information dissemination, farmer-to-farmer extension, and smallholder farmers' access to markets, as well as their bargaining power for higher producer prices. Labor availability positively and significantly affects the likelihood of crop diversification in the study area. Policy and strategy makers should therefore consider the availability of labor before introducing labor-intensive technology in other similar agro-ecological areas of the country. At the same time, regional and local governments should encourage the use of labor-saving technologies in diversified farming systems. To this end, the findings suggest that policies targeting food and nutritional gains should focus on promoting crop diversification to improve the quality and variety of products from households' own farming. This requires supporting farmers through subsidies and providing access to reliable price information and inputs.
Integrating diversification strategies into the country's extension system could also help promote diverse production systems that feature cereals, cash crops, and legumes. The study considered neither crop production efficiency nor the role of the PSNP in crop diversification. Future research should focus on understanding the association between crop diversification and household crop production efficiency that could impede productivity. To address this knowledge gap, further research is needed to determine the optimum number and combination of crops and income that a household can efficiently manage without compromising the benefits of crop diversification. Further, we acknowledge that this study did not collect data on the role of the PSNP in household crop diversification. This limits our analysis, and no inference can be made about the contribution of the PSNP to household crop diversification. Given the importance of crop diversification, it is crucial to examine how the PSNP has helped transform livelihoods through crop diversification and made beneficiaries resilient in the face of shocks. In this case, it is important to analyze whether the transfer of resources in the form of food, cash, or both plays a role in households' crop diversification, which in turn relates to their ability to improve their food security status. Availability of data and materials The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. Household: A small group of persons who share the same living accommodation, who pool some, or all, of their income and wealth and who consume certain types of goods and services collectively, mainly housing and food. Herfindahl Index (HI): As proposed by Orris C. Herfindahl and Albert O.
Hirschman (1964) "to quantify the amount of competition in a given industry where the market shares are expressed as fractions between 0 and 1". The method was applied to measure the extent of crop diversification for those farmers who diversified their crops, i.e., as a measure of the crop concentration of the farm. Abbreviations ADLI: Agricultural Development Led Industrialization. BZADO: Bale Zone Agriculture Development Organization. BZFED: Bale Zone Finance and Economic Development. CDI: Crop Diversification Index. FGDs: Focus group discussions. GDP: Gross domestic product. GTPs: Growth and transformation plans. HI: Herfindahl Index. IMR: Inverse Mill's ratio. KIIs: Key informant interviews. TLU: Tropical Livestock Unit. References Acharya SP, Basavaraja LB, Kunnal SB, Mahajanashetti, Bhat AR. Crop diversification in Karnataka: an economic analysis. Agric Econ Res Rev. 2011;24:351–7. Ainembabazi JH, Mugisha J. The role of farming experience on the adoption of agricultural technologies: evidence from smallholder farmers in Uganda. J Dev Stud. 2014;50(5):666–79. Alpizar C. Risk coping strategies and rural household production efficiency: quasi experimental evidence from El Salvador. Electronic Thesis or Dissertation. 2007. https://etd.ohiolink.edu/. Asante BO, Rene AV, George B, Lan P. Determinants of farm diversification in integrated crop-livestock farming systems in Ghana. Renew Agric Food Syst. 2017;33(2):131. Ashfaq MS, Hassan MZ, Naseer IA, Baig J, Asma J. Factors affecting farm diversification in rice-wheat. Pak J Agric Sci. 2008;45(3):91–4. Assefa A, Gezahegn A. Adoption of improved technology in Ethiopia. Ethiop J Econ. 2010;19(1):155–80. Assefa W, Kewessa G, Datiko D. Agrobiodiversity and gender: the role of women in farm diversification among smallholder farmers in Sinana district, Southeastern Ethiopia. Biodivers Conserv. 2022. https://doi.org/10.1007/s10531-021-02343-z. Basantaray AK, Nancharaiah G. Relationship between crop diversification and farm. Agric Econ Res Rev. 2017;30:45–58. Bellemare MF, Barrett CB.
An ordered tobit model of market participation: evidence from Kenya and Ethiopia. Am J Agric Econ. 2006;88:324–37. Benin S, Smale M, Gebremedhin B, Pender J, Ehui S. The determinants of cereal crop diversity on farms in the Ethiopian Highlands. Contributed paper for the 25th international conference of agricultural economists, Durban, South Africa. 2004. https://doi.org/10.1016/j.agecon.2004.09.007. Block S, Timmer P. Agriculture and economic growth: conceptual issues and the Kenyan experience. Development Discussion Paper No. 498. Cambridge, MA, USA. 1994. Bonham CA, Gotor E, Beniwal BR, Canto GB, Ehsan MD, Mathur P. The patterns of use and determinants of crop diversity by pearl millet (Pennisetum glaucum (L.) R. Br.) farmers in Rajasthan. Ind J Plant Genet Resour. 2012;25(1):85–96. Burke WJ. Fitting and interpreting Cragg's Tobit alternative using Stata. Stata J. 2009;9:584–92. BZADO (Bale Zone Agricultural Development Office). Annual Report 2017. Ethiopia: Bale Zone Agricultural Development Office (Unpublished). Bale-Robe. 2017. BZFED (Bale Zone Finance and Economic Development). Annual Report 2018. Ethiopia: Bale Zone Finance and Economic Development Office (Unpublished), Bale-Robe. 2018. BZSP (Bale Zone Socio-Economic Profile). Bale-Zone Culture and Tourism office. Robe-Bale, Ethiopia. 2011. BZSP (Bale Zone Socio-Economic Profile). Bale-Zone Agriculture and Natural Resource Office. Robe-Bale, Ethiopia. 2015. CSA (Central Statistical Agency). Agricultural Sample Survey, 2018/19 Volume I: Report On Area and Production of Major Crops (Private Peasant Holdings, Meher Season). In: Statistical Bulletin 589, Addis Ababa. 2020. p. 54. Clark D. Sustainable maize production-crop rotation. Templeton: Foundation for Arable Research (New Zealand); 2004. Cragg JG. Some statistical models for limited dependent variables with application to the demand for durable goods. Econometrica. 1971;39:829–44. Davis T, Schirmer J, Isabelle A. 
Sustainability issues in agricultural development. In: Proceedings of seventh agriculture sector symposium. World Bank, Washington DC. 1987. De Janvry A, Fafchamps M, Sadoulet E. Peasant household behavior with missing markets: some paradoxes explained. Econ J. 1991;101(409):1400–17. Degye G, Belay K, Mengistu K. Does crop diversification enhance household food security? Evidence from rural Ethiopia. Adv Agric Sci Eng Res. 2012;2(11):503–15. Dessie AB, Abate TM, Mekie TM. Crop diversification analysis on red pepper dominated smallholder farming system: evidence from Northwest Ethiopia. Ecol Processes. 2019;8:50. Eisenhauer N. Plant diversity effects on soil microorganisms: spatial and temporal heterogeneity of plant inputs increase soil biodiversity. Pedobiologia. 2016;59(4):175–7. Ellis F. Peasant economics: farm household and agrarian development. Cambridge: Cambridge University Press; 1993. Ellis F. The determinants of rural livelihood diversification in developing countries. J Agric Econ. 2000;51(2):289–302. https://doi.org/10.1111/j.1477-9552.2000.tb01229.x. Engel C, Moffatt PG. dhreg, xtdhreg, and bootdhreg: commands to implement Double Hurdle Regression. Stata J. 2014;14:778–97. Fafchamps M. Cash crop production, food price volatility, and rural market integration in the third world. Am J Agr Econ. 1992;74(1):90–9. FAO. Human energy requirements: FAO food and nutrition technical report series 1. Rome: FAO; 2004. FAO. Crop diversification for sustainable diets and nutrition: the role of FAO's Plant Production and Protection Division. Rome: Technical report Plant Production and Protection Division, Food and Agriculture Organization of the United Nations; 2012. FAO. Pulse contribution to food security: International Pulse Day. Rome: FAO; 2016. FAO, IFAD, UNICEF, WFP, WHO. The state of food security and nutrition in the world. Building resilience for peace and food security. Rome: FAO; 2017. Feliciano D. 
A review on the contribution of crop diversification to Sustainable Development Goal 1 "No poverty" in different world regions. Sustain Dev. 2019;27:795–808. Fetien A, Bjornstad A, Smale M. Measuring on farm diversity and determinants of barley diversity in Tigray, Northern Ethiopia. MEJS. 2009;1(2):44–66. Gani B, Adeoti A. Analysis of market participation and rural poverty among farmers in northern part of Taraba State, Nigeria. J Econ. 2011;2:23–36. García B. Implementation of a Double-hurdle model. Stata J. 2013;13:776–94. Gauchan D, Smale M, Maxted N, Cole M, Sthapit B, Jarvis D. Socioeconomic and agroecological determinants of conserving diversity on-farm: the case of rice genetic resources in Nepal. Nepal Agric Res J. 2005;6. Gebremedhin B, Jaleta M. Commercialization of smallholders: is market participation enough? In: Proceedings of the joint 3rd African Association of Agricultural Economists (AAAE) and 48th Agricultural Economists Association of South Africa (AEASA), Cape Town, South Africa, 19–23 September 2010. 2010. Ground M, Koch SF. Hurdle models of alcohol and tobacco expenditure in South African households. S Afr J Econ. 2008;76:132–43. Heckman JJ. Sample selection bias as a specification error. Econometrica. 1979;47(1):153–61. Hirschman AO. The paternity of an index. Am Econ Rev. 1964;54(5):761. Hitayezu P, Zegeye EW, Ortmann GF. Farm-level crop diversification in the Midlands region of Kwazulu-Natal, South Africa: patterns, microeconomic drivers, and policy implications. Agroecol Sustain Food Syst. 2016;40(6):553–82. Ibrahim H, Rahman S, Envulus E, Oyewole S. Income and crop diversification among farming households in a rural area of north central Nigeria. J Trop Agric Food Envi Exten. 2009;8(2):84–9. Kanyua MJ, Ithinji GK, Muluvi AS, Gido OE, Waluse SK. Factors influencing diversification and intensification of horticultural production by smallholder tea farmers in Gatanga District, Kenya. Curr Res J Soc Sci. 2013;5(4):103–11.
Kassie M, Shiferaw B, Geoffrey M. Agricultural technology, crop income, and poverty alleviation in Uganda. World Dev. 2011;39(10):1784–95. Kefyalew G. Analysis of smallholder farmer's participation in production and marketing of export potential crops: the case of sesame in Diga District, East Wollega Zone of Oromia Regional State. Master's Thesis, Addis Ababa University, Addis Ababa, Ethiopia, 2012. Kiwanuka RN, Machethe C. Determinants of smallholder farmers' participation in Zambian dairy sector's interlocked contractual arrangements. J Sustain Dev. 2016;2016(9):230–45. Khanal R, Mashra K. Enhancing food security: Food crop portfolio choice in response to climatic risk in India. Glob Food Sec. 2017;1(12):22–30. Komarek A. The determinants of banana market commercialization in Western Uganda. Afr J Agric Res. 2010;5:775–84. Lighton D, Emmanuel G. Factors influencing smallholder crop diversification: a case study of Manicaland and Masvingo Provinces in Zimbabwe. Int J Reg Dev. 2016;3(2):2373–9851. Magurran A. Ecological diversity and its measurement. Princeton: Princeton University Press; 1988. https://doi.org/10.1007/978-94-015-7358-0. Malik DP, Singh IJ. Crop diversification-an economic analysis. Indian J Agric Res. 2002;36(1):61–4. Mandal R, Bezbaruah M. Diversification of cropping pattern: its determinants and role in flood affected agriculture of Assam Plains. Indian J Agricu Econ. 2013;68(2):170–81. Martey E, Al-Hassan RM, Kuwornu JK. Commercialization of smallholder agriculture in Ghana: a Tobit regression analysis. Afr J Agric Res. 2012;7:2131–41. Matshe I, Young T. Off-farm labour allocation decisions in small-scale rural households in Zimbabwe. Agric Econ. 2004;30:175–86. Mazunda J, Kankwamba H, Pauw K. Chapter 5: Food and nutrition security implications of crop diversification in Malawi's farm households. In: Aberman N-L, Meerman J, Benson T, editors. Agriculture, food security, and nutrition in Malawi: leveraging the links. 
Washington, D.C.: International Food Policy Research Institute (IFPRI); 2018. p. 53–60. https://doi.org/10.2499/9780896292864_05. Mekuria W, Mekonnen K. Determinants of crop-livestock diversification in the mixed farming systems: evidence from central highlands of Ethiopia. Agric Food Secur. 2018;7(1):60. Mesfin W, Fufa B, Haji J. Pattern, trend and determinants of crop diversification: empirical evidence from smallholders in Eastern Ethiopia. J Econ Sustain Dev. 2011;2(8):78–89. Michler JD, Josephson AL. To specialize or diversify: agricultural diversity and poverty dynamics in Ethiopia. World Dev. 2017;89:214–26. MoFED. Growth and transformation plan (GTP) 2015/16–2019/20. 2015; Addis Ababa, Ethiopia. Moffatt PG. Hurdle models of loan default. J Oper Res Soc. 2005;56:1063–71. Mugendi NE. Crop diversification: a potential strategy to mitigate food insecurity by smallholders in sub-Saharan Africa. J Agric Food Syst Community Dev. 2013;3(4):63–9. Mukherjee A. Evaluation of the policy of crop diversification as a strategy for reduction of rural poverty in India. In: Heshmati A, Maasoumi E, Wan G, editors. Poverty reduction policies and practices in developing Asia. Economic studies in inequality, social exclusion and well-being. Mandaluyong: Asian Development Bank; 2015. https://doi.org/10.1007/978-981-287-420-7_7. Mussema R, Belay K, Dawit A, Shahidur R. Determinants of crop diversification in Ethiopia: evidence from Oromia Region. Ethiop J Agric Sci. 2015;25(2):65–76. Ojo M, Ojo A, Odine A, Ogaji A. Determinants of crop diversification among small-scale food crop farmers in north central, Nigeria. Prod Agric Technol J. 2014;10(2):1–11. Pellegrini L, Tasciotti L. Crop diversification, dietary diversity and agricultural income: empirical evidence from eight developing countries. Can J Dev Stud. 2014;35(2):211–27. Poudel S, Basavaraja H, Kunnal L, Mahajanashetti S, Bhat A. Crop diversification in Karnataka: an economic analysis. 
Dharwad: Department of Agricultural Economics, University of Agricultural Sciences; 2012. Rahm R, Huffman E. The adoption of reduced tillage: the role of human capital and other variables. Am J Agric Econ. 1984;66(4):405–13. https://doi.org/10.2307/1240918. Rehima M, Belay M, Dawit A, Shahidur R. Factors affecting farmers' crops diversification: evidence from SNNPR, Ethiopia. Int J Agric Sci. 2013;3(6):558–65. Rehima M, Belay K, Dawit A, Rashid S. Determinants of crop diversification in Ethiopia: evidence from Oromia Region. Ethiop J Agric Sci. 2015;25(2):65–76. Ruben R, Van den Berg M. Nonfarm employment and poverty alleviation of rural farm households in Honduras. World Dev. 2001;29(3):549–60. Sandretto CL, Mishra AK, El-Osta HS. Factors affecting farm enterprise diversification. Agric Financ Rev. 2004;64(2):151. Sibhatu KT, Krishna V, Qaim M. Production diversity and dietary diversity in smallholder farm households. Proc Nat Acad Sci USA (PNAS). 2015;112:10657–62. https://doi.org/10.1073/pnas.1510982112. Sichoongwe K, Laqrene M, Ng'ng'ola D, Temb G. The Determinants and Extent of Crop Diversification among Smallholder Farmers. A case study of Southern Province, Zambia, Malawi strategy support program, working papers 05, Washington DC. 2014. https://doi.org/10.5539/jas.v6n11p150. Singh I, Squire L, Strauss J. Agricultural household models: extensions, policy and applications. Baltimore: John Hopkins University; 1986. Sisay D. Agricultural technology adoption, crop diversification and efficiency of maize-dominated smallholder farming system in Jimma Zone, Southwestern Ethiopia. Electronic Thesis or Dissertation. Haramaya University, Ethiopia; 2016. Swades P, Shyamal K. Implications of the methods of agricultural diversification in reference with Malda district: drawback and rationale. Int J Food Agric Vet Sci. 2012;2(2):97–105. Truscott L, Aranda D, Nagarajan P, Tovignan S, Travaglini AL. A snapshot of crop diversification in organic cotton farms. 
In Discussion paper. Soil Association. 2009. Tura EG, Goshu D, Demisie T, Kenea T. Determinants of market participation and intensity of marketed surplus of teff producers in Bacho and Dawo Districts of Oromia State, Ethiopia. J Agric Econ Dev. 2016;5:20–32. UNDP (United Nation Development Program). Towards building resilience and supporting transformation in Ethiopia. Annual report. Addis Ababa, Ethiopia. 2016. Van den Broeck G, Maertens M. Horticultural exports and food security in developing countries. Glob Food Sec. 2016;10:11–20. Van Dusen ME, Taylor JE. Missing markets and crop diversity: evidence from Mexico. Environ Dev Econ. 2005;10(4):513–31. Veljanoska S. Agricultural risk and remittances: the case of Uganda. International Congress, EAAE. Congress 'Agri-Food and Rural Innovations for Healthier Societies', August 26–29, 2014; Ljubljana, Slovenia. European Association of Agricultural Economists. 2014. Wanyoike F, Mtimet N, Ndiwa N, Marshall K, Godiah L, Warsame A. Knowledge of livestock grading and market participation among small ruminant producers in northern Somalia. East Afr Agric for J. 2015;81:64–70. Winters P, Cavatassi R, Lipper L. Sowing the seeds of social relations: the role of social capital in crop diversity. ESA Working Paper. 2006; No. 6, (16): pp. 1–40. World Bank. Productive diversification of African agriculture and its effects on resilience and nutrition. Washington, DC: World Bank; 2018. Yamane T. Statistics: an introductory analysis. 2nd ed. New York: Harper and Row; 1967. Yami M, Teklu T, Adam B. Determinants of farmers' participation decision on local seed multiplication in Amhara region, Ethiopia: a double hurdle approach. Int J Sci Res. 2013;2:423–30. The authors acknowledge respondent farmers and data collectors. Addis Ababa University, Madda Walabu University were also acknowledged for their financial and logistical support in accomplishing this paper. 
The authors would also like to extend their deep thanks to all contributors to this study, namely the local administrators and development agents who took part in the survey. This research was supported and funded by Addis Ababa University.
Department of Rural Development and Agricultural Extension, College of Agriculture and Natural Resources, Madda Walabu University, Box: 247, Bale-Robe, Ethiopia: Dereje Derso
Centre for Rural Development, College of Development Studies, Addis Ababa University, Addis Ababa, Ethiopia: Degefa Tolossa & Abrham Seyoum
DD designed the study, collected the data, performed the analysis, and developed the manuscript. DT and AS contributed to the research design and analysis, and reviewed and made editorial comments on the draft manuscript. All authors read and approved the final manuscript.
Correspondence to Dereje Derso.
The researchers obtained a support letter from Addis Ababa University. The letter was submitted to the Bale Zone Agriculture and Natural Resource Office, which gave its consent and then wrote an official letter to the Sinana District Agriculture and Natural Resource Office, where the study was conducted. Informed consent was also obtained from the households, discussants, and informants before data collection, preserving the anonymity of the study participants.
The authors declare they have no competing interests.
See Table 5: Collinearity statistics for variables in the double hurdle model.
Derso, D., Tolossa, D. & Seyoum, A. A double hurdle estimation of crop diversification decisions by smallholder wheat farmers in Sinana District, Bale Zone, Ethiopia. CABI Agric Biosci 3, 25 (2022). https://doi.org/10.1186/s43170-022-00098-3
Crop diversification
Double hurdle
Sinana District
Bale Zone
Gradshteyn and Ryzhik

Gradshteyn and Ryzhik (GR) is the informal name of a comprehensive table of integrals originally compiled by the Russian mathematicians I. S. Gradshteyn and I. M. Ryzhik. Its full title today is Table of Integrals, Series, and Products.

Table of Integrals, Series, and Products
Gradshteyn and Ryzhik, seventh English edition, 2007
Author: Ryzhik, Gradshteyn, Geronimus, Tseytlin et al.
Country: Russia
Language: Russian, German, Polish, English, Japanese, Chinese
Genre: Math
Publisher: Academic Press
Publication date: 1943, 2015

Since its first publication in 1943, the table has been considerably expanded, and it soon became a "classic" and highly regarded reference for mathematicians, scientists and engineers. After the deaths of the original authors, the work was maintained and further expanded by other editors. At some stage a German and English dual-language translation became available, followed by Polish, English-only and Japanese versions. After several further editions, the Russian and German-English versions went out of print and were not updated after the fall of the Iron Curtain, but the English version is still being actively maintained and refined by new editors, and it has recently been retranslated back into Russian as well.

Overview

One of the valuable characteristics of Gradshteyn and Ryzhik compared to similar compilations is that most listed integrals are referenced. The literature list contains 92 main entries and 140 additional entries (in the eighth English edition). The integrals are classified by numbers, which have not changed from the fourth Russian up to the seventh English edition (the numbering in older editions, as well as in the eighth English edition, is not fully compatible). The book not only contains the integrals but also lists additional properties and related special functions. It also includes tables for integral transforms.
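To give a flavor of the closed forms such a table collects, the sketch below numerically checks one classic definite integral of the kind GR lists among its Bose-type entries, ∫₀^∞ x/(eˣ − 1) dx = π²/6 (the section number 3.411 sometimes cited for this family is quoted here from memory and should be verified against an actual edition). It uses only the Python standard library and a hand-rolled composite Simpson's rule.

```python
import math

def bose_integrand(x: float) -> float:
    """x / (e^x - 1), with the removable singularity at x = 0 filled in by its limit 1."""
    return 1.0 if x == 0.0 else x / math.expm1(x)

def simpson(f, a: float, b: float, n: int) -> float:
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += f(a + i * h) * (4 if i % 2 else 2)
    return total * h / 3

# Truncate at x = 60: the integrand decays like x * e^{-x}, so the discarded tail
# is far smaller than the quadrature error itself.
approx = simpson(bose_integrand, 0.0, 60.0, 200_000)
exact = math.pi ** 2 / 6  # the tabulated closed form, Gamma(2) * zeta(2)
assert abs(approx - exact) < 1e-9
```

Note the use of `math.expm1`, which keeps eˣ − 1 accurate near x = 0; this is exactly the kind of numerical spot check errata hunters have long run against individual table entries.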
Another advantage of Gradshteyn and Ryzhik compared to computer algebra systems is the fact that all special functions and constants used in the evaluation of the integrals are listed in a registry as well, thereby allowing reverse lookup of integrals based on special functions or constants. On the downside, Gradshteyn and Ryzhik has become known to contain a relatively high number of typographical errors even in newer editions, which has repeatedly led to the publication of extensive errata lists. Earlier English editions were also criticized for their poor translation of mathematical terms[1][2][3] and mediocre print quality.[1][2][4][5]

History

The work was originally compiled by the Russian mathematicians Iosif Moiseevich Ryzhik (Russian: Иосиф Моисеевич Рыжик, German: Jossif Moissejewitsch Ryschik)[6][nb 1] and Izrail Solomonovich Gradshteyn (Russian: Израиль Соломонович Градштейн, German: Israil Solomonowitsch Gradstein).[6][nb 2] While some contents were original, significant portions were collected from previously existing integral tables such as David Bierens de Haan's Nouvelles tables d'intégrales définies (1867),[6][7] Václav Jan Láska's Sammlung von Formeln der reinen und angewandten Mathematik (1888–1894)[6][8] and Edwin Plimpton Adams' and Richard Lionel Hippisley's Smithsonian Mathematical Formulae and Tables of Elliptic Functions (1922).[6][9] The first edition, which contained about 5 000 formulas,[10][11][nb 3] was authored by Ryzhik,[nb 1] who had already published a book on special functions in 1936[6][12] and died during World War II around 1941.[6] Without this fact being announced, his compilation was published posthumously[6][nb 1] in 1943, followed by a second corrected edition in his name in 1948.[nb 4] The third edition (1951) was worked on by Gradshteyn,[13] who also introduced the chapter numbering system in decimal notation.
Gradshteyn planned considerable expansion for the fourth edition, a work he could not finish due to his own death.[6][nb 2] Therefore, the fourth (1962/1963) and fifth (1971) editions were continued by Yuri Veniaminovich Geronimus (Russian: Юрий Вениаминович Геронимус, German: Juri Weniaminowitsch Geronimus)[6][nb 5] and Michail Yulyevich Tseytlin (Russian: Михаил Ю́льевич Цейтлин, German: Michael Juljewitsch Zeitlin).[nb 6] The fourth edition already contained about 12 000 formulas.[14][nb 3]

Based on the third Russian edition, the first German-English edition with 5 400 formulas[15][nb 3] was published in 1957 by the East German Deutscher Verlag der Wissenschaften (DVW), with German translations by Christa[nb 7] and Lothar Berg[nb 8] and the English texts by Martin Strauss.[nb 9] In Zentralblatt für Mathematik Karl Prachar wrote:[16] "Die sehr reichhaltigen Tafeln wurden nur aus dem Russischen ins Deutsche und Englische übersetzt." (Translation: The very comprehensive tables were only translated from Russian into German and English.) In 1963, it was followed by the second edition, a reprint edition with a four-page inlet listing corrections compiled by Eldon Robert Hansen. Derived from the 1963 edition, but considerably expanded and split into two volumes, the third German-English edition by Ludwig Boll[nb 10] was finally published by MIR Moscow in 1981 (with permission of DVW and distributed through Verlag Harri Deutsch in the Western world); it incorporated the material of the fifth Russian edition (1971) as well.[nb 11]

Before this third German-English edition appeared, an English-only edition by Alan Jeffrey[nb 12] had been published in 1965. Lacking a clear designation itself, it was variously known as the first, third or fourth English edition, as it was based on the then-current fourth Russian edition. The formulas were photographically reproduced and the text translated.
This still held true for the expanded fourth English edition in 1980, which added chapters 10 to 17.[17] Both of these editions saw multiple print runs, each incorporating newly found corrections. Starting with the third printing, updated table entries were marked by adding a small superscript number to the entry number indicating the corresponding print run ("3" etc.), a convention carried over into later editions by continuing to increase the superscript number as a kind of revision number (no longer directly corresponding with actual print runs). The fifth edition (1994), which contained close to 20 000 formulas,[18][nb 3] was electronically reset[3] in preparation for a CD-ROM issue of the fifth edition (1996) and in anticipation of further editions. Since the sixth edition (2000), now corresponding with superscript number "10", Daniel Zwillinger[nb 13] has contributed as well. The last edition edited by Jeffrey before his death[nb 12] was the seventh English edition, published in 2007 (with superscript number "11").[19] This edition has been retranslated back into Russian as the "seventh Russian edition" in 2011.[20][nb 11] For the eighth edition (2014/2015, with superscript number "12"), Zwillinger took over as editor, assisted by Victor Hugo Moll.[nb 14] In order to make room for additional information without increasing the size of the book significantly, the former chapter 11 (on algebraic inequalities), chapters 13 to 16 (on matrices and related results, determinants, norms, and ordinary differential equations) and chapter 18 (on z-transforms), worth about 50 pages in total, were removed and some chapters renumbered (12 to 11, 17 to 12).
This edition contains more than 10 000 entries.[21][nb 3]

Related projects

In 1995, Alan Jeffrey published his Handbook of Mathematical Formulas and Integrals.[22] It was partially based on the fifth English edition of Gradshteyn and Ryzhik's Table of Integrals, Series, and Products and meant as a companion, but written to be more accessible for students and practitioners.[22] It went through four editions up to 2008.[22][23][24][25] The fourth edition also took advantage of changes incorporated into the seventh English edition of Gradshteyn and Ryzhik.[25]

Inspired by a 1988 paper in which Ilan Vardi proved several integrals in Gradshteyn and Ryzhik,[26] Victor Hugo Moll with George Boros started a project to prove all integrals listed in Gradshteyn and Ryzhik and add additional commentary and references.[27] In the foreword of the book Irresistible Integrals (2004), they wrote:[28] "It took a short time to realize that this task was monumental." Nevertheless, the efforts have meanwhile resulted in about 900 entries from Gradshteyn and Ryzhik discussed in a series of more than 30 articles,[29][30][31] of which papers 1 to 28[lower-alpha 1] have been published in issues 14 to 26 of Scientia, Universidad Técnica Federico Santa María (UTFSM), between 2007 and 2015,[60] and have already been compiled into the two-volume book series Special Integrals of Gradshteyn and Ryzhik: the Proofs (2014–2015).[61][62]

Editions

Russian editions

• Рыжик, И. М. (1943). Таблицы интегралов, сумм, рядов и произведений [Tables of integrals, sums, series and products] (in Russian) (1 ed.). Moscow: Gosudarstvennoe Izdatel'stvo Tehniko-Teoretičeskoj Literatury (Государственное издательство технико-теоретической литературы) (GITTL / ГИТТЛ) (Tehteoretizdat / Техтеоретиздат). LCCN ltf89006085. 400 pages.[10][63]
• Рыжик, И. М. (1948). Таблицы интегралов, сумм, рядов и произведений (in Russian) (2 ed.).
Moscow: Gosudarstvennoe Izdatel'stvo Tehniko-Teoretičeskoj Literatury (Государственное издательство технико-теоретической литературы) (GITTL / ГИТТЛ) (Tehteoretizdat / Техтеоретиздат). 400 pages.[11]
• Рыжик, И. М.; Градштейн, И. С. (1951). Таблицы интегралов, сумм, рядов и произведений (in Russian) (3 ed.). Moscow: Gosudarstvennoe Izdatel'stvo Tehniko-Teoretičeskoj Literatury (Государственное издательство технико-теоретической литературы) (GITTL / ГИТТЛ) (Tehteoretizdat / Техтеоретиздат). LCCN 52034158. 464 pages (+ errata inlet).[63][64][65]
• Градштейн, И. С.; Рыжик, И. М. (1963) [1962]. Геронимус, Ю. В.; Цейтлин, М. Ю́. (eds.). Tablitsy Integralov, Summ, Riadov I Proizvedenii / Таблицы интегралов, сумм, рядов и произведений (in Russian) (4 ed.). Moscow: Gosudarstvennoe Izdatel'stvo Fiziko-Matematicheskoy Literatury (Государственное издательство физико-математической литературы) (Fizmatgiz / Физматгиз). LCCN 63027211. 1100 pages (+ 8 page errata inlet in later print runs).[14] Errata:[66]
• Градштейн, И. С.; Рыжик, И. М. (1971). Геронимус, Ю. В.; Цейтлин, М. Ю́. (eds.). Таблицы интегралов, сумм, рядов и произведений (in Russian) (5 ed.). Nauka (Наука). LCCN 78876185. Dark brown fake-leather, 1108 pages.[nb 11]
• Градштейн, И. С.; Рыжик, И. М.; Геронимус, Ю. В.; Цейтлин, М. Ю́. (2011). Jeffrey, Alan; Zwillinger, Daniel (eds.). Таблицы интегралов, ряда и продуктов [Table of Integrals, Series, and Products] (in Russian). Translated by Maximov, Vasily Vasilyevich [in Russian] (7 ed.). Saint-Petersburg, Russia: BHV (БХВ-Петербург). ISBN 978-5-9775-0360-0. GR:11. l+1182 pages.[20][nb 11]

German editions

• Ryshik, Jossif Moissejewitsch; Gradstein, Israil Solomonowitsch (1957). Tafeln / Tables: Summen-, Produkt- und Integral-Tafeln / Tables of Series, Products, and Integrals (in German and English). Translated by Berg, Christa; Berg, Lothar; Strauss, Martin. Foreword by Schröder, Kurt Erich (1 ed.). Berlin, Germany: Deutscher Verlag der Wissenschaften. LCCN 58028629.
DNB-IDN 454242255, Lizenz-Nr. 206, 435/2/57. Retrieved 2016-02-21. Gray or green linen, xxiii+438 pages.[15][16] Errata:[65][67][68][69][70][71][72]
• Ryshik, Jossif Moissejewitsch; Gradstein, Israil Solomonowitsch (1963). Tafeln / Tables: Summen-, Produkt- und Integral-Tafeln / Tables of Series, Products, and Integrals (in German and English). Translated by Berg, Christa; Berg, Lothar; Strauss, Martin. Foreword by Schröder, Kurt Erich (2nd corrected ed.). Berlin, Germany: VEB Deutscher Verlag der Wissenschaften (DVW). LCCN 63025905. DNB-IDN 579497747, Lizenz-Nr. 206, 435/93/63. Retrieved 2016-02-21. (Printed by VEB Druckerei "Thomas Müntzer", Bad Langensalza. Distributed in the USA by Plenum Press, Inc., New York.) Green linen, xxiii+438 pages + 4 page errata inlet. Errata:[70]
• Gradstein, Israil Solomonowitsch; Ryshik, Jossif Moissejewitsch (1981). Geronimus, Juri Weneaminowitsch; Zeitlin, Michael Juljewitsch (eds.). Tafeln / Tables: Summen-, Produkt- und Integraltafeln / Tables of Series, Products, and Integrals (in German and English). Translated by Boll, Ludwig (3 ed.). Berlin / Thun / Frankfurt am Main / Moscow: Verlag Harri Deutsch / Verlag MIR Moscow. ISBN 3-87144-350-6. LCCN 82202345. DNB-IDN 551448512, 881086274, 881086282. Gray linen with gilded embossing by A. W. Schipow, 2 volumes, 677+3 & 504 pages.[73][74]

Polish edition

• Ryżyk, I. M.; Gradsztejn, I. S. (1964). Tablice całek, sum, szeregów i iloczynów (in Polish). Translated by Malesiński, Roman (1 ed.). Warsaw, Poland: Państwowe Wydawnictwo Naukowe (PWN). OCLC 749996828. VIAF 309184374. Retrieved 2016-02-16. Light grayish cover, 464 pages.

English editions

• Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich (February 1966) [1965]. Jeffrey, Alan (ed.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (3 ed.). Academic Press. ISBN 0-12-294750-9. LCCN 65029097. Retrieved 2016-02-21.
Black cloth hardcover with gilt titles, white dust jacket, xiv+1086 pages.[1] Errata:[1][72][75][76][77][78][79][80][81][82][83][84][85][86][87][88][89][90][91][92]
• Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich (1980). Jeffrey, Alan (ed.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (4th corrected and enlarged ed.). Academic Press, Inc. ISBN 978-0-12-294760-5. GR:4,5,6,7. Retrieved 2016-02-21. xlvi+1160 pages.[2][17] Errata:[2][17][93][94][95][96][97][98][99][100][101]
• Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich (January 1994). Jeffrey, Alan (ed.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (5 ed.). Academic Press, Inc. MR 1243179. Retrieved 2016-02-21. Blue hardcover with green or blue rectangular and gilt titles, xlvii+1204 pages.[3][18][4][5] (A CD-ROM version with ISBN 0-12-294756-8 / ISBN 978-0-12-294756-8 and LCCN 96-801532 was prepared by Lightbinders, Inc. in July 1996.[102][103][4][5]) Errata:[3][104][105][106][107][4][5][108]
• Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich (July 2000). Jeffrey, Alan; Zwillinger, Daniel (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica, Inc. (6 ed.). Academic Press, Inc. ISBN 978-0-12-294757-5. MR 1773820. GR:10. Retrieved 2016-02-21. Red cover, xlvii+1163 pages.[109] (A reprint edition "积分, 级数和乘积表" by World Books Press became available in China under ISBN 7-5062-6546-X / ISBN 978-7-5062-6546-1 in April 2004.) Errata:[32][41][45][109][110][111][112][113]
• Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich (February 2007). Jeffrey, Alan; Zwillinger, Daniel (eds.). Table of Integrals, Series, and Products.
Translated by Scripta Technica, Inc. (7 ed.). Academic Press, Inc. ISBN 978-0-12-373637-6. MR 2360010. GR:11. Retrieved 2016-02-21. xlviii+1171 pages, with CD-ROM.[19][114] (A reprint edition "积分, 级数和乘积表" by Beijing World Publishing Corporation (世界图书出版公司北京公司 / WPCBJ) became available in China under ISBN 7-5062-8235-6 / ISBN 978-7-5062-8235-2 in May 2007.) Errata:[41][45][47][51][53][57][59][115]
• Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015) [October 2014]. Zwillinger, Daniel; Moll, Victor Hugo (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica (8 ed.). Academic Press, Inc. ISBN 978-0-12-384933-5. GR:12. Retrieved 2016-02-21. xlvi+1133 pages.[21] Errata:[116][30][117]

Japanese edition

• Градштейн (Guradoshu グラドシュ), И. С.; Рыжик (Rijiku リジク), И. М. (December 1983). Sūgaku daikōshikishū 数学大公式集 [Large mathematics collection] (in Japanese). Translated by Otsuki, Yoshihiko (大槻 義彦) [in Japanese] (1 ed.). Tokyo, Japan: Maruzen (丸善). ISBN 978-4-621-02796-7. NCID BN00561932. JPNO JP84018271. Retrieved 2016-04-06. xv+1085 pages.

See also

• Prudnikov, Brychkov and Marichev (PBM)
• Bronshtein and Semendyayev (BS)
• Abramowitz and Stegun (AS)
• NIST Handbook of Mathematical Functions (DLMF)
• Jahnke and Emde (JE)
• Magnus and Oberhettinger (MO)

Notes

1. [32][33][34][35][36][37][38][39][40][41][42][43][44][45][46][47][48][49][50][51][52][53][54][55][56][57][58][59]

1. Iosif Moiseevich Ryzhik (Иосиф Моисеевич Рыжик)[nb 15] (1918?–1941?). VIAF 15286520. GND 107340518, 1087809320. (NB. Some sources identify him as a sergeant (сержантом) born in 1918, originally from Vitebsk (Витебска), who was drafted into the army in 1939 from Chkalovsk (Чкаловска), Orenburg (Оренбург), and went missing in December 1941. However, since a birth year of 1918 would have made him a very young author, this could also have been a namesake.
In the foreword of the first edition of the book, Ryzhik thanked three mathematicians of the Moscow Mathematical Society for their suggestions and advice: Vyacheslav Vassilievich Stepanov (Вячеслав Васильевич Степанов), Aleksei Ivanovich Markushevich (Алексей Иванович Маркушевич), and Ilya Nikolaevich Bronshtein (Илья Николаевич Бронштейн), suggesting that he must have been in some way associated with this group.)
2. Izrail Solomonovich Gradshteyn (Израиль Соломонович Градштейн) (1899, Odessa – 1958, Moscow). VIAF 20405466, VIAF 310677818, VIAF 270418384. ISNI 0000000116049405. GND 11526194X.
3. Following the sources, this article distinguishes between the documented number of formulas and the number of entries.
4. The fact that Ryzhik's death was not announced before the third edition of the book in 1951 might indicate that his status was unclear for a number of years, or, in the case of the first edition, that typesetting had already started, but actual production of the book had to be delayed and was then finalized in his absence as a consequence of the war.
5. Yuri Veniaminovich Geronimus (Юрий Венеаминович Геронимус) (1923–2013), GND 131451812.
6. Michail Yulyevich Tseytlin (Михаил Ю́льевич Цейтлин), also as M. Yu. Ceitlin, Michael Juljewitsch Zeitlin, Michael Juljewitsch Zeitlein, Michael Juljewitsch Tseitlin, Mikhail Juljewitsch Tseitlin (?–).
7. Christa Berg née Jahncke (?–), GND 122341597 (this entry contains an incorrect birth year and some incorrectly associated books).
8. Lothar Berg (1930-07-28 to 2015-07-27), GND 117708054.
9. Martin D. H. Strauss, also as Martin D. H. Strauß (1907-03-18 Pillau, Baltijsk, Ostpreußen – 1978-05-17, East-Berlin, GDR), GND 139569200, German physicist and philosopher.
10. Ludwig Boll (1911-12-10 Gaulsheim, Germany – 1984-12-02), GND 1068090308, German mathematician.
11. The seventh Russian edition (2011) was named after the seventh English edition (2007), of which it was a retranslation.
There was no sixth genuinely Russian edition. The English series of editions was originally (1965) based on the fourth Russian edition (1962/1963). It is unknown if any changes for the fifth Russian edition (1971) or the third German-English edition (1981), which did incorporate material from the fifth Russian edition, were reflected in any of the English editions in between (and thereby in the seventh Russian edition as well).
12. Alan Jeffrey (1929-07-16 to 2010-06-06), GND 113118120.
13. Daniel "Dan" Ian Zwillinger (1957–), GND 172475694.
14. Victor Hugo Moll (1956–), GND 173099572.

References

1. Shanks, Daniel (October 1966). "Reviews and Descriptions of Tables and Books 85: Table of Integrals, Series, and Products by I. S. Gradshteyn, I. M. Ryzhik" (PDF). Mathematics of Computation (review). 20 (96): 616–617. doi:10.2307/2003554. JSTOR 2003554. RMT:85. Retrieved 2016-03-04.
2. Luke, Yudell Leo (January 1981). "Reviews and Descriptions of Tables and Books 5: I. S. Gradshteyn and I. M. Ryzhik, Tables of Integrals, Series and Products, Academic Press, New York, 1980" (PDF). Mathematics of Computation (review). 36 (153): 310–314. doi:10.2307/2007757. JSTOR 2007757. MSC:7.95,7.100. Retrieved 2016-03-04.
3. Kölbig, Kurt Siegfried (January 1995). "Reviews and Descriptions of Tables and Books 1: I. S. Gradshteyn & I. M. Ryzhik, Table of Integrals, Series, and Products, 5th ed. (Alan Jeffrey, ed.), Academic Press, Boston, 1994" (PDF). Mathematics of Computation. 64 (209): 439–441. doi:10.2307/2153347. JSTOR 2153347. MSC:00A22,33-00,44-00,65-00. Retrieved 2016-03-03.
4. Wimp, Jet (April 1997). "Tables of Integrals, Series and Products By I. S. Gradshteyn and I. M. Ryzhik, edited by Alan Jeffrey". American Mathematical Monthly.
5. Wimp, Jet (October 1997) [1997-09-30]. Koepf, Wolfram (ed.). "2. Tables of Integrals, Series and Products By I. S. Gradshteyn and I. M. Ryzhik, edited by Alan Jeffrey" (PDF). Books and Journals: Reviews.
Orthogonal Polynomials and Special Functions. Vol. 8, no. 1. SIAM Activity Group on Orthogonal Polynomials and Special Functions. pp. 13–16. Archived (PDF) from the original on 2022-01-20. Retrieved 2022-01-23. 6. Wolfram, Stephen (2005-10-08). "The History and Future of Special Functions". Wolfram Technology Conference, Festschrift for Oleg Marichev, in honor of his 60th birthday (speech, blog post). Champaign, IL, USA: Stephen Wolfram, LLC. The story behind Gradshteyn-Ryzhik. Archived from the original on 2016-04-07. Retrieved 2016-04-06. […] In 1936 Iosif Moiseevich Ryzhik had a book entitled Special Functions published by the United Moscow-Leningrad Scientific-Technical Publisher. Ryzhik died in 1941, either during the siege of Leningrad, or fighting on the Russian front. In 1943, a table of formulas was published under Ryzhik's name by the Governmental Moscow-Leningrad Technical-Theoretical Publisher. The only thing the book seems to say about its origins is that it's responding to the shortage of books of formulas. It says that some integrals marked in it are original, but the others mostly come from three books—a French one from 1858, a German one from 1894, and an American one from 1922. It explains that effort went into the ordering of the integrals, and that some are simplified by using a new special function s equal to Gamma[x+y-1]/(Gamma[x]Gamma[y]). It then thanks three fairly prominent mathematicians from Moscow University. That's basically all we know about Ryzhik. […] Israil Solomonovitch Gradshteyn was born in 1899 in Odessa, and became a professor of mathematics at Moscow State University. But in 1948, he was fired as part of the Soviet attack on Jewish academics. To make money, he wanted to write a book. And so he decided to build on Ryzhik's tables. Apparently he never met Ryzhik. But he created a new edition, and by the third edition, the book was known as Gradshteyn-Ryzhik. […] Gradshteyn died of natural causes in Moscow in 1958. 
Though somehow there developed an urban legend that one of the authors of Gradshteyn-Ryzhik had been shot as a piece of anti-Semitism on the grounds that an error in their tables had caused an airplane crash. […] Meanwhile, starting around 1953, Yurii Geronimus, who had met Gradshteyn at Moscow State University, began helping with the editing of the tables, and actually added the appendices on special functions. Later on, several more people were involved. And when the tables were published in the West, there were arguments about royalties. But Geronimus [in 2005 was] still alive and well and living in Jerusalem, and Oleg phoned him […] 7. Bierens de Haan, David (1867). Nouvelles tables d'intégrales définies [New tables of definite integrals] (in French) (1 ed.). Leiden, Netherlands: P. Engels. Retrieved 2016-04-17. (NB. This book had a precursor in 1858 named Tables d'intégrales définies (published by C. G. van der Post in Amsterdam) with supplement Supplément aux tables d'intégrales définies in ca. 1864. Cited as BI (БХ) in GR.) 8. Láska, Václav Jan (1888–1894). Sammlung von Formeln der reinen und angewandten Mathematik [Compilation of formulae of pure and applied mathematics] (in German). Vol. 1–3 (1 ed.). Braunschweig, Germany: Friedrich Vieweg und Sohn. OCLC 24624148. Retrieved 2016-04-17. (NB. The book writes the author's name as Wenzel Láska. Cited as LA (Ла) in GR.) 9. Adams, Edwin Plimpton; Hippisley, Richard Lionel (1922). Greenhill, Alfred George (ed.). Smithsonian Mathematical Formulae and Tables of Elliptic Functions. Smithsonian Miscellaneous Collections. Vol. 74 (1 ed.). Washington D.C., USA: Smithsonian Institution. Retrieved 2016-04-17. (NB. Cited as AD (А) in GR.) 10. Archibald, Raymond Clare (October 1945). "Recent Mathematical Tables 219: I. M. Ryzhik, Tablitsy Integralov, Summ, Riadov i Proizvedeniĭ, Leningrad, OGIZ, 1943" (PDF). Mathematical Tables and Other Aids to Computation (MTAG). American Mathematical Society. 1 (12): 442–443. RMT:219. 
Retrieved 2016-03-04. 11. Hahn, Wolfgang (1950-07-05). "Rydzik, I. M.: Tabellen für Integrale, Summen, Reihen und Produkte. 2. Aufl. Moskau, Leningrad: Staatsverlag für techn.-theor. Lit., 1948". Zentralblatt für Mathematik (review) (in German). Berlin, Germany. 34 (1/3): 70. Zbl 0034.07001. Retrieved 2016-02-16. 12. Stepanov, Vyacheslav Vassilievich (1936). Preface. Специальные функции: Собрание формул и вспомогательные таблицы [Special functions: A collection of formulas and an auxiliary table]. By Ryzhik, Iosif Moiseevich (in Russian) (1 ed.). Moscow / Leningrad: Объединенное научно-техническое издательство, ONTI. Гострансизд-во. Глав. ред. общетехнич. лит-ры и номографии. Archived from the original on 2016-04-09. Retrieved 2016-04-09. (160 pages.) 13. Rosenfeld (Розенфельд), Boris Abramowitsch (Борис Абрамович) [in German] (2003). Об Исааке Моисеевиче Ягломе [About Isaac Moiseevich Yaglom]. Мат. просвещение (Mathematical Enlightenment) (in Russian): 25–28. Archived from the original on 2022-01-12. Retrieved 2022-01-12 – via math.ru. […] во время антисемитской кампании, известной как «борьба с космополитизмом», был уволен вместе с И. М. Гельфандом и И. С. Градштейном […] [during the antisemitic campaign known as the "fight against cosmopolitanism", he was fired along with I. M. Gelfand and I. S. Gradstein.] 14. Bruins, Evert Marie [in German] (1964-01-02). "Gradšteĭn, I. S., und I. M. Ryžik: Integral-, Summen-, Reihen- und Produkttafeln. (Tablicy integralov, summ, rjadov i proizvedeniĭ) 4. Aufl. überarb. unter Mitwirkung von Ju. V. Geronimus und M. Ju. Ceĭtlin. Moskau: Staatsverlag für physikalisch-mathematische Literatur, 1962". Zentralblatt für Mathematik (list) (in German). 103 (1): 38. Zbl 0103.03801. Retrieved 2016-02-16. 15. Wrench, Jr., John William (October 1960). "Reviews and Descriptions of Tables and Books 69: I. M. Ryshik & I. S. 
Gradstein, Summen-, Produkt- und Integral-Tafeln: Tables of Series, Products, and Integrals, VEB Deutscher Verlag der Wissenschaften, Berlin" (PDF). Mathematics of Computation. 14 (72): 381–382. doi:10.2307/2003905. JSTOR 2003905. RMT:69. Archived (PDF) from the original on 2016-03-16. Retrieved 2016-03-16. 16. Prachar, Karl (1959-09-15). "Ryshik, I. M. und I. S. Gradstein: Summen-, Produkt- und Integraltafeln / Tables of series, products and integrals. Berlin: VEB Deutscher Verlag der Wissenschaften, 1957". Zentralblatt für Mathematik (review) (in German). 80 (2): 337–338. Zbl 0080.33703. Archived from the original on 2016-02-17. Retrieved 2016-02-12. 17. Papp, Frank J. "Gradshteyn, I. S.; Ryzhik, I. M.: Tables of integrals, series, and products. Corr. and enl. ed. by Alan Jeffrey. Incorporating the 4th ed. by Yu. V. Geronimus and M. Yu. Tseytlin (M. Yu. Tsejtlin). Transl. from the Russian – New York – London – Toronto. Volumes 1, 2. German and English Transl". Zentralblatt für Mathematik und ihre Grenzgebiete (review). 521: 193. MR 0582453. Zbl 0521.33001. Retrieved 2016-02-16. 18. "Gradshteyn, I. S.; Ryzhik, I. M. Table of integrals, series, and products. Transl. from the Russian by Scripta Technica, Inc. 5th ed. Boston, MA: Academic Press, Inc. (1994)". Zentralblatt MATH. 1994. ISBN 9780122947551. Zbl 0918.65002. Retrieved 2016-02-16. 19. Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich (February 2007). Jeffrey, Alan; Zwillinger, Daniel (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica (7 ed.). Academic Press, Inc. ISBN 978-0-12-373637-6. MR 2360010. GR:11. Retrieved 2016-02-21. 20. Таблицы интегралов, ряда и продуктов [Table of Integrals, Series, and Products] (PDF) (in Russian) (7 ed.). BHV (БХВ-Петербург). 2011. ISBN 978-5-9775-0360-0. Archived (PDF) from the original on 2016-03-16. Retrieved 2016-03-16. 21. 
Gradshteyn, Izrail Solomonovich; Ryzhik, Iosif Moiseevich; Geronimus, Yuri Veniaminovich; Tseytlin, Michail Yulyevich; Jeffrey, Alan (2015) [October 2014]. Zwillinger, Daniel; Moll, Victor Hugo (eds.). Table of Integrals, Series, and Products. Translated by Scripta Technica (8 ed.). Academic Press, Inc. ISBN 978-0-12-384933-5. GR:12. Retrieved 2016-02-21. 22. Jeffrey, Alan (1995-01-01). Handbook of Mathematical Formulas and Integrals (1 ed.). Academic Press, Inc. ISBN 978-0-12-382580-3. 23. Jeffrey, Alan (2000-08-01). Handbook of Mathematical Formulas and Integrals (2 ed.). Academic Press, Inc. ISBN 978-0-12-382251-2. 24. Jeffrey, Alan (2004). Handbook of Mathematical Formulas and Integrals (3 ed.). Academic Press, Inc. (published 2003-11-21). ISBN 978-0-12-382256-7. Archived from the original on 2022-01-01. Retrieved 2016-03-01. 25. Jeffrey, Alan; Dai, Hui-Hui (2008-02-01). Handbook of Mathematical Formulas and Integrals (4 ed.). Academic Press, Inc. ISBN 978-0-12-374288-9. Retrieved 2016-03-01. (NB. Contents of companion CD-ROM: ) 26. Vardi, Ilan (April 1988). "Integrals: An Introduction to Analytic Number Theory" (PDF). American Mathematical Monthly. 95 (4): 308–315. doi:10.2307/2323562. JSTOR 2323562. Archived (PDF) from the original on 2016-03-15. Retrieved 2016-03-14. 27. Moll, Victor Hugo (April 2010) [2009-08-30]. "Seized Opportunities" (PDF). Notices of the American Mathematical Society. 57 (4): 476–484. Archived (PDF) from the original on 2016-04-08. Retrieved 2016-04-08. 28. Boros, George; Moll, Victor Hugo (2006) [September 2004]. Irresistible Integrals. Symbolics, Analysis and Experiments in the Evaluation of Integrals (reprinted 1st ed.). Cambridge University Press (CUP). p. xi. ISBN 978-0-521-79186-1. Retrieved 2016-02-22. (NB. This edition contains many typographical errors.) 29. Moll, Victor Hugo; Vignat, Christophe. "The integrals in Gradshteyn and Ryzhik. Part 29: Chebyshev polynomials" (PDF). Scientia. Series A: Mathematical Sciences. 
Archived from the original on 2016-03-13. Retrieved 2016-03-13.{{cite journal}}: CS1 maint: unfit URL (link) (NB. This paper discusses 19 GR entries: 1.14.2.1, 1.320, 2.18.1.9, 3.753.2, 3.771.8, 6.611, 7.341.1, 7.341.2, 7.342, 7.343.1, 7.344.1, 7.344.2, 7.346, 7.348, 7.349, 7.355.1, 7.355.2, 8.411.1, 8.921. ) 30. Amdeberhan, Tewodros; Dixit, Atul; Guan, Xiao; Jiu, Lin; Kuznetsov, Alexey; Moll, Victor Hugo; Vignat, Christophe. "The integrals in Gradshteyn and Ryzhik. Part 30: Trigonometric functions" (PDF). Scientia. Series A: Mathematical Sciences. Archived from the original on 2016-03-13. Retrieved 2016-03-13.{{cite journal}}: CS1 maint: unfit URL (link) (NB. This paper discusses 51 GR entries: 1.320.1, 1.320.3, 1.320.5, 1.320.7, 2.01.5, 2.01.6, 2.01.7, 2.01.8, 2.01.9, 2.01.10, 2.01.11, 2.01.12, 2.01.13, 2.01.14, 2.513.1, 2.513.2, 2.513.3, 2.513.4, 3.231.5, 3.274.2, 3.541.8, 3.611.3, 3.621.3, 3.621.4, 3.624.6, 3.631.16, 3.661.3, 3.661.4, 3.675.1, 3.675.2, 3.688.1, 3.721.1, 3.747.7, 3.761.4, 4.381.1, 4.381.2, 4.381.3, 4.381.4, 4.521.1, 6.671.7, 6.671.8, 7.244.1, 7.244.2, 7.531.1, 7.531.2, 8.230.1, 8.230.2, 8.361.7, 8.370, 8.910.2, 8.911.1. It also contains 1 errata for GR entry 3.541.8. ) 31. Gonzalez, Ivan; Kohl, Karen T.; Moll, Victor Hugo (2014-06-13) [2014-03-19]. "Evaluation of entries in Gradshteyn and Ryzhik employing the method of brackets" (PDF). Scientia. Series A: Mathematical Sciences (published 2014). 25: 65–84. Retrieved 2016-03-13. (NB. This paper is also incorporated into volume II.) 32. Moll, Victor Hugo (2006-11-06) [2006-07-21]. "The integrals in Gradshteyn and Ryzhik. Part 1: A family of logarithmic integrals" (PDF). Scientia. Series A: Mathematical Sciences (published 2007). 14: 1–6. Archived from the original (PDF) on 2017-02-02. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 10 GR entries: 3.419.2, 3.419.3, 3.419.4, 3.419.5, 3.419.6, 4.232.3, 4.261.4, 4.262.3, 4.263.1, 4.264.3. ) 33. 
Moll, Victor Hugo (2006-11-06) [2006-06-27]. "The integrals in Gradshteyn and Ryzhik. Part 2: Elementary logarithmic integrals" (PDF). Scientia. Series A: Mathematical Sciences (published 2007). 14: 7–15. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 5 GR entries: 4.231.1, 4.231.5, 4.231.6, 4.232.1, 4.232.2. ) 34. Moll, Victor Hugo (2007-01-16) [2006-12-27]. "The integrals in Gradshteyn and Ryzhik. Part 3: Combinations of logarithms and exponentials" (PDF). Scientia. Series A: Mathematical Sciences (published 2007). 15: 31–36. arXiv:0705.0175. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 8 GR entries: 4.331.1, 4.335.1, 4.335.3, 4.352.1, 4.352.2, 4.352.3, 4.352.4, 4.353.2. ) 35. Moll, Victor Hugo (2007-01-16) [2006-12-27]. "The integrals in Gradshteyn and Ryzhik. Part 4: The gamma function" (PDF). Scientia. Series A: Mathematical Sciences (published 2007). 15: 37–46. arXiv:0705.0179. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 44 GR entries: 3.324.2, 3.326.1, 3.326.2, 3.328, 3.351.3, 3.371, 3.381.4, 3.382.2, 3.434.1, 3.434.2, 3.461.2, 3.461.3, 3.462.9, 3.471.3, 3.478.1, 3.478.2, 3.481.1, 3.481.2, 4.215.1, 4.215.2, 4.215.3, 4.215.4, 4.229.1, 4.229.3, 4.229.4, 4.269.3, 4.272.5, 4.272.6, 4.272.7, 4.325.8, 4.325.11, 4.325.12, 4.331.1, 4.333, 4.335.1, 4.335.3, 4.355.1, 4.355.3, 4.355.4, 4.358.2, 4.358.3, 4.358.4, 4.369.1, 4.369.2. ) 36. Amdeberhan, Tewodros; Medina, Luis A.; Moll, Victor Hugo (2007-01-16) [2006-12-27]. "The integrals in Gradshteyn and Ryzhik. Part 5: Some trigonometric integrals" (PDF). Scientia. Series A: Mathematical Sciences (published 2007). 15: 47–60. arXiv:0705.2379. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 10 GR entries: 3.621.1, 3.621.3, 3.621.4, 3.761.11, 3.764.1, 3.764.2, 3.821.3, 3.821.14, 3.822.1, 3.822.2. ) 37. Moll, Victor Hugo (2007-10-31) [2007-08-31]. "The integrals in Gradshteyn and Ryzhik. Part 6: The beta function" (PDF). Scientia. 
Series A: Mathematical Sciences (published 2008). 16: 9–24. arXiv:0707.2121. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 65 GR entries: 3.191.1, 3.191.2, 3.191.3, 3.192.1, 3.192.2, 3.193, 3.194.3, 3.194.4, 3.194.6, 3.194.7, 3.196.2, 3.196.3, 3.196.4, 3.196.5, 3.216.1, 3.216.2, 3.217, 3.218, 3.221.1, 3.221.2, 3.222.2, 3.223.1, 3.223.2, 3.223.3, 3.224, 3.225.1, 3.225.2, 3.225.3, 3.226.1, 3.226.2, 3.241.2, 3.241.4, 3.241.5, 3.248.1, 3.248.2, 3.248.3, 3.249.1, 3.249.2, 3.249.5, 3.249.7, 3.249.8, 3.251.1, 3.251.2, 3.251.3, 3.251.4, 3.251.5, 3.251.6, 3.251.8, 3.251.9, 3.251.10, 3.251.11, 3.267.1, 3.267.2, 3.267.3, 3.311.3, 3.311.9, 3.312.1, 3.313.1, 3.313.2, 3.314, 3.457.3, 4.251.1, 4.273, 4.275.1, 4.321.1. ) 38. Amdeberhan, Tewodros; Moll, Victor Hugo (2007-10-31) [2007-08-21]. "The integrals in Gradshteyn and Ryzhik. Part 7: Elementary examples" (PDF). Scientia. Series A: Mathematical Sciences (published 2008). 16: 25–39. arXiv:0707.2122. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 30 GR entries: 2.321.1, 2.321.2, 2.322.1, 2.322.2, 2.322.3, 2.322.4, 3.195, 3.248.4, 3.248.6, 3.249.1, 3.249.6, 3.252.1, 3.252.2, 3.252.3, 3.268.1, 3.310, 3.311.1, 3.351.1, 3.351.2, 3.351.7, 3.351.8, 3.351.9, 3.353.4, 3.411.19, 3.411.20, 3.471.1, 3.622.3, 3.622.4, 4.212.7, 4.222.1. ) 39. Moll, Victor Hugo; Rosenberg, Jason; Straub, Armin; Whitworth, Pat (2007-10-31) [2007-08-31]. "The integrals in Gradshteyn and Ryzhik. Part 8: Combinations of powers, exponentials and logarithms" (PDF). Scientia. Series A: Mathematical Sciences (published 2008). 16: 41–50. arXiv:0707.2123. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 7 GR entries: 3.351.1, 4.331.1, 4.351.1, 4.351.2, 4.353.3, 4.362.1, 8.350.1. ) 40. Amdeberhan, Tewodros; Moll, Victor Hugo; Rosenberg, Jason; Straub, Armin; Whitworth, Pat (2008-11-18) [2007-11-29]. "The integrals in Gradshteyn and Ryzhik. Part 9: Combinations of logarithms, rational and trigonometric functions" (PDF). 
Scientia. Series A: Mathematical Sciences (published 2009). 17: 27–44. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 45 GR entries: 2.148.4, 3.747.7, 4.223.1, 4.223.2, 4.224.1, 4.224.2, 4.224.3, 4.224.4, 4.224.5, 4.224.6, 4.225.1, 4.225.2, 4.227.1, 4.227.2, 4.227.3, 4.227.9, 4.227.10, 4.227.11, 4.227.13, 4.227.14, 4.227.15, 4.231.1, 4.231.2, 4.231.3, 4.231.4, 4.231.8, 4.231.9, 4.231.11, 4.231.12, 4.231.13, 4.231.14, 4.231.15, 4.231.19, 4.231.20, 4.233.1, 4.261.8, 4.291.1, 4.291.2, 4.295.5, 4.295.6, 4.295.11, 4.521.1, 4.531.1, 8.366.3, 8.380.3. ) 41. Medina, Luis A.; Moll, Victor Hugo (2008-11-18) [2007-11-29]. "The integrals in Gradshteyn and Ryzhik. Part 10: The digamma function" (PDF). Scientia. Series A: Mathematical Sciences (published 2009). 17: 45–66. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 76 GR entries: 3.219, 3.231.1, 3.231.3, 3.231.5, 3.231.6, 3.233, 3.234.1, 3.235, 3.244.2, 3.244.3, 3.265, 3.268.2, 3.269.1, 3.269.3, 3.311.5, 3.311.6, 3.311.7, 3.311.8, 3.311.10, 3.311.11, 3.311.12, 3.312.2, 3.316, 3.317.1, 3.317.2, 3.427.1, 3.427.2, 3.429, 3.434.2, 3.435.3, 3.435.4, 3.442.3, 3.457.1, 3.463, 3.467, 3.469.2, 3.469.3, 3.471.14, 3.475.1, 3.475.2, 3.475.3, 3.476.1, 3.476.2, 4.241.1, 4.241.2, 4.241.3, 4.241.4, 4.241.5, 4.241.7, 4.241.8, 4.241.9, 4.241.10, 4.241.11, 4.243, 4.244.1, 4.244.2, 4.244.3, 4.245.1, 4.245.2, 4.246, 4.247.1, 4.247.2, 4.251.4, 4.253.1, 4.254.1, 4.254.6, 4.256, 4.271.15, 4.275.2, 4.281.1, 4.281.4, 4.281.5, 4.293.8, 4.293.13, 4.331.1, 8.371.2. ) 42. Boyadzhiev, Khristo N.; Medina, Luis A.; Moll, Victor Hugo (2009-03-16) [2008-07-02]. "The integrals in Gradshteyn and Ryzhik. Part 11: The incomplete beta function" (PDF). Scientia. Series A: Mathematical Sciences (published 2009). 18: 61–75. Retrieved 2016-03-14. (NB. 
This paper (from volume I) discusses 52 GR entries: 3.222.1, 3.231.2, 3.231.4, 3.241.1, 3.244.1, 3.249.4, 3.251.7, 3.269.2, 3.311.2, 3.311.13, 3.522.4, 3.541.6, 3.541.7, 3.541.8, 3.622.2, 3.623.2, 3.623.3, 3.624.1, 3.635.1, 3.651.1, 3.651.2, 3.656.1, 3.981.3, 4.231.1, 4.231.6, 4.231.11, 4.231.12, 4.231.14, 4.231.19, 4.231.20, 4.234.1, 4.234.2, 4.251.3, 4.254.4, 4.261.2, 4.261.6, 4.261.11, 4.262.1, 4.262.4, 4.263.2, 4.264.1, 4.265, 4.266.1, 4.271.1, 4.271.16, 8.361.7, 8.365.4, 8.366.3, 8.366.11, 8.366.12, 8.366.13, 8.370. ) 43. Moll, Victor Hugo; Posey, Ronald A. (2009-03-16) [2008-07-02]. "The integrals in Gradshteyn and Ryzhik. Part 12: Some logarithmic integrals" (PDF). Scientia. Series A: Mathematical Sciences (published 2009). 18: 77–84. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 6 GR entries: 4.231.1, 4.233.1, 4.233.2, 4.233.3, 4.233.4, 4.261.8. ) 44. Moll, Victor Hugo (2010-10-10) [2009-07-07]. "The integrals in Gradshteyn and Ryzhik. Part 13: Trigonometric forms of the beta function" (PDF). Scientia. Series A: Mathematical Sciences (published 2010). 19: 91–96. arXiv:1004.2439. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 21 GR entries: 3.621.1, 3.621.2, 3.621.3, 3.621.4, 3.621.5, 3.621.6, 3.621.7, 3.622.1, 3.623.1, 3.624.2, 3.624.3, 3.624.4, 3.624.5, 3.625.1, 3.625.2, 3.625.3, 3.625.4, 3.626.1, 3.626.2, 3.627, 3.628. ) 45. Amdeberhan, Tewodros; Moll, Victor Hugo (2010-10-10) [2009-07-07]. "The integrals in Gradshteyn and Ryzhik. Part 14: An elementary evaluation of entry 3.411.5" (PDF). Scientia. Series A: Mathematical Sciences (published 2010). 19: 97–103. arXiv:1004.2440. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 1 GR entry: 3.411.5. ) 46. Albano, Matthew; Amdeberhan, Tewodros; Beyerstedt, Erin; Moll, Victor Hugo (2010-07-18) [2010-04-20]. "The integrals in Gradshteyn and Ryzhik. Part 15: Frullani integrals" (PDF). Scientia. Series A: Mathematical Sciences (published 2010). 19: 113–119. 
arXiv:1005.2940. Retrieved 2016-03-14. (NB. This paper (from volume I) discusses 12 GR entries: 3.232, 3.329, 3.412.1, 3.434.2, 3.436, 3.476.1, 3.484, 4.267.8, 4.297.7, 4.319.3, 4.324.2, 4.536.2. ) 47. Boettner, Stefan Thomas; Moll, Victor Hugo (2010-07-21) [2010-03-22]. "The integrals in Gradshteyn and Ryzhik. Part 16: Complete elliptic integrals" (PDF). Scientia. Series A: Mathematical Sciences (published 2010). 20: 45–59. arXiv:1005.2941. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 36 GR entries: 1.421.1, 3.166.16, 3.166.18, 3.721.1, 3.821.7, 3.841.1, 3.841.2, 3.841.3, 3.841.4, 3.842.3, 3.842.4, 3.844.1, 3.844.2, 3.844.3, 3.844.4, 3.844.5, 3.844.6, 3.844.7, 3.844.8, 3.846.1, 3.846.2, 3.846.3, 3.846.4, 3.846.5, 3.846.6, 3.846.7, 3.846.8, 4.242.1, 4.395.1, 4.414.1, 4.432.1, 4.522.4, 4.522.5, 4.522.6, 4.522.7, 8.129.1. ) 48. Amdeberhan, Tewodros; Boyadzhiev, Khristo N.; Moll, Victor Hugo (2010-07-21) [2010-03-22]. "The integrals in Gradshteyn and Ryzhik. Part 17: The Riemann zeta function" (PDF). Scientia. Series A: Mathematical Sciences (published 2010). 20: 61–71. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 36 GR entries: 3.333.1, 3.333.2, 3.411.1, 3.411.2, 3.411.3, 3.411.4, 3.411.6, 3.411.7, 3.411.8, 3.411.9, 3.411.10, 3.411.11, 3.411.12, 3.411.13, 3.411.14, 3.411.15, 3.411.17, 3.411.18, 3.411.21, 3.411.22, 3.411.25, 3.411.26, 4.231.1, 4.231.2, 4.261.12, 4.261.13, 4.262.1, 4.262.2, 4.262.5, 4.262.6, 4.264.1, 4.264.2, 4.266.1, 4.266.2, 4.271.1, 4.271.2. ) 49. Koutschan, Christoph; Moll, Victor Hugo (2010-08-21) [2010-04-26]. "The integrals in Gradshteyn and Ryzhik. Part 18: Some automatic proofs" (PDF). Scientia. Series A: Mathematical Sciences (published 2010). 20: 93–111. Archived (PDF) from the original on 2016-03-14. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 7 GR entries: 2.148.3, 3.953, 4.535.1, 6.512.1, 7.322, 7.349, 7.512.5. ) 50. 
Albano, Matthew; Amdeberhan, Tewodros; Beyerstedt, Erin; Moll, Victor Hugo (2011-04-13) [2010-12-23]. "The integrals in Gradshteyn and Ryzhik. Part 19: The error function" (PDF). Scientia. Series A: Mathematical Sciences (published 2011). 21: 25–42. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 42 GR entries: 3.321.1, 3.321.2, 3.321.3, 3.321.4, 3.321.5, 3.321.6, 3.321.7, 3.322.1, 3.322.2, 3.323.1, 3.323.2, 3.325, 3.361.1, 3.361.2, 3.361.3, 3.362.1, 3.362.2, 3.363.1, 3.363.2, 3.369, 3.382.4, 3.461.5, 3.462.5, 3.462.6, 3.462.7, 3.462.8, 3.464, 3.466.1, 3.466.2, 3.471.15, 3.471.16, 3.472.1, 6.281.1, 6.282.1, 6.283.1, 6.283.2, 6.294.1, 8.258.1, 8.258.2, 8.258.3, 8.258.4, 8.258.5. ) 51. Kohl, Karen T.; Moll, Victor Hugo (2011-04-13) [2010-12-23]. "The integrals in Gradshteyn and Ryzhik. Part 20: Hypergeometric functions" (PDF). Scientia. Series A: Mathematical Sciences (published 2011). 21: 43–54. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 26 GR entries: 3.194.1, 3.194.2, 3.194.5, 3.196.1, 3.197.1, 3.197.2, 3.197.3, 3.197.4, 3.197.5, 3.197.6, 3.197.7, 3.197.8, 3.197.9, 3.197.10, 3.197.11, 3.197.12, 3.198, 3.199, 3.227.1, 3.254.1, 3.254.2, 3.259.2, 3.312.3, 3.389.1, 9.121.4, 9.131.1. ) 52. Boyadzhiev, Khristo N.; Moll, Victor Hugo (2012-10-20) [2012-05-15]. "The integrals in Gradshteyn and Ryzhik. Part 21: Hyperbolic functions" (PDF). Scientia. Series A: Mathematical Sciences (published 2012). 22: 109–127. Archived (PDF) from the original on 2016-03-14. Retrieved 2016-03-14. (NB. 
This paper (from volume II) discusses 75 GR entries: 2.423.9, 3.231.2, 3.231.5, 3.265, 3.511.1, 3.511.2, 3.511.4, 3.511.5, 3.511.8, 3.512.1, 3.512.2, 3.521.1, 3.521.2, 3.522.3, 3.522.6, 3.522.8, 3.522.10, 3.523.2, 3.523.3, 3.523.4, 3.523.5, 3.523.6, 3.523.7, 3.523.8, 3.523.9, 3.523.10, 3.523.11, 3.523.12, 3.524.2, 3.524.4, 3.524.6, 3.524.8, 3.524.9, 3.524.10, 3.524.11, 3.524.12, 3.524.13, 3.524.14, 3.524.15, 3.524.16, 3.524.17, 3.524.18, 3.524.19, 3.524.20, 3.524.21, 3.524.22, 3.524.23, 3.525.1, 3.525.2, 3.525.3, 3.525.4, 3.525.5, 3.525.6, 3.525.7, 3.525.8, 3.527.1, 3.527.2, 3.527.3, 3.527.4, 3.527.5, 3.527.6, 3.527.7, 3.527.8, 3.527.9, 3.527.10, 3.527.11, 3.527.12, 3.527.13, 3.527.14, 3.527.15, 3.527.16, 3.543.2, 4.113.3, 4.231.12, 8.365.9. ) 53. Glasser, Larry; Kohl, Karen T.; Koutschan, Christoph; Moll, Victor Hugo; Straub, Armin (2012-10-20) [2012-05-15]. "The integrals in Gradshteyn and Ryzhik. Part 22: Bessel-K functions" (PDF). Scientia. Series A: Mathematical Sciences (published 2012). 22: 129–151. Archived (PDF) from the original on 2016-03-14. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 36 GR entries: 3.323.3, 3.324.1, 3.337.1, 3.364.3, 3.365.2, 3.366.2, 3.372, 3.383.3, 3.383.8, 3.387.1, 3.387.3, 3.387.6, 3.388.2, 3.389.4, 3.391, 3.395.1, 3.461.6, 3.461.7, 3.461.8, 3.461.9, 3.462.20, 3.462.21, 3.462.22, 3.462.23, 3.462.24, 3.462.25, 3.469.1, 3.471.4, 3.471.8, 3.471.9, 3.471.12, 3.478.4, 3.479.1, 3.547.2, 3.547.4, 8.432.6. ) 54. Medina, Luis A.; Moll, Victor Hugo (2012-06-25) [2012-02-01]. "The integrals in Gradshteyn and Ryzhik. Part 23: Combination of logarithms and rational functions" (PDF). Scientia. Series A: Mathematical Sciences (published 2012). 23: 1–18. Retrieved 2016-03-14. (NB. 
This paper (from volume II) discusses 54 GR entries: 3.417.1, 3.417.2, 4.212.7, 4.224.5, 4.224.6, 4.225.1, 4.225.2, 4.231.1, 4.231.2, 4.231.8, 4.231.9, 4.231.10, 4.231.11, 4.231.16, 4.231.17, 4.231.18, 4.233.1, 4.234.3, 4.234.6, 4.234.7, 4.234.8, 4.262.7, 4.262.8, 4.262.9, 4.291.1, 4.291.2, 4.291.3, 4.291.4, 4.291.5, 4.291.6, 4.291.7, 4.291.8, 4.291.9, 4.291.10, 4.291.11, 4.291.12, 4.291.13, 4.291.14, 4.291.15, 4.291.16, 4.291.17, 4.291.18, 4.291.19, 4.291.20, 4.291.21, 4.291.22, 4.291.23, 4.291.24, 4.291.25, 4.291.26, 4.291.27, 4.291.28, 4.291.29, 4.291.30. ) 55. McInturff, Kim; Moll, Victor Hugo (2012-07-28) [2012-02-01]. "The integrals in Gradshteyn and Ryzhik. Part 24: Polylogarithm functions" (PDF). Scientia. Series A: Mathematical Sciences (published 2012). 23: 45–51. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 10 GR entries: 3.418.1, 3.514.1, 3.531.1, 3.531.2, 3.531.3, 3.531.4, 3.531.5, 3.531.6, 3.531.7, 9.513.1. ) 56. Moll, Victor Hugo (2012-07-28) [2012-02-01]. "The integrals in Gradshteyn and Ryzhik. Part 25: Evaluation by series" (PDF). Scientia. Series A: Mathematical Sciences (published 2012). 23: 53–65. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 18 GR entries: 3.194.8, 3.311.4, 3.342, 3.451.1, 3.451.2, 3.466.3, 3.747.7, 4.221.1, 4.221.2, 4.221.3, 4.251.5, 4.251.6, 4.269.1, 4.269.2, 8.339.1, 8.339.2, 8.365.4, 8.366.3. ) 57. Boyadzhiev, Khristo N.; Moll, Victor Hugo (2015-01-30) [2014-09-19]. "The integrals in Gradshteyn and Ryzhik. Part 26: The exponential integral" (PDF). Scientia. Series A: Mathematical Sciences (published 2015). 26: 19–30. Retrieved 2016-03-14. (NB. 
This paper (from volume II) discusses 41 GR entries: 3.327, 3.351.4, 3.351.5, 3.351.6, 3.352.1, 3.352.2, 3.352.3, 3.352.4, 3.352.5, 3.352.6, 3.353.1, 3.353.2, 3.353.3, 3.353.4, 3.353.5, 3.354.1, 3.354.2, 3.354.3, 3.354.4, 3.355.1, 3.355.2, 3.355.3, 3.355.4, 3.356.1, 3.356.2, 3.356.3, 3.356.4, 3.357.1, 3.357.2, 3.357.3, 3.357.4, 3.357.5, 3.357.6, 3.358.1, 3.358.2, 3.358.3, 3.358.4, 4.211.1, 4.211.2, 4.212.1, 4.212.2. ) 58. Medina, Luis A.; Moll, Victor Hugo (2015-01-30) [2014-09-16]. "The integrals in Gradshteyn and Ryzhik. Part 27: More logarithmic examples" (PDF). Scientia. Series A: Mathematical Sciences (published 2015). 26: 31–47. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 37 GR entries: 3.194.3, 3.231.1, 3.231.5, 3.231.6, 3.241.3, 3.621.3, 3.621.5, 4.224.2, 4.231.1, 4.231.2, 4.231.8, 4.233.5, 4.234.4, 4.234.5, 4.235.1, 4.235.2, 4.235.3, 4.235.4, 4.241.5, 4.241.6, 4.241.7, 4.241.11, 4.251.1, 4.251.2, 4.252.1, 4.252.2, 4.252.3, 4.252.4, 4.254.2, 4.254.3, 4.255.2, 4.255.3, 4.257.1, 4.261.9, 4.261.15, 4.261.16, 4.297.8. ) 59. Dixit, Atul; Moll, Victor Hugo (2015-02-01) [2014-09-30]. "The integrals in Gradshteyn and Ryzhik. Part 28: The confluent hypergeometric function and Whittaker functions" (PDF). Scientia. Series A: Mathematical Sciences (published 2015). 26: 49–61. Retrieved 2016-03-14. (NB. This paper (from volume II) discusses 17 GR entries: 7.612.1, 7.612.2, 7.621.1, 7.621.2, 7.621.3, 7.621.4, 7.621.5, 7.621.6, 7.621.7, 7.621.8, 7.621.9, 7.621.10, 7.621.11, 7.621.12, 8.334.2, 8.703, 9.211.4. ) 60. Moll, Victor Hugo (2012). "Index of the papers in Revista Scientia with formulas from GR". Retrieved 2016-02-17. 61. Moll, Victor Hugo (2014-10-01). Special Integrals of Gradshteyn and Ryzhik: the Proofs. Monographs and Research Notes in Mathematics. Vol. I (1 ed.). Chapman and Hall/CRC Press/Taylor & Francis Group, LLC (published 2014-11-12). ISBN 978-1-4822-5651-2. Retrieved 2016-02-12. (NB. 
This volume compiles Scientia papers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 from issues 14 to 19.) 62. Moll, Victor Hugo (2015-08-24). Special Integrals of Gradshteyn and Ryzhik: the Proofs. Monographs and Research Notes in Mathematics. Vol. II (1 ed.). Chapman and Hall/CRC Press/Taylor & Francis Group, LLC (published 2015-10-27). ISBN 978-1-4822-5653-6. Retrieved 2016-02-12. (NB. This volume compiles 14 Scientia articles from issues 20, 21, 22, 23, 25 and 26 including papers 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28 and one unnumbered paper.) 63. "Ryžik, I. M.: Tafeln von Integralen, Summen, Reihen und Produkten. Moskau-Leningrad: Staatsverlag für technisch-theoretische Literatur, 1943". Zentralblatt für Mathematik (list) (in German). 60 (1): 123. 1957-04-01. Zbl 0060.12305. Retrieved 2016-02-16. 64. Prachar, Karl (1952-09-01). "Ryžik, I. M. und I. S. Gradštejn: Tafeln von Integralen, Summen, Reihen und Produkten. 3. umgearb. Aufl. Moskau-Leningrad: Staatsverlag für technisch-theoretische Literatur, 1951". Zentralblatt für Mathematik (review) (in German). 44 (1/5): 133. Zbl 0044.13303. Retrieved 2016-02-16. 65. Wrench, Jr., John William (October 1960). "Table Errata 293: I. M. Ryshik & I. S. Gradstein, Summen-, Produkt- und Integral-Tafeln: Tables of Series, Products, and Integrals, Deutscher Verlag der Wissenschaften, Berlin, 1957" (PDF). Mathematics of Computation. 14 (72): 401–403. JSTOR 2003934. MTE:293. Archived (PDF) from the original on 2016-03-16. Retrieved 2016-03-16. 66. Градштейн, И. С.; Рыжик, И. М. (1971). "Errata in 4th edition". In Геронимус, Ю. В.; Цейтлин, М. Ю́. (eds.). Таблицы интегралов, сумм, рядов и произведений (in Russian) (5 ed.). Nauka (Наука). pp. 1101–1108. (NB. The 8-page errata list in later print runs of the fourth Russian edition affected 189 table entries.) 67. Ryshik-Gradstein: Summen-, Produkt- und Integral-Tafeln: Berichtigungen zur 1. Auflage (in German). 
Berlin, Germany: VEB Deutscher Verlag der Wissenschaften. 1962. MR 0175273. (NB. This brochure was available free of charge from the publisher on request.) 68. "Ryshik-Gradstein: Tafeln Summen Produkte Integrale: Berichtigungen zur 1. Auflage". L'Enseignement Mathématique. Bulletin Bibliographique: Livres Nouveaux (in French and German). 9: 5. 1963. Retrieved 2016-03-04. Die Berichtigung wird den Interessenten auf Anfrage kostenlos durch den Verlag geliefert. [The publisher supplies the corrections to interested parties free of charge on request.] 69. Rodman, Richard B. (January 1963). "Table Errata 326: I. M. Ryshik & I. S. Gradstein, Summen-, Produkt- und Integral-Tafeln, Deutscher Verlag der Wissenschaften, Berlin, 1957" (PDF). Mathematics of Computation. 17 (81): 100–103. JSTOR 2003754. MTE:326. Archived (PDF) from the original on 2016-03-16. Retrieved 2016-03-16. 70. Schmieg, Glenn M. (July 1966). "Errata 392: I. M. Ryshik & I. S. Gradstein, Summen-, Produkt- und Integral-Tafeln: Tables of Series, Products, and Integrals, VEB Deutscher Verlag der Wissenschaften, Berlin, 1957" (PDF). Mathematics of Computation. 20 (95): 468–471. doi:10.2307/2003630. JSTOR 2003630. MTE:392. Archived (PDF) from the original on 2016-03-16. Retrieved 2016-03-16. 71. Filliben, James J. [at Wikidata] (January 1970). "Table Errata 456: I. M. Ryshik & I. S. Gradstein, Summen-, Produkt- und Integral-Tafeln, VEB Deutscher Verlag der Wissenschaften, Berlin, 1957" (PDF). Mathematics of Computation. 24 (109): 239–242. JSTOR 2004912. MTE:456. Archived (PDF) from the original on 2016-03-16. Retrieved 2016-03-16. 72. MacKinnon, Robert F. (January 1972). "Table Errata 486: I. S. Gradshteyn & I. M. Ryzhik, Table of Integrals, Series, and Products, 4th ed., Academic Press, New York, 1965" (PDF). Mathematics of Computation. 26 (117): 305–307. doi:10.1090/s0025-5718-1972-0415970-6. JSTOR 2004754. MR 0415970. MTE:486. Retrieved 2016-03-04. 73. "Gradshtejn, I. S.; Ryzhik, I. M.: Summen-, Produkt- und Integraltafeln. Band 1, 2. Deutsch und englisch. Übers. 
Is Lebesgue's "universal covering" problem still open?

The following problem has been attributed to Lebesgue. Let "set" denote any subset of the Euclidean plane. What is the greatest lower bound of the diameter of any set which contains a subset congruent to every set of diameter 1? There are a number of interesting geometric problems of this type. Is it possible that some of them may be difficult to solve because the solution is a real irrational number which (when expressed in decimal form) is not even recursive, and so cannot be approximated in the usual way?

open-problems

asked by Garabed Gulbenkian

– Are you interested in minimal diameter of a universal cover or minimal area? (Fedor Petrov)
– The Quanta Magazine article about recent (2018) progress is nice. (Mark S)

The question is still open. There are at least two versions. The most popular asks for the minimal-area convex subset of the plane such that every set with diameter 1 can be translated, rotated and/or reflected to fit inside it. Here is the best lower bound I know:

Peter Brass and Mehrbod Sharifi, A lower bound for Lebesgue's universal cover problem, Int. Jour. Comp. Geom. & Appl. 15 (2005), 537--544.

Their lower bound is 0.832, obtained through a rigorous computer-aided search for the convex set with the smallest area that contains a circle, equilateral triangle and pentagon of diameter 1. The best upper bound I'm 100% sure of is 0.8441153, proved here:

John Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem, Jour. Comp. Geom. 16 (2015), 288-299; arXiv:1502.01251.

Our paper also reviews the history of this problem, which is rather interesting. In 1920, Pál noted that a regular hexagon circumscribed around a circle of diameter 1 does the job.
This has area
$$ \sqrt{3}/2 \approx 0.86602540 $$
But in the same paper, he showed you could safely cut off two corners of this hexagon, defined by fitting a dodecagon circumscribed around the circle inside the hexagon. This brought the upper bound down to
$$ 2 - 2/\sqrt{3} \approx 0.84529946 $$
He guessed this solution was optimal. In 1936, Sprague sliced tiny pieces off Pál's proposed solution, bringing the upper bound down to
$$ \sim 0.84413770 $$
(Image from Hansen's paper, added by J.O'Rourke.)

The big hexagon above is Pál's original solution. He then inscribed a regular dodecagon inside this, and showed that you can remove two of the resulting corners, say $B_1B_2B$ and $F_1F_2F,$ and get a smaller universal covering. But Sprague noticed that near $D$ you can also remove the part outside the circle with radius 1 centered at $B_1$, as well as the part outside the circle with radius 1 centered at $F_2.$

In 1975, Hansen showed you could slice very tiny corners off Sprague's solution, each of which reduces the area by $6 \cdot 10^{-18}$. In a later paper, Hansen did better:

H. Hansen, Small universal covers for sets of unit diameter, Geometriae Dedicata 42 (1992), 205--213.

He again sliced two corners off Sprague's solution, but now one reduces the area by a whopping $4 \cdot 10^{-11}$, while the other, he claimed, reduces the area by $6 \cdot 10^{-18}$. One author, in a parody of the usual optimistic prophecies of accelerating progress, commented that

...progress on this question, which has been painfully slow in the past, may be even more painfully slow in the future.

In 1980, Duff considered nonconvex subsets of the plane with least area such that every set with diameter one can be rotated and translated to fit inside it. He found one whose area is smaller than that of the best known convex solution:

G. F. D. Duff, A smaller universal cover for sets of unit diameter, C. R. Math. Acad. Sci. 2 (1980), 37--42.
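Pál's two areas quoted above are easy to verify numerically. The following short Python check is illustrative only, not part of the original answer:

```python
import math

# Regular hexagon circumscribed around a circle of diameter 1 (inradius r = 1/2):
# its area is 2*sqrt(3)*r^2, which equals sqrt(3)/2.
r = 0.5
hexagon_area = 2 * math.sqrt(3) * r**2
print(f"Pal's hexagon cover:    {hexagon_area:.8f}")     # 0.86602540

# Cutting two corners down to the edges of an inscribed regular dodecagon
# leaves area 2 - 2/sqrt(3).
corner_cut_area = 2 - 2 / math.sqrt(3)
print(f"Pal's corner-cut cover: {corner_cut_area:.8f}")  # 0.84529946
```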
In 2015, Philip Gibbs, Karine Bagdasaryan and I wrote a paper on this topic, mentioned above. We found a new, smaller universal cover, and noticed that Hansen had made a mistake in his 1992 paper. Hansen claimed to have removed pieces of area $4\cdot 10^{-11}$ and $6 \cdot 10^{-18}$ from Sprague's universal cover, but the actual areas removed were $3.7507 \cdot 10^{-11}$ and $8.4460 \cdot 10^{-21}$. So, Hansen's universal covering has area
$$ \sim 0.844137708416 $$
Our new, smaller universal covering had area
$$ \sim 0.8441153 $$
This is about $2.2 \cdot 10^{-5}$ smaller than Hansen's. To calculate the area of our universal cover Philip used a Java program, which is available online. Greg Egan checked our work using high-precision calculations in Mathematica, which are also available online. See the references in our paper for these programs and also a Java applet that Gibbs created to visualize Hansen's universal covering. It's fun to look at the smallest sliver Hansen removed, because it's 30 million times longer than it is wide!

More recently Philip Gibbs wrote a paper claiming to have an even smaller universal covering, with area
$$ \sim 0.8440935944 $$
Philip Gibbs, An upper bound for Lebesgue's universal covering problem, 22 January 2018.

Gibbs is a master at this line of work, but I must admit I haven't checked all the details, so it would be good for some people to carefully check them. I've written a slightly more detailed account of the Lebesgue universal covering problem with some pictures here:

J. Baez, Lebesgue's universal covering problem (part 1), Azimuth, 8 December 2013.
J. Baez, Lebesgue's universal covering problem (part 2), Azimuth, 3 February 2015.
J. Baez, Lebesgue's universal covering problem (part 3), Azimuth, 7 October 2018.

If anyone knows of further progress on this puzzle, please let me know!
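For perspective on how small these successive improvements are, here is an illustrative Python comparison using the record areas quoted in this answer (the snippet is mine, not from the thread):

```python
# Record areas for convex universal covers, as quoted above.
hansen_1992 = 0.844137708416   # Hansen's cover, area as recomputed in the 2015 paper
bbg_2015    = 0.8441153        # Baez-Bagdasaryan-Gibbs 2015
gibbs_2018  = 0.8440935944     # Gibbs 2018 (details not fully checked)

print(f"Hansen -> 2015 improvement: {hansen_1992 - bbg_2015:.3e}")  # ~2.2e-05
print(f"2015 -> 2018 improvement:   {bbg_2015 - gibbs_2018:.3e}")
```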
answered by John Baez

– 15 October 2018: I've updated my answer to discuss the new improved upper bound found by Philip Gibbs. (John Baez)
– OP asks about minimal diameter (not minimal area), which is probably not Lebesgue's original problem, but still interesting. Do you know anything about this question?
– Philip Gibbs says this other problem is harder; I don't know anything about it.

The problem has been studied for various groups $G$ of isometries of $\mathbb R^n$. A set $K\subset \mathbb R^n$ is called a $G$-universal cover iff every set of diameter 1 is contained in $gK$ for some $g\in G$. V. Makeev proved that the mean width of a $T_n$-universal cover is greater than or equal to $\sqrt{2n/(2n+1)}$, where $T_n$ is the group of translations of $\mathbb R^n$. For $n=2$ the estimate is sharp; the perimeter of a $T_2$-universal cover is $\geq 2\pi/\sqrt{3}$. M. Kovalev obtained a rather explicit description of all minimal $D_2$-universal covers, where $D_2$ is the group of all isometries of $\mathbb R^2$.

Theorem. Every minimal universal $D_2$-cover $K$ is star-shaped. There is a polar coordinate system (with the centre in $K$) such that $$\partial K=\{(\phi,\rho(\phi)):\ 0 < \phi\leq 2\pi\},$$ where $\rho=\rho(\phi)$ is Lipschitz and for any $\phi\in[0,2\pi]$ $$c^2\leq \rho(\phi) \leq 1 - c^2,\qquad c=1-1/\sqrt{3}.$$

answered by Andrey Rekalo

– It's worth noting, for nonexperts, that "minimality" here means that the set $K$ doesn't contain a subset with the desired property. This is implied by, but does not imply, area minimization. Indeed, "minimal" solutions to the Lebesgue universal covering problem are known, while area-minimizing ones are not.
– The link to the search will only work for people with MathSciNet subscription, as previously discussed on meta.
But it seems a reasonable guess that it is supposed to be Kovalev, M. D., The smallest Lebesgue covering exists (Russian), Mat. Zametki 40 (1986), no. 3, 401–406, 430; MR869931. (Martin Sleziak)

This is the Lebesgue minimal problem. It is still open, though there are some bounds on the area of such a set; for example, there is a lower bound for the area, $S\ge \frac{\pi}{8}+\frac{\sqrt{3}}{4}$.

It is not hard to show that such a set must have diameter greater than or equal to $\frac{\sqrt{3}}{3}+\frac{1}{2}=1.077350...$ Our set must have subsets congruent to the equilateral triangle with side 1 (ABC) and to the circle with radius 0.5 (with center O). If I is the incenter of triangle ABC, and $O \in BIC$, then consider the point D which lies on the radius perpendicular to BC. Then $AD \ge AI+OD=1.077...$.

answered by Nurdin Takenov

– You say $O \in BIC$ but your diagram seems to have $O \notin BIC$. Still, it obeys $AD \ge AI + OD$.

As other answers have pointed out there are different versions and generalizations of Lebesgue's Universal Covering Problem. His original question from 1914 in a letter to Pál has been quoted as "What is the smallest area of a convex set in the plane that contains a congruent copy of every planar set of unit diameter?" (see Research Problems in Discrete Geometry, Brass, Moser, Pach). Variations of the question ask for the minimum diameter or perimeter. Sometimes the convexity condition is relaxed, but an advantage of the convex case is that the existence of a universal cover with minimum area is then ensured by the Blaschke selection theorem. Congruence allows reflections as well as translations and rotations, but in some statements of the problem reflections are not allowed. The universal covers found by Pál and improved by Sprague do not require reflections. The best known convex universal cover by Hansen from 1992 does require reflections.
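Stepping back for a moment, the two elementary lower bounds in Nurdin Takenov's answer above are easy to evaluate numerically. This Python snippet is illustrative only, not part of the thread; it uses the fact that $AI$, the distance from a vertex of a unit equilateral triangle to its incenter, equals the circumradius $1/\sqrt{3}$:

```python
import math

# Area lower bound quoted above: S >= pi/8 + sqrt(3)/4.
area_bound = math.pi / 8 + math.sqrt(3) / 4
print(f"area lower bound:     {area_bound:.6f}")      # 0.825712

# Diameter lower bound: AD >= AI + OD, where AI = 1/sqrt(3) is the
# circumradius of a unit equilateral triangle and OD = 0.5 is the
# radius of the circle of diameter 1.
diameter_bound = 1 / math.sqrt(3) + 0.5
print(f"diameter lower bound: {diameter_bound:.6f}")  # 1.077350
```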
Hansen's area is 0.844137708435197570894066994 and Duff gave a smaller non-convex cover of 0.84413570 (see other answers for references). However, it is possible to improve on these upper bounds as follows:

Start with a regular hexagon circumscribed round a circle of diameter one. As Pál showed, this is a universal cover. Pál considered the eight-sided shape formed by removing two corners from this hexagon using cuts made with two lines tangent to the circle in such a way that the new edges are sides of a regular dodecagon inscribed in the hexagon. Now consider a more general case where the two cuts are still tangent to the circle and still at an angle of 60 degrees to each other, but at a small angle $\sigma$ to the edges of a dodecagon. The remaining shape is still a universal cover. To see this, consider the six triangles at the corners of the hexagon, each outside a line tangent to the circle and at the same slant angle away from the edge of a regular dodecagon. In the diagram these are labelled A, B, C, D, E and F. Any shape of diameter one can be fitted inside the hexagon. The least distance between opposite triangles such as A and D is one, so the shape cannot be in the interior of both triangles. This is true for each of the three pairs of opposite triangles, so the shape can only be inside the interior of at most three of these triangles. Each of the possible cases can be checked to see that the shape can then be rotated through some multiple of sixty degrees around the centre so that it is still inside the hexagon but not inside the triangles E and C. Therefore the hexagon with these two triangles removed is a universal cover. This cover can be reduced further by generalising the argument used by Sprague.
First observe that any shape of diameter one can be contained inside a curve of constant width one (such as a circle of unit diameter, or a Reuleaux polygon), so to prove that a set of points is a universal cover it suffices to show that it can cover a set congruent to every curve of constant width one. When such a curve is fitted inside the hexagon it will touch each of the six sides of the hexagon at a unique point. On the side running from D to C it must be to the left of the point P on the corner of the removed triangle C. This means that all points near A outside an arc of radius one centred on P can be removed. Similarly the curve must touch the edge of the hexagon from E to D somewhere right of N, the corner of the triangle E that has been removed. So all points outside an arc of radius one centred on N can also be removed. The remaining cover then has a vertex at X where these two arcs meet. This reduces the cover, but not enough to make it smaller for any non-zero value of $\sigma$ than Sprague's universal cover, which is the case $\sigma=0$.

One more piece can be removed if we use the freedom to reflect shapes. The axis of reflection to use is the line from the midpoint M of the side of the hexagon from E to D, to the midpoint of the opposite side. A shape fitted into the hexagon with the corners E and C removed can be reflected about this axis provided it also does not enter the triangles F' and D', which are the reflections of C and E about the axis. When this is the case we will choose to reflect the shape if it touches the side from E to D at a point nearer to E than D. Remember that it touches the opposite side at the opposite point, which is therefore also reflected to be nearer the corner of the hexagon at B than the one at A. If we draw an arc centred on M of radius one, it cuts off a larger region near A, and no shape that can be reflected can be beyond this line. The point where this meets the arc centred on P is marked W.
This means that the only shapes that can have points outside that arc are ones that cannot be reflected. This means they must enter some point inside the regions F' or D'. If they have a point in D' then they cannot have a point in A', which is the triangle whose points are at a distance of more than one from all points in D'. Draw one more arc centred on Q at the corner of the triangle C', which is the reflection of the region F. All points in C' are at a distance of one or more than one from points in F', so the arc will touch the region F' but not enter it. This arc will meet the arc centred on M at a point Z and the arc centred on N at a point Y.

Now consider the fate of points inside the region XYZW bounded by the four arcs. It is a general property of curves of constant width one that if two points are inside the curve then all points on an arc of radius one through the two points are also inside the curve. Suppose then that a shape fitted inside the hexagon had a point in XYZW and also in F'. We could then join those two points with an arc, but between the two points it would be outside the arc centred on Q and would therefore go outside the hexagon. This is in contradiction with the premise, so we conclude that no shape fitted in the hexagon can have a point in both XYZW and F'. It can also be verified that for angles $\sigma$ less than 9 degrees the region XYZW is inside the triangle A'. Therefore shapes with a point inside XYZW do not have points in F' or D' and can be reflected. However, we have already determined that such shapes will not have points in this region. This proves that the region XYZW can be removed from the universal cover. It turns out that this is now sufficient to construct a universal cover smaller than the ones of Hansen and Duff.
Even if we restrict ourselves to the convex case and remove only the part of this region that leaves a convex shape, the area of the universal cover for an angle $\sigma = 0.4$ degrees can be computed to be 0.8441177. There are other small pieces that can be removed from this cover to reduce it further.

This of course does not solve Lebesgue's Universal Covering Problem, which is hard because of the complexity of the problem (not because the numbers involved are irrational, as asked in the question). However, I can break the problem down with three conjectures.

1st conjecture: The minimal convex cover for any subset of curves of width one will always fit inside a hexagon circumscribed around a circle such that opposite sides are parallel. It can be shown that such a shape is a universal cover, but only computational evidence supports the truth of this conjecture.

2nd conjecture: For any such hexagon there is a minimal convex cover. The conjecture is that the minimum area for such a cover is found for the case of the regular hexagon. Again computation supports this conjecture.

3rd conjecture: For a regular hexagon, the minimum cover within the hexagon for any subset of curves of width one fits inside the shape formed by removing two corners by two lines tangent to the inscribed circle and at an angle of 60 degrees to each other (as above).

If all of these three conjectures are true then the problem of finding the minimal convex cover reduces to finding the minimal cover inside a shape of this form. The conjectures may be hard to prove, but if they are true then the final problem may be tractable using methods similar to the above proof.

answered by Philip Gibbs

Philip Gibbs wrote: the area of the universal cover for an angle $\sigma=0.4$ degrees can be computed to be $0.8441177$. I believe Philip later noticed an error in this area computation.
He also later noticed that there's a constraint that $\sigma$ needs to obey, which is not obeyed by this choice: the best choice of $\sigma$ is about
$$ \sigma = 1.294389444703601012^\circ $$
and the area of the universal covering this gives is about
$$ 0.844115297128419059\dots .$$
We wrote a paper on this together with Karine Bagdasaryan, and Greg Egan did some high-precision calculations that give the above numbers. Our paper explains the details:

John C. Baez, Karine Bagdasaryan and Philip Gibbs, The Lebesgue universal covering problem.

Abstract: In 1914 Lebesgue defined a "universal covering" to be a convex subset of the plane that contains an isometric copy of any subset of diameter 1. His challenge of finding a universal covering with the least possible area has been addressed by various mathematicians: Pal, Sprague and Hansen have each created a smaller universal covering by removing regions from those known before. However, Hansen's last reduction was microscopic: he claimed to remove an area of $6 \cdot 10^{-18}$, but we show that he actually removed an area of just $8 \cdot 10^{-21}$. In the following, with the help of Greg Egan, we find a new, smaller universal covering with area less than $0.8441153$. This reduces the area of the previous best universal covering by a whopping $2.2 \cdot 10^{-5}$.
Resonances: Second Language Development and Language Planning and Policy from a Complexity Theory Perspective

DOI: 10.1007/978-3-319-75963-0_12
In book: Language Policy and Language Acquisition Planning (pp. 203-217)
Diane Larsen-Freeman

This chapter begins by introducing Complexity Theory and five of its theoretical tenets that have implications for both second language development (SLD) and language planning and policy (LPP). The tenets have to do with qualities of complex systems: emergence, interconnected levels and timescales, nonlinearity, dynamism, and context dependence. These tenets are then applied to SLD. I go on to show that these same qualities of complex systems hold resonances for LPP. However, descriptive resonances are not sufficient for building a bridge between SLD and LPP. Thus, I conclude that a bridge must be constructed of a deeper awareness, namely that both second language learners/educators and planners/policy makers operate in a complex world, where interventions need to be situated, contingent, and adaptable, and where agents of change need to be prepared for unexpected outcomes.

... In this respect, assessment cannot cover the dynamic fluctuations in CAF or other constructs, and this causes an inaccurate sense of a lack of learning or development. This necessitates self-referential assessment practices which take into account the learner and the contextual dynamics that would influence learning and performance (Larsen-Freeman, 2018). ...

Complexity, Accuracy and Fluency in Second Language Writing
Kutay Uzun

Even though complexity, accuracy, and fluency measures have chiefly been a concern in L2 oral production, they have their particular ways of manifesting in L2 written production, too.
As a result, written learner texts are being increasingly studied with respect to those measures thanks to the improvements in applications for the computational analysis of texts. Centered around two major hypotheses regarding their trade-offs, namely the limited attentional capacity model and the cognition hypothesis, this entry presents the concepts of complexity, accuracy, and fluency within the context of L2 writing along with the pedagogical implications for their development and assessment.

... Chaos/Complexity Theory (C/CT), which forms the conceptual framework of this study, has its origins in research explaining complex systems in the scientific fields (Harshbarger, 2007). However, it has recently attracted the attention of social scientists and educators in the SLA field, particularly in analysing the L2 learner's learning pattern (Larsen-Freeman, 2018). This theory is based on the connectivities/interactions amongst the factors which influence language learning and determine its nature. ...

Maltese as a Second Language: Learning Challenges and Suggested Teaching Strategies
Jacqueline Żammit

Adult learners experience challenges when learning a second language (L2), and educators must think of potential teaching strategies to overcome these challenges. This study explores the learning challenges that adult participants experienced while learning Maltese as a second language (ML2), including some of the teaching strategies which they indicated were effective. This study applied a pragmatic epistemology and a longitudinal, qualitative research design to clarify the complex phenomenon of second language acquisition (SLA) and comprehensively address the research question. Thirty-five adult participants in an ML2 class sat for two timed grammaticality judgement tests (TGJTs), verb conjugation (VC) tasks and picture interpretation tasks six times over a 15-month period, and kept reflective journals.
For post-hoc analysis, the participants took part in an interview and a stimulated recall session. Despite participants' learning difficulties, which were collected through reflective journals and interviews, they indicated that the acquisition of the Maltese verbal tense and aspect did take place over time, although this was a particularly challenging area of ML2 acquisition. The participants recommended teaching strategies that could facilitate ML2 learning.

... The uptake of CDST in applied linguistics research has continued to accelerate, pushing further and faster even than in related fields such as education and theoretical linguistics (Koopmans, 2020; Kretzschmar, 2015). As recent work (e.g., Larsen-Freeman, 2017) synthesizing current strands of applied linguistics that have been informed by CDST shows, CDST has made important contributions to language development/acquisition (Lowie et al., 2010; Verspoor et al., 2008), language attrition (Schmid et al., 2013), language ecology (Cowley, 2011; Kramsch & Whiteside, 2008), language evolution (Ke & Holland, 2006; Mufwene et al., 2017), language policy and planning (Bastardas-Boada, 2013; Larsen-Freeman, 2018), language pedagogy (Han, 2020; Levine, 2020), bilingualism and multilingualism (Herdina & Jessner, 2002), sociolinguistics (Blommaert, 2014), educational linguistics (Hult, 2010), and communication studies (Massip-Bonet et al., 2019), among other areas of applied linguistics. ...

Complex dynamic systems theory in language learning: A scoping review of 25 years of research
Studies in Second Language Acquisition
Philip Hiver, Ali H. Al-Hoorie, Reid Evans

A quarter of a century has passed since complex dynamic systems theory was proposed as an alternative paradigm to rethink and reexamine some of the main questions and phenomena in applied linguistics and language learning. In this article, we report a scoping review of the heterogeneous body of research adopting this framework.
We analyzed 158 reports satisfying our inclusion criteria (89 journal articles and 69 dissertations) for methodological characteristics and substantive contributions. We first highlight methodological trends in the report pool using a framework for dynamic method integration at the levels of study aim, unit of analysis, and choice of method. We then survey the main substantive contributions this body of research has made to the field. Finally, examination of study quality in these reports revealed a number of potential areas of improvement. We synthesize these insights in what we call the "nine tenets" of complex dynamic systems theory research, which we hope will help enhance the methodological rigor and the substantive contribution of future research.

... A frequent confusion has been observed between the terms "language policy" and "language planning". According to Haugen (1959, in Larsen-Freeman, 2018), language planning includes the legal background and the set of ideologies that affects decision making in policies relevant to languages. In other words, language planning describes the specific way in which governmental decisions about languages are made. ...

Human Rights and Minority Languages: Immigrants' Perspectives in Greece
Argyro-Maria Skourmalla, Marina Sounoglou

Human rights and their fortification through conventions and treaties are thought to be one of the greatest achievements of the previous century. A very important category of human rights is Linguistic Human Rights (LHR). Linguistic Human Rights are connected to basic human rights and are of great importance in policy and planning. There has been a great deal of research on language policies and educational systems around the world. However, the opinions of minority populations, for example refugees, are rarely represented in this research. The present research aims at exploring the existing language policies in Greece in reference to minority languages.
For the needs of this research, six adult refugees participated in short semi-structured interviews. Even though participants seemed to be unaware of the term "Linguistic Human Rights", most of them referred to the difficulty they have in exercising major human rights due to the monolingual policies that are followed in Greece. Taking into consideration the importance of Linguistic Human Rights and people's need to use their mother language(s) in Greece, the last part of this research includes suggestions and ideas towards multilingual practices that come from different countries around the world.

A Qualitative Research on the Effect of Chaos and Butterfly Effect on Education
Okan Sarigöz

Chaos is a scientific approach that refers to the fact that systems or behaviors which are thought to be irregular, complex, and impossible to predict actually occur in an orderly manner. The aim of this research is to determine what chaos and the butterfly effect mean in terms of education, the importance of chaos and the butterfly effect in education, and their effects on education. The research is a qualitative study aimed at determining the opinions of teachers about chaos and the butterfly effect. The case study method was used in the research, which was carried out with 23 teachers selected on a voluntary basis among 44 teachers who are doing master's degrees in educational sciences. Research data were collected with a semi-structured interview form developed by the researcher. All the data obtained were analyzed by coding using the content analysis method. In the research, it was concluded that the chaos and butterfly effect positively affect students' development of different ideas, improve their ability to analyze, activate metacognitive functions, and give students the ability to solve problems more quickly by evaluating them from different points of view.
The Significance of Chaos/Complexity Theory in Maltese as a Second Language Acquisition

Despite extensive research in second language acquisition (SLA), we are still a long way from understanding what exactly happens in the adult's mind while learning a second language (L2). This study explores whether a learning pattern could be established over time in 35 adults learning Maltese as a second language (ML2), especially with respect to Maltese verbs. This research is driven by chaos/complexity theory. It focuses on the non-linear learning curve, the origins of the butterfly effect and fractal patterns of learning. It describes how learning is unpredictable, chaotic, dynamic and complex. A longitudinal research design and a mixed-method approach focussed on methodological triangulation were used in this research. Structured Timed Grammaticality Judgment Tests, verb conjugation tasks, reflective journals and interviews were used to investigate the learning curve over a period of 15 months. According to the results, all participants indicated a non-linear learning pattern and confirmed the characteristics of Chaos/Complexity Theory.

The development of writing skills of learners of English as a foreign language (EFL)
Orlando Chaves

The complexity and variability of the development of writing in a second language have motivated extensive theoretical and empirical research of relevance for language learning and teaching. Whereas there is abundant evidence about what develops in writing (e.g., linguistic aspects in texts, writing strategies, processes, and motivation) and why it develops (e.g., learner maturation, instruction, feedback), comparatively less is known about how writing development occurs. Understanding how learners of English as a second/foreign language (ESL/EFL) develop their writing skills has grown in interest given the dominance of English as the current leading language for academic and non-academic communication.
The present study investigates EFL learners' writing development in a foreign languages pre-service teacher education programme at a Colombian university. Drawing on the extensive body of ESL/EFL writing literature that has examined the complexity of writing development from distinct yet rather isolated angles and theoretical traditions, this study adopted a multi-lensed approach to investigate writing development as a writer-text-context compound. Methodologically, the study responds to calls for counterbalancing the partiality for quantitative cross-sectional studies of academic texts of groups of writers in EFL writing development research. Thus, it adopts a mixed-methods approach that investigated the participants' writing development over a 16-week academic semester. The quantitative phase examined writing development differences in groups from three curricular stages of the programme (initial = 31; middle = 29; final = 40; N=100) through a non-academic writing task and a questionnaire. The qualitative phase examined the developmental trajectories and the factors affecting the writing development of six individual learners (three higher scorers and three lower scorers selected from the three curricular stages) using interviews and six texts produced by each participant over the semester. Three independent raters evaluated the texts in the two phases of the study using a rubric developed for this study to reflect the comprehensive view of writing by including text-, writer-, and reader-related writing dimensions. The interviews and questionnaires provided data about writing development that cannot be seen in the texts. Email letters were chosen as a representative non-academic genre used by ESL/EFL learners in the context examined and globally. The findings showed significant differences across the groups. 
They revealed various developmental trajectories across the various writing dimensions and individual writers, associated with long- and short-term factors influencing EFL writing development. These findings cast light on what develops, why it develops, and how development occurs at both group and individual levels in an EFL situation. It was found that writing progress is limited but significant, nonlinear, and the result of an interplay between contextual and individual characteristics (e.g., L1, family, instruction, personality, motivation, proficiency, and age). It was also found that writing development is linked to interactions between writing facets (e.g., content, task, genre, language, authorial voice, audience awareness, readability, writing situation) in a way that resembles a self-organising system (Larsen-Freeman & Cameron, 2008a). While the present study was exploratory, the comprehensive view adopted provides a better understanding of writing development to inform EFL writing research, teaching, and assessment. The complexity and variability of writing development remind L2 writing researchers, teachers, and evaluators that, because writing progress is not linear, with times in which there is no evidence of progress or, at times, apparent regression, caution is needed in the evaluation of EFL learners' writing proficiency.

Research Methods for Complexity Theory in Applied Linguistics

This book provides practical guidance on research methods and designs that can be applied to Complex Dynamic Systems Theory (CDST) research. It discusses the contribution of CDST to the field of applied linguistics, examines what this perspective entails for research, and introduces practical methods and templates, both qualitative and quantitative, for how applied linguistics researchers can design and conduct research using the CDST framework.
Introduced in the book are methods ranging from those in widespread use in social complexity to more familiar methods in use throughout applied linguistics. All are inherently suited to studying both dynamic change in context and interconnectedness. This accessible introduction to CDST research will equip readers with the knowledge to ensure compatibility between empirical research designs and the theoretical tenets of complexity. It will be of value to researchers working in the areas of applied linguistics, language pedagogy and educational linguistics, and to scholars and professionals with an interest in second/foreign language acquisition and complexity theory.

Toward a Unified Theory of Language Development: The Transdisciplinary Nexus of Cognitive and Sociocultural Perspectives on Social Activity
Francis M. Hult

Chapter 1. Complexity Theory: In celebration of Diane Larsen-Freeman

From mobility to complexity in sociolinguistic theory and method
Jan Blommaert

Sociolinguistics is a dynamic field of research that explains the role and function of language in social life. This book offers the most substantial account available of the core contemporary ideas and arguments in sociolinguistics, with an emphasis on innovation and change. Bringing together original writing by more than twenty of the field's most influential international thinkers and researchers, this is an indispensable guide to the newest and most searching ideas about language in society. For researchers and advanced students it gives access to the field's most pressing issues and debates, as well as providing a platform for new initiatives in sociolinguistic research.

Language planning as a complex practice
Current Issues in Language Planning
Gabrielle Hogan-Brun

A New Paradigm for Developmental Science: Relationism and Relational-Developmental-Systems
Willis F. Overton

The Cartesian-Split-Mechanistic scientific paradigm that until recently functioned as the standard conceptual framework for sub-fields of developmental science (including inheritance, evolution, and organismic—pre-natal, cognitive, emotional, motivational, socio-cultural—development) has been progressively failing as a scientific research program. An alternative scientific paradigm composed of nested meta-theories, with Relationism at the broadest level and Relational-Developmental-Systems as a mid-range meta-theory, is offered as a more progressive conceptual framework for developmental science. Termed broadly the Relational-Developmental-Systems paradigm, this framework accounts for the findings that are anomalies for the old paradigm, accounts for the emergence of new findings, and points the way to future scientific productivity, and a more optimistic approach to evaluating evidence-based applications aimed at promoting positive human development and social justice.

Language planning and complexity: A conversation
Stephen John Hogan

Language planning and complexity is the subject of this volume's collection of papers. But is such a linkage desirable or even possible? The Editor of this thematic issue recently held a conversation with the Director of the Bristol Centre for Complexity Sciences to discuss this and other questions. A record of their exchange is given below.

Current issues in LPP research and their impact on society
Jeroen Darquennes

After a very broad description of what language policy and planning is about, this paper presents an overview of some of the current preoccupations of researchers focusing on language policy and planning as one of the blooming fields of applied linguistics.
The current issues in language policy and planning research that are dealt with include 'the history of the field', 'language practices in different domains of society', 'ideas and beliefs about language', and 'the practical side of language policy and planning'. The brief sketch of current issues in language policy and planning research is meant to serve as the background for a preliminary discussion of the impact of language policy and planning research on society. That discussion takes the different 'roles' of academics working at university departments and doing research on language policy and planning as a starting point.

Language policy and planning as an interdisciplinary field: Towards a complexity approach [Política y planificación lingüísticas como campo interdisciplinario: hacia una aproximación compléxica]
Albert Bastardas-Boada

One of the dangers that we should be aware of when we study issues of language policy and planning is the fragmentary perspective by which they can be approached. Reality, by contrast, is interrelated and overlapping. This is why a complexity perspective stresses the importance of studying the contexts of phenomena, that is to say, their external relations. The direction to be followed here leads towards a better understanding of reality as a set of open systems that are in continuous exchange with the surrounding ecosystem, bearing in mind always that any apparent stability is the result of a dynamic equilibrium. Making headway towards an interdisciplinary approach is therefore necessary and imperative. Headings such as status/normative/institution vis-à-vis others such as solidarity/normal/individual seem to imply a basic distinction in the definition of sociocultural reality. To discover and understand the dynamics of the interaction between these two major categories is, in fact, one of the most important subjects waiting to be addressed by language planning and policy strategies and more broadly by sociolinguistics.
An interrelated set of guiding questions for the field could thus be stated as follows: What group or organisation, in pursuit of what overall objective or intention, wants to achieve what, where, how and when; and what do they actually achieve, and why? With this approach, even if how a group or organisation obtains its desired goal – that is, its actual intervention – is included as one of the main elements in a piece of research, the research will not focus exclusively on this topic, but will frame the intervention and identify how it is interrelated with all the other elements involved globally in this phenomenon, trying to establish a clear theoretical understanding of the entire interwoven set of events and processes.

The Dynamic Systems Approach as Metatheory for Developmental Psychology
Human Development
David C. Witherington

The dynamic systems perspective has been touted as an integrative metatheoretical framework for the study of stability and change in development. However, two dynamic systems camps exist with respect to the role higher-order form, once emergent, plays in the process of development. This paper evaluates these two camps in terms of the overarching world views they embody. Some dynamic systems proponents ground their conceptualization of development in pure contextualist terms by privileging the here-and-now in the explanation of development, whereas other proponents adopt an integration of organismic and contextualist world views by considering both local context and higher-order form in their explanatory accounts. These different ontological premises affect how each camp views the process of self-organization, the principle of circular causality and the very nature of explanation in developmental science.

Taking Emergence Seriously: The Centrality of Circular Causality for Dynamic Systems Approaches to Development

The dynamic systems (DS) approach has emerged as an influential and potentially unifying metatheory for developmental science.
Its central platform – the argument against design – suggests that structure spontaneously and without prescription emerges through self-organization. In one of the most prominent accounts of DS, Thelen and her colleagues [Spencer, Dineva, & Schöner, 2009a; Thelen & Smith, 1994, 2006] have extended the argument against design to a complete ontological rejection of structural explanation. I argue that this antistructuralist stance conceptually undermines the very principle of emergence through self-organization upon which the approach is built, jeopardizing its process focus. Taking emergence seriously entails a strong commitment to circular causality and the reciprocal nature of structure-function relations through the adoption of a pluralistic model of causality, one that recognizes both local-to-global processes of construction and global-to-local processes of constraint.

Ecological Perspectives on Second Language Acquisition and Socialization
Claire Kramsch, Sune Vork Steffensen

Language Ideologies and Policies: Multilingualism and Education
Marcia Farr, Juyoung Song

Educating multilingual students is a great challenge for both teachers and parents in a society in which English is the medium of schooling and of wider communication. Top-down language policies often overpower teachers' individual intentions and practices. In this paper, we point out that language ideologies in 'commonsense' beliefs and political orientations, rather than pedagogical considerations or research evidence, motivate language policies. We particularly discuss the language ideologies of standardization and monolingualism that underlie bilingual education and English-only policies in the U.S. and how these policies conflict with the reality of multilingual students' linguistic and identity practices.
We also examine research on bilingual, bidialectal, and plurilingual practices in which multiple languages or varieties within a language co-exist and are used creatively by speakers for significant social and pragmatic functions, and we highlight the critical gap between top-down language policies and such ground-level plurilingualism. Teachers' knowledge of such plurilingual practices by their students and their deeper understanding of and critical perspective toward language policies can empower them to negotiate top-down language policies in their classrooms in order to facilitate language and literacy development among their multilingual students.

Chaos/Complexity Science and Second Language Acquisition

There are many striking similarities between the new science of chaos/complexity and second language acquisition (SLA). Chaos/complexity scientists study complex nonlinear systems. They are interested in how disorder gives way to order, and in how complexity arises in nature. 'To some physicists chaos is a science of process rather than state, of becoming rather than being' (Gleick 1987: 5). It will be argued that the study of dynamic, complex nonlinear systems is meaningful in SLA as well. Although the new science of chaos/complexity has been hailed as a major breakthrough in the physical sciences, some believe its impact on the more human disciplines will be as immense (Waldrop 1992). This belief will be affirmed by demonstrating how the study of complex nonlinear systems casts several enduring SLA conundrums in a new light.

Self-Organization, Emergence and the Architecture of Complexity
Francis Heylighen

It is argued that the problems of emergence and the architecture of complexity can be solved by analysing the self-organizing evolution of complex systems. A generalized, distributed variation-selection model is proposed, in which internal and external aspects of selection and variation are contrasted.
"Relational closure" is introduced as an internal selection criterion. A possible application of the theory, in the form of a pattern-directed computer system for supporting complex problem-solving, is sketched.

Diversity and Complexity
Scott E. Page

Linguistic Attractors: The cognitive dynamics of language acquisition and change
David L. Cooper

Complex Dynamic Systems Theory

Dynamics in Action: Intentional Behavior as a Complex System
Alicia Juarrero

Language Planning in Education
James W. Tollefson

Language planning in education refers to a broad range of decisions affecting the structure, function, and acquisition of language in schools. This chapter reviews the history of language planning in education, major contributions of past research, current research, problems and difficulties facing the field, and future directions. Early developments are categorized into two major periods, distinguished by a focus on the role of language planning in "modernization" and "development" on the one hand and critical analysis of power and ideology on the other. Major contributions emphasize work by pioneers in language planning, such as Joshua Fishman and Charles Ferguson, who laid the foundation for subsequent work on language maintenance and shift, bilingualism and diglossia, and a host of related topics.
Subsequent developments shifted attention to language and ideology, tensions between "standard" and "nonstandard" varieties, globalization and the spread of English, language maintenance/revitalization, and bilingual approaches to education. Work in progress includes new developments in research methodologies, new conceptual frameworks such as interpretive policy analysis and the ecology of language, and changing understandings of language policy and planning. These new understandings have led to increasing use of qualitative research methods such as ethnography. Important challenges facing the field include efforts to integrate language planning with other social sciences and to build more direct links between research and the practice of language planning in education. Finally, this chapter examines future directions, including the role of language planning in economic inequality, language planning in non-state institutions such as the World Bank, and development of new research methodologies.

Language Planning and Social Change
American Political Science Review
Jonathan Pool, Robert L. Cooper, Dennis Baron, Rosalie Pedalino Porter

Dynamic Systems and Language Development
Paul van Geert, Marjolijn H. Verspoor

Language development viewed from a dynamic approach to complex systems is in line with an emergentist, usage-based view of language and language development. The main difference, however, is in terms of the object of inquiry and the methods and techniques used. The approach focuses on the process itself and on quantification of change rather than on the underlying environmental, biological or psychological reasons for change. After defining development and dynamic systems, this chapter shows how different dynamic principles correspond to general human and language developmental processes.
The chapter argues that the dynamic principles hold for language and language development as well and translate into commonly held views in linguistics: interconnectedness with sociocultural embeddedness; iteration with frequency of exposure; variability with overuse, overgeneralization and U-shaped behavior; variation with individual trajectories and differences; and attractor states with stages of development.

Chapter 9. Another step to be taken – Rethinking the end point of the interlanguage continuum

The Systems View of Life: A Unifying Vision
Fritjof Capra, Pier Luigi Luisi

Over the past thirty years, a new systemic conception of life has emerged at the forefront of science. New emphasis has been given to complexity, networks, and patterns of organisation, leading to a novel kind of 'systemic' thinking. This volume integrates the ideas, models, and theories underlying the systems view of life into a single coherent framework. Taking a broad sweep through history and across scientific disciplines, the authors examine the appearance of key concepts such as autopoiesis, dissipative structures, social networks, and a systemic understanding of evolution. The implications of the systems view of life for health care, management, and our global ecological and economic crises are also discussed. Written primarily for undergraduates, it is also essential reading for graduate students and researchers interested in understanding the new systemic conception of life and its implications for a broad range of professions - from economics and politics to medicine, psychology and law.

Crosslinguistic Influence in Language and Cognition
Scott Jarvis, Aneta Pavlenko

A cogent, freshly written synthesis of new and classic work on crosslinguistic influence, or language transfer, this book is an authoritative account of transfer in second-language learning and its consequences for language and thought.
It covers transfer in both production and comprehension, and discusses the distinction between semantic and conceptual transfer, lateral transfer, and reverse transfer. The book is ideal as a text for upper-level undergraduate and graduate courses in bilingualism, second language acquisition, psycholinguistics, and cognitive psychology, and will also be of interest to researchers in these areas.

A Complexity Approach Toward Mind-Brain-Education (MBE): Challenges and Opportunities in Educational Intervention and Research
Henderien Steenbeek

In the context of an educational or clinical intervention, we often ask questions such as "How does this intervention influence the task behavior of autistic children?" or "How does working memory influence inhibition of immediate responses?" What do we mean by the word influence here? In this article, we introduce the framework of complex dynamic systems (CDS) to disentangle the meaning of words such as influence, and to discuss the issue of education and intervention as something that takes place in the form of complex, real-time, situated processes. What are the applied implications of such a CDS framework? Can we use it to improve education? Five general principles—process laws—are introduced, which can be used to guide the way we formulate research questions and methods, and the way we use the results of such research. In addition, we briefly discuss a project in progress, in which we ourselves attempt to apply the process laws that govern educational activities. Finally, we report on a discussion about the usability of the process laws, both in educational research and in the classroom, as was held during our workshop at the Mind, Brain, and Education Conference, November 2014.
Dynamic Development in Speaking Versus Writing in Identical Twins
Language Learning
Huiping Chan, Marjolijn Verspoor, Louisa Vahtrick

Taking a dynamic usage-based perspective, this longitudinal case study compares the development of sentence complexity in speaking versus writing in two beginner Taiwanese learners of English (identical twins), in an extensive corpus consisting of 100 oral and 100 written texts of approximately 200 words produced by each twin over 8 months. Three syntactic complexity measures were calculated: mean length of T-unit, dependent clauses per T-unit, and coordinate phrases per T-unit. The working hypothesis was that (a) the learners' oral texts would become more complex sooner than their written texts and that (b) the two learners would show similar developmental patterns. We found that these two learners initially demonstrated syntactic complexity in their oral language rather than in their written language, yet over time they were found to exhibit inverse trends of development. This observation was confirmed with dynamic modeling by means of a hidden Markov model, which allowed us to detect moments of self-organization in the learners' spoken and written output (i.e., moments where the interaction among various measures changes and takes on a new configuration).

Indigenous Literacies in the Americas: Language Planning from the Bottom Up
Akira Y. Yamamoto, Nancy H. Hornberger

On the roles of repetition in language teaching and learning

Repetition is common in language use. Similarly, having students repeat is a common practice in language teaching. After surveying some of the better-known contributions of repetition to language learning, I propose an innovative role for repetition from the perspective of complexity theory. I argue that we should not think of repetition as exact replication, but rather we should think of it as iteration that generates variation. Thus, what results from iteration is a mutable state.
Iteration is one way that we create options in how to make meaning, position ourselves in the world as we want, understand the differences which we encounter in others, and adapt to a changing context.

Saying what we mean: Making a case for 'language acquisition' to become 'language development'
As applied linguists know very well, how we use language both constructs and reflects our understanding. It is therefore important that we use terms that do justice to our concerns. In this presentation, I suggest that a more apt designation than multilingual or second language acquisition (SLA) is multilingual or second language development (SLD). I give a number of reasons for why I think SLD is more appropriate. Some of the reasons that I point to are well known. Others are more current, resting on a view of language from a complex systems perspective. Such a perspective rejects the commodification of language implied by the term 'acquisition', instead imbuing language with a more dynamic quality, implied by the term 'development', because it sees language as an ever-developing resource. It also acknowledges the mutable and interdependent norms of bilinguals and multilinguals. In addition, this perspective respects the fact that from a target-language vantage point, regress in learner performance is as characteristic of development as progress. Finally, and most appropriately for AILA 2011, the term second language development fits well with the theme of the congress – harmony in diversity – because it recognizes that there is no common endpoint at which all learners arrive. For, after all, learners actively transform their linguistic world; they do not merely conform to it.
June K. Phillips

From input to affordance: Social-interactive learning from an ecological perspective
Leo van Lier

SLA for the 21st Century: Disciplinary Progress, Transdisciplinary Relevance, and the Bi/multilingual Turn
Lourdes Ortega
The goals of this article are to appraise second language acquisition's (SLA) disciplinary progress over the last 15 years and to reflect on transdisciplinary relevance as the field has completed 40 years of existence and moves forward into the 21st century. I first identify four trends that demonstrate vibrant disciplinary progress in SLA. I then turn to the notion of transdisciplinarity, or the proclivity to pursue and generate SLA knowledge that can be of use outside the confines of the field and contribute to overall knowledge about the human capacity for language. I propose an understanding of transdisciplinary relevance for SLA that results from the ability: (a) to place one's field in a wider landscape of disciplines that share an overarching common goal and (b) to develop critical awareness of one's disciplinary framings of object of inquiry and goals and others' likely reception of them. Finally, I argue that it is by reframing SLA as the study of late bi/multilingualism that the remarkable progress witnessed in the last 15 years will help the field reach new levels of transdisciplinary relevance as a contributor to the study of the ontogeny of human language and a source of knowledge in support of language education in the 21st century.

Transfer of Learning Transformed
Instruction is motivated by the assumption that students can transfer their learning, or apply what they have learned in school to another setting. A common problem arises when the expected transfer does not take place, which has been referred to as the inert knowledge problem. More than an academic inconvenience, the failure to transfer is a major problem, exacting individual and social costs.
In this article, I trace the evolution of research on the transfer of learning, in general, and on language learning, in particular. Then, a different view of learning transfer is advanced. Rather than learners being seen to "export" what they have learned from one situation to the next, it is proposed that learners transform their learning. The article concludes by offering some suggestions for how to mitigate the inert knowledge problem from this perspective.

The Science of the Individual
Todd Rose, Parisa Rouhani, Kurt W. Fischer
Our goal is to establish a science of the individual, grounded in dynamic systems, and focused on the analysis of individual variability. Our argument is that individuals behave, learn, and develop in distinctive ways, showing patterns of variability that are not captured by models based on statistical averages. As such, any meaningful attempt to develop a science of the individual necessarily begins with an account of the individual variability that is pervasive in all aspects of behavior, and at all levels of analysis. Using examples from fields as diverse as education and medicine, we show how starting with individual variability, not statistical averages, helped researchers discover two sources of ordered variability—pathways and contexts—that have implications for theory, research, and practice in multiple disciplines. We conclude by discussing three broad challenges—data, models, and the nature of science—that must be addressed to ensure that the science of the individual reaches its full potential.

Unpeeling the Onion: Language Planning and Policy and the ELT Professional
TESOL QUART
Thomas Ricento
The field of language planning and policy (LPP) provides a rich array of research opportunities for applied linguists and social scientists.
However, as a multidisciplinary field that seeks to understand, among other things, why some languages thrive whereas others are marginalized, LPP may appear quite theoretical and far removed from the lives of many English language teaching (ELT) practitioners. This is unfortunate, because ELT professionals—be they teachers, program developers, materials and textbook writers, administrators, consultants, or academics—are involved in one way or another in the processes of LPP. The purpose of this article is to unravel those processes and the role of ELT professionals in them for both theoretical and practical reasons: theoretical, because we believe there are principled ways to account for why particular events affect the status and vibrancy of languages and speech communities, and practical, because we believe there are ways to influence the outcome of social processes. In general, we find that the principle of linguistic self-determinism—the right to choose (within limits) what languages one will use and be educated in—is not only viable but desirable for LPP decision making because it both promotes social equity and fosters diversity. In this article, we examine how ELT professionals are already actively engaged in deciding language policies, how they promote policies reaffirming or opposing hierarchies of power that reflect entrenched historical and institutional beliefs (see Phillipson & Skutnabb-Kangas, this issue), and how they might effect changes in their local contexts.

Translanguaging: Language, Bilingualism and Education
This book addresses how the new linguistic concept of 'Translanguaging' has contributed to our understandings of language, bilingualism and education, with potential to transform not only semiotic systems and speaker subjectivities, but also social structures.
Applied Linguistics: thematic pursuits or disciplinary moorings? A conversation between Michael Halliday and Anne Burns
J Appl Ling
Michael Halliday, Anne Burns

Analysis of Language Policy Discourses Across the Scales of Space and Time
Int J Sociol Lang
The ecology of language has been put forward as a useful orientation to the holistic investigation of multilingual language policies because it draws attention to relationships among speakers, languages, policies, and social contexts at varying dimensions of social organization. As such, it is an orientation that stands to facilitate the integration of micro- and macro-sociolinguistic inquiry in language policy and planning (LPP); however, it is not a method. An ecological orientation requires the application of specific methods in order to achieve a holistic picture of an LPP situation. To this end, the present article explores how recent discourse analytic theories and methods that focus on ways in which discursive processes operate within and across space and time are especially well suited to the ecological objective of understanding relationships between language policies and the social actions of individuals.

The Complexity Turn
THEOR CULT SOC
John Urry

Dynamic Systems Theory and the Complexity of Change
PSYCHOANAL DIALOGUES
Esther Thelen
The central thesis of this paper is that grand theories of development are alive and well and should be paramount to those interested in behavioral intervention. Why? Because how we think about development affects how we approach treatment. Here I discuss the central concepts of a new theory of development—dynamic systems theory—to highlight the way in which a theory can dramatically alter views of what intervention is all about. Rather than focusing on one root of maladaptive behavior such as biological predispositions, environmental causes, or motivational states, dynamic systems theory presents a flexible, time-dependent, and emergent view of behavioral change.
I illustrate this new view with a case study on how infants develop the motivation to reach for objects. This example highlights the complex day-by-day and week-by-week emergence of new skills. Although such complexity presents daunting challenges for intervention, it also offers hope by emphasizing that there are multiple pathways toward change.

Towards a Pedagogical Paradigm Rooted in Multilinguality
Rama Kant Agnihotri
I imagine that choosing any one alternative out of the three educational models provided by Spring (2007/this issue) may not really be adequate for a new social–sociolinguistic theory for a potentially just world order. As an alternative to the persistently degenerating consumerist model of the education security state, it may not be enough to borrow extensively from the classical, progressive and indigenous models but also turn our attention to the dialectics between social theory and action and to language as multilinguality that mediates this interaction such as in the work of Habermas, Foucault, and Bourdieu (for an insightful discussion of the work of these philosophers on these issues, see Sarangi, 2001); we also need to carefully examine such monumental efforts and their critiques in the field of education as the National Educational Policy Investigation in South Africa in 12 volumes (National Education Coordinating Committee, 1992) and the National Curriculum Framework (National Council of Educational Research and Training, 2005) with all its 21 position papers divided into three volumes; namely, Curricular Areas, Systemic Reforms, and National Concerns. We are indeed looking for a society in which there is space for universal "happiness, satisfaction, and leisure"; but I think we are also looking for a society that ensures an autonomy of mind and reflection and of care and respect for others.
As Spring indicates, there are serious problems with such short-cut solutions as "think globally; learn English; remain rooted in your national identity." As you get increasingly sucked into the consumerist economy, long hours of work leave you with no happiness and leisure; there is no time for reflection so that you are not even aware of what is destroying you; ruthless competition leaves no space for caring for others, and the diversity of your life style and communication systems vanishes faster than you realize.

At Home in the Universe
MATH SOC SCI
Stuart A. Kauffman

The complexity turn in educational linguistics
Lang Cult Curric
In the wake of conversations about integrating macro- and micro-levels of linguistic analysis over the last 50 years, and following theoretical and methodological debates in the 1990s about investigating the dynamics of entire social systems, complexity theory is coming of age in educational linguistics. Central to the application of complexity theory in social science is its attention to multiple scales of social organisation and how they are connected through the actions of individuals, with an emphasis on the unfolding of social processes rather than on cause–effect relationships. This provides us with a new perspective that is increasingly being adopted in the development and implementation of multilingual education throughout the world, as seen in the simultaneous management of linguistic resources across different scales – national, regional, community, classroom, and interpersonal.

Ecological Language Education Policy
Principles of Language Ecology; Contributions of Language Ecology to the Study of Educational Language Planning and Policy; Examples of an Ecological Approach to Educational Language Policy; Conclusion

Complex Systems and Educational Change: Towards a new research agenda
Educ Philos Theor
Jay L. Lemke, Nora H. Sabelli
How might we usefully apply concepts and procedures derived from the study of other complex dynamical systems to analyzing systemic change in education? In this article we begin to define possible agendas for research toward developing systematic frameworks and shared terminology for such a project. We illustrate the plausibility of defining such frameworks and raise the question of the relation between such frameworks and the crucial task of aggregating data across 'systemic experiments', such as those conducted under the Urban Systemic Initiative sponsored by the US National Science Foundation. Our discussion includes a review of key issues identified by groups of researchers regarding (1) Defining the System, (2) Structural Analysis, (3) Relationships Among Subsystems and Levels, (4) Drivers for Change, and (5) Modeling Methods.

The Dynamics of Second Language Emergence: Cycles of Language Use, Language Change, and Language Acquisition
MOD LANG J
Nick C. Ellis
This article outlines an emergentist account whereby the limited end-state typical of adult second language learners results from dynamic cycles of language use, language change, language perception, and language learning in the interactions of members of language communities. In summary, the major processes are:
1. Usage leads to change: High frequency use of grammatical functors causes their phonological erosion and homonymy.
2. Change affects perception: Phonologically reduced cues are hard to perceive.
3. Perception affects learning: Low salience cues are difficult to learn, as are homonymous/polysemous constructions because of the low contingency of their form–function association.
4. Learning affects usage: (i) Where language is predominantly learned naturalistically by adults without any form focus, a typical result is a Basic Variety of interlanguage, low in grammatical complexity but communicatively effective.
Because usage leads to change, maximum contact languages learned naturalistically can thus simplify and lose grammatical intricacies. Alternatively, (ii) where there are efforts promoting formal accuracy, the attractor state of the Basic Variety can be escaped by means of dialectic forces, socially recruited, involving the dynamics of learner consciousness, form-focused attention, and explicit learning. Such influences promote language maintenance. Form, user, and use are inextricable.

Complex Systems and Applied Linguistics
Int J Appl Ling
Lynne Cameron
The book introduces key concepts in complexity theory to readers concerned with language, its learning, and its use. It demonstrates the applicability and usefulness of these concepts to a range of areas in applied linguistics including first and second language development, language teaching, and discourse analysis. It concludes with a chapter that discusses suitable approaches to research investigations. This book will be invaluable for readers who want to understand the recent developments in the field that draw on complexity theory, including dynamic systems theory, ecological approaches, and emergentism.

Transdisciplinary approach to language study. The complexity theory perspective
J Filopović

The linguistic genius of babies. TED talk
P Kuhl

The handbook of bilingual and multilingual education
Terrence G. Wiley

Complex Systems and the Evolution of Artificial Intelligence
Klaus Mainzer
Can machines think? This famous question from Turing has new topicality in the framework of complex systems. The chapter starts with a short history of computer science since Leibniz and his program for mechanizing thinking (mathesis universalis) (Sect. 5.1). The modern theory of computability enables us to distinguish complexity classes of problems, meaning the order of corresponding functions describing the computational time of their algorithms or computational programs.
Modern computer science is interested not only in the complexity of universal problem solving but also in the complexity of knowledge-based programs. Famous examples are expert systems simulating the problem solving behavior of human experts in their specialized fields. Further on, we ask if a higher efficiency of problem solving may be expected from quantum computers and quantum complexity theory (Sect. 5.2).

A Complex Systems Paradox of Organizational Learning and Knowledge Management
July 2013 · International Journal of Knowledge-Based Organizations
Soheil Ghili, Serima Nazarian, Madjid Tavana, Mohammad Taghi Isaai
Many organizations are striving to survive and remain competitive in the current uncertain and rapidly changing economic environment. Businesses must innovate to face this volatility and maintain their competitiveness. Organizational learning is a complex process with many interrelated elements linking knowledge management with organizational innovation. In this paper we use several theories (i.e., organizational learning, knowledge management, organizational innovation, complexity theory, and systems theory) to discover and study the interrelationships among the organizational learning elements. The purpose of this paper is threefold: (1) We identify organizational learning as a mediating variable between knowledge management and organizational innovation; (2) We further present a paradox where decisions that are expected to improve organizational learning, surprisingly do not work; and (3) We show this paradox is not the result of overlooking organizational learning elements, but rather, caused by neglecting to consider the complex interrelationships and interdependencies among them.

The Neumann problem for the $k$-Cauchy-Fueter complexes over $k$-pseudoconvex domains in $\mathbb{R}$...
April 2017 · Journal of Geometric Analysis
Wei Wang
The $k$-Cauchy-Fueter operators and complexes are quaternionic counterparts of the Cauchy-Riemann operator and the Dolbeault complex in the theory of several complex variables. To develop the function theory of several quaternionic variables, we need to solve the non-homogeneous $k$-Cauchy-Fueter equation over a domain under the compatibility condition, which naturally leads to a Neumann problem. The method of solving the $\overline{\partial}$-Neumann problem in the theory of several complex variables is applied to this Neumann problem. We introduce notions of $k$-plurisubharmonic functions and $k$-pseudoconvex domains, establish the $L^2$ estimate and solve this Neumann problem over $k$-pseudoconvex domains in $\mathbb{R}^4$. Namely, we get a vanishing theorem for the first cohomology groups of the $k$-Cauchy-Fueter complex over such domains.

From West to East and Back Again, An Educational Reading of Hermann Hesse's Later Work
P. Roberts
Of all the great Western novelists of the twentieth century, the German writer Hermann Hesse is arguably one of the most important for educationists. Paying particular attention to Hesse's last novel, The Glass Bead Game, and its immediate predecessor, The Journey to the East, this book suggests that Hesse was a man of the West who turned to the idea of 'the East' in seeking to understand himself and his society. From these later texts a rich, complex theory of educational transformation emerges. From West to East and Back Again examines the role of dialogue and uncertainty in the transformative process, considers utopian and ritualistic elements in Hesse's work, and explores the notion of education serving as a bridge between life and death. Hesse's novels address philosophical themes and questions of enduring significance, and this book will appeal to all who share an interest in human striving and growth.
Method framework for studying military complex system
Y.-M. Han, F.-P. Liu, Y. Guo
For studying the information-age military complex system, traditional reductionism approaches are not applicable, since complexity is not processed as complexity, and complexity theory's Agent-Based Modeling & Simulation method falls short of the holistic perspective from the top of the whole military complex system. Moreover, complexity theory is not yet mature and is still evolving. To deal with these challenges, a method framework for studying the military complex system is proposed that involves descriptors to structure the notions of complexity: "The Four Modalities" as the core, the "Bi-directional Neutralizing Method" as the approach, and interaction diagrams and M&S as the supporting tools.

Perspective: Explicitly correlated electronic structure theory for complex systems
February 2017 · The Journal of Chemical Physics
Andreas Grüneis, So Hirata, Yu-ya Ohnishi, Seiichiro Ten-no
The explicitly correlated approach is one of the most important breakthroughs in ab initio electronic structure theory, providing arguably the most compact, accurate, and efficient ansatz for describing the correlated motion of electrons. Since Hylleraas first used an explicitly correlated wave function for the He atom in 1929, numerous attempts have been made to tackle the significant challenges involved in constructing practical explicitly correlated methods that are applicable to larger systems. These include identifying suitable mathematical forms of a correlated wave function and an efficient evaluation of many-electron integrals. R12 theory, which employs the resolution of the identity approximation, emerged in 1985, followed by the introduction of novel correlation factors and wave function ansätze, leading to the establishment of F12 theory in the 2000s.
Rapid progress in recent years has significantly extended the application range of explicitly correlated theory, offering the potential of an accurate wave-function treatment of complex systems such as photosystems and semiconductors. This perspective surveys explicitly correlated electronic structure theory, with an emphasis on recent stochastic and deterministic approaches that hold significant promise for applications to large and complex systems including solids.

Entire solutions of two types of systems of complex differential-difference equations
September 2016 · Acta Mathematica Scientia
L.Y. Gao
By Nevanlinna theory of the value distribution, and the theory of complex differential and complex difference, we will mainly investigate entire solutions with finite order of two types of systems of differential-difference equations, and obtain two results. This extends some results concerning complex differential (difference) equations to the systems of differential-difference equations.

Damping identification using Weighted Fitting of Frequency-response-function (WFF) method
Yf Duan, Lei Zhu, Yiqiang Xiang
Modal identification is usually a necessity for structural damage identification and health evaluation. However, the modal damping is difficult to identify accurately. Even though some new damping identification methods have arisen in recent years, it is difficult to apply these methods in engineering practice due to their complexity in theory and computation. This paper presents a frequency-domain modal identification method: the Weighted Fitting of Frequency-response-function (WFF) method. The principle and methodology of this method is first addressed. The target functions are defined as the differences between the measured and computed frequency-response functions (FRFs) multiplied by appropriate weightings for different excitation frequencies.
The modal parameters, modal frequency and damping, are obtained by the FRF fitting that leads to the minimum value of the target functions. The modal identifications for single- and multi-degree-of-freedom structure responses excited by band-limited white-noises with measurement noises are conducted to verify the effectiveness of this method. Then this method is applied to modal identification of a box-girder railway bridge with a passing train. Compared with the half-power bandwidth method, this method is shown to be simple and reliable for the accurate identification of the modal damping.

Complex rotation method applied to Z-dependent perturbation theory: The 2s2p autoionizing states...
January 1992 · Physical Review A, Atomic, Molecular, and Optical Physics
Lonnie W. Manning, Frank C. Sanders
Z-dependent perturbation theory and the complex rotation method are utilized to calculate the resonance position and width for the 2s2p 1,3P autoionizing states of two-electron atoms to high order. Use of the complex rotation method in Z-dependent perturbation theory simultaneously yields values of both the resonance position and width for all members of an isoelectronic sequence. It also simplifies the calculation by making the resonance wave function square integrable while simultaneously rotating the interacting continuum away from the real energy axis, thereby eliminating any complications introduced by the presence of a degeneracy in zero order between the doubly excited states and the adjacent continuum. Knowledge of the exact second-order widths is used to increase the efficiency of the complex stabilization method. As the accuracy of the real and imaginary parts of the complex energy are generally related, the exact values of the second-order widths are also helpful in ascertaining the accuracy of the second-order results, and therefore the reliability of the total resonance position and widths.
We believe the present values to be the most accurate results to date for Z>=3 for the singlet and Z>=2 for the triplet.

Preface to Part XVII
Richard Lesh
Hurford's article is a useful introduction to the topic of complex systems and their potential significance in mathematics education. He focuses on two categories of issues. The first concerns the possibility of treating complex systems as an important topic to be included in any mathematics curriculum that claims to be preparing students for full participation in a technology-based age of information. The second concerns the possibility of using systems theory in general, and complexity theory in particular, to develop models to explain the development of students' mathematical thinking in future-oriented learning or problem solving situations.

How to Choose Between Policy Proposals: A Simple Tool Based on Systems Thinking and Complexity Theor...
January 2013 · Emergence: Complexity & Organization
Steven Wallis
Complexity and systems approaches can be applied for the creation and evaluation of policy proposals. However, those approaches are difficult to learn and use. Therefore, those conceptual tools are not available to the general public. If citizens were able to analyze policies for themselves with relative ease, they would gain a powerful tool for choosing and improving policy. In this paper, I present a relatively simple method that can be used to measure the structure (complexity and co-causal relationships) of competing policies. I demonstrate this method by conducting a detailed comparison of two economic policies that have been put forth by competing political parties. The results show clear differences between the policies that are not visible through other forms of analysis. Thus, this method serves as a "David's sling": a simple tool that can empower individuals and organizations to have a greater influence on the policy process.

The 10th J. A. F. Stevenson Memorial Lecture: Outline of a physical theory of physiological systems
April 1982 · Canadian Journal of Physiology and Pharmacology
F. Eugene Yates
I examine the scientific status of "organismic biology" and find it to be weak and vitalistic. I propose that we now need integrative theories based on a physical approach to biology. Certain technical terms about systems, regulation and control, and the minimum requirements of a physical theory for an organism are defined. Homeostasis, the earlier first theory of physiological stability, is shown to be incomplete. I then introduce a physical theory for complex systems (homeokinetics) based upon statistical mechanics, nonlinear dynamics, and irreversible thermodynamics. The question of whether or not biological systems lie within the domain reachable by such physical analysis is considered in detail, and answered in the affirmative. I then give highlights of the physical theory to be invoked. Following the work of A. S. Iberall, I introduce a generalized stress tensor that shows in what way complex systems that internalize most of their transformations of energy can be examined from a statistical mechanic...
\begin{document} \begin{frontmatter} \title{Tail processes for stable-regenerative multiple-stable model} \runtitle{Tail processes for stable-regenerative model} \begin{aug} \author[A]{\fnms{Shuyang} \snm{Bai}\ead[label=e1]{[email protected]}}, \and \author[B]{\fnms{Yizao} \snm{Wang}\ead[label=e2]{[email protected]}} \address[A]{Department of Statistics, University of Georgia, 310 Herty Drive, Athens, GA, 30602, USA. \printead{e1}} \address[B]{Department of Mathematical Sciences, University of Cincinnati, 2815 Commons Way, Cincinnati, OH, 45221-0025, USA. \printead{e2}} \end{aug} \begin{abstract} We investigate a family of discrete-time stationary processes defined by multiple stable integrals and renewal processes with infinite means. The model may exhibit behaviors of short-range or long-range dependence, respectively, depending on the parameters. The main contribution is to establish a phase transition in terms of the tail processes that characterize local clustering of extremes. Moreover, in the short-range dependence regime, the model provides an example where the extremal index is different from the candidate extremal index. \end{abstract} \begin{keyword}[class=MSC] \kwd[Primary ]{60G70} \kwd[; secondary ]{60G52} \kwd{60K05} \end{keyword} \begin{keyword} \kwd{extremal index} \kwd{long-range dependence} \kwd{multiple integral} \kwd{phase transition} \kwd{regular variation} \kwd{renewal process} \kwd{short-range dependence} \kwd{tail process} \end{keyword} \end{frontmatter} \section{Introduction and main results}\label{sec:1} \subsection{The model and background} The objective of this paper is to study the local behavior of extremes of a family of stationary stochastic processes known as the stable-regenerative multiple-stable model that has attracted attention in the studies of stochastic processes with {\em long-range dependence} \citep{samorodnitsky16stochastic,pipiras17long,beran13long}. 
In an accompanying paper \citep{bai21phase} the macroscopic/global limit of extremes is established in terms of convergence of the random sup-measures in the framework of \citet{obrien90stationary}, and a phase transition is revealed. Here, we characterize the microscopic/local limit of extremes, in terms of the tail processes as introduced by \citet{basrak09regularly}. The family of processes of our interest has a tail parameter $\alpha\in(0,2)$, a memory parameter $\beta\in(0,1)$, and a multiplicity parameter $p\in{\mathbb N}:=\{1,2,\ldots\}$. The representation is intrinsically related to renewal processes, for which we introduce some notation. Consider a discrete-time renewal process with the consecutive renewal times denoted by $\vv\tau:=\{\tau_0,\tau_1,\dots\} \subset {\mathbb N}_0:=\{0,1,2,\ldots\}$. Here $\tau_0$ is the initial renewal time, and the inter-renewal times $(\tau_{i}-\tau_{i-1})_{i\ge 1}$ are i.i.d.\ ${\mathbb N}$-valued with cumulative distribution function $F$, that is, $F(x) = \mathbb P(\tau_{i}-\tau_{i-1}\le x), i\in{\mathbb N}, x\ge 0$. Denote the probability mass function by $f(n) = \mathbb P(\tau_{i}-\tau_{i-1} = n), n\in{\mathbb N}$. Throughout, we assume \begin{equation}\label{eq:F} \wb F(x) = 1-F(x) \sim \mathsf C_Fx^{-\beta} \mbox{ as } x\to\infty \quad\mbox{ with }\quad \beta\in(0,1), \end{equation} which implies an infinite mean, and the following technical assumption \begin{equation}\label{eq:Doney} \sup_{n\in{\mathbb N}}\frac{nf(n)}{\wb F(n)}<\infty. \end{equation} By default, a renewal process starts with a renewal at time 0, and hence $\tau_0 = 0$. Note that our renewal processes may be {\em delayed}, that is, $\tau_0$ is not necessarily zero, and may be a random variable in ${\mathbb N}_0 = {\mathbb N}\cup\{0\}$, and we shall be specific when this is the case. An important notion is the {\em stationary delay measure} of the renewal process, denoted by $\pi$.
More precisely, $\pi$ is supported on ${\mathbb N}_0$ with
\[
\pi(k)\equiv \pi(\{k\}) = \wb F(k) = 1-F(k), \quad k\in{\mathbb N}_0.
\]
(For the sake of simplicity, we do not distinguish $\pi(k)$, the mass function at $k\in{\mathbb N}_0$, from $\pi(\{k\})$, the measure evaluated at the set $\{k\}$.) Note that the stationary delay measure $\pi$ is a $\sigma$-finite and infinite measure on ${\mathbb N}_0$, since the renewal distribution has infinite mean. More details are in Section \ref{sec:background}. We now provide a series representation of the model. Consider
\begin{equation}\label{eq:series one-sided}
\sif i1\ddelta{y_i,d_i}\eqd {\rm PPP}\pp{(0,\infty]\times{\mathbb N}_0, \alpha x^{-\alpha-1}dxd\pi},
\end{equation}
where the right-hand side is understood as the Poisson point process on $(0,\infty]\times {\mathbb N}_0$ with intensity measure $\alpha x^{-\alpha-1}dxd\pi$. In addition, let $\{\vv\tau\topp{i,d_i}\}_{i\in{\mathbb N}}$ denote, given the above Poisson point process, conditionally independent delayed renewal processes, each $\vv\tau\topp{i,d_i}$ with initial renewal time $\tau_0 = d_i$ and inter-renewal times following $F$. In this paper, we consider the {\em stable-regenerative multiple-stable model} given by
\begin{equation}\label{eq:series infty p}
\ccbb{X_k}_{k\in{\mathbb N}_0} \eqd\ccbb{\sum_{0<i_1<\cdots<i_p} [\varepsilon_{\boldsymbol i}][y_{\boldsymbol i}]\inddd{k\in \bigcap_{r=1}^p\vv\tau\topp{i_r,d_{i_r}}}}_{k\in{\mathbb N}_0},
\end{equation}
where $\{\varepsilon_i\}_{i\in{\mathbb N}}$ are i.i.d.\ Rademacher random variables independent of the point process in \eqref{eq:series one-sided}, $[\varepsilon_{\boldsymbol i}] = \varepsilon_{i_1}\times\cdots\times \varepsilon_{i_p}$, and similar notation applies for $[y_{\boldsymbol i}]$.
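Assumptions \eqref{eq:F} and \eqref{eq:Doney} are easy to satisfy in concrete cases. As a purely illustrative sketch (the specific law below is our choice and is not part of the model), the discrete Pareto-type distribution with $\wb F(n) = (n+1)^{-\beta}$ satisfies \eqref{eq:F} with $\mathsf C_F = 1$, and the ratio in \eqref{eq:Doney} stays bounded (in fact it converges to $\beta$):

```python
# Illustrative example (ours, not part of the model): the discrete
# Pareto-type law with survival function Fbar(n) = P(T > n) = (n+1)**(-beta)
# has mass function f(n) = n**(-beta) - (n+1)**(-beta) on n = 1, 2, ...
# It satisfies (eq:F) with C_F = 1, and the ratio in (eq:Doney) is bounded.

beta = 0.6  # memory parameter in (0, 1); an arbitrary choice

def Fbar(n):
    """Survival function P(T > n), n >= 0."""
    return (n + 1) ** (-beta)

def f(n):
    """Probability mass function P(T = n), n >= 1."""
    return n ** (-beta) - (n + 1) ** (-beta)

# (eq:F): n**beta * Fbar(n) -> C_F = 1 as n grows.
ratio_F = [n ** beta * Fbar(n) for n in (10, 100, 1000, 10000)]

# (eq:Doney): n * f(n) / Fbar(n) stays bounded (it converges to beta here).
ratio_doney = [n * f(n) / Fbar(n) for n in range(1, 10001)]

print(ratio_F[-1], max(ratio_doney))
```

The bounds are of course checked here only over a finite range; the asymptotic statements follow from a Taylor expansion of $n^{-\beta}-(n+1)^{-\beta}$.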
The name `stable-regenerative' comes from the fact that each renewal process $\vv\tau$, say non-delayed, has a scaling limit as a $\beta$-stable-regenerative random closed set \citep{giacomin07random,bertoin99subordinators}. The name `multiple-stable' comes from the fact that the series representation above corresponds to a multiple stochastic integral with respect to a stable random measure. See \citep{bai20functional} for our model described explicitly in a multiple-stable-integral representation. We shall work with the series representation only, so we omit the stochastic-integral representation here. The investigation of multiple stochastic integrals has a long history, dating from the celebrated work of \citet{ito51multiple} for the Gaussian case (a.k.a.~Wiener--It\^o integrals). See \citep{krakowiak86random, szulga91multiple,kwapien92random,rosinski99product} for their extensions to the non-Gaussian case, and \citep{samorodnitsky89asymptotic} for series representations. It is worth noting that the exclusion of the diagonals in the multiple stable integral concurs with the exclusion of the diagonals in the multiple sum in \eqref{eq:series infty p}. Our model is actually a simplified version of the one investigated in \citep{bai20functional}; it nevertheless preserves the key features, and the renewal-process point of view facilitates our analysis. When $p=1$, the model $\{X_k\}_{k\in{\mathbb N}_0}$ in \eqref{eq:series infty p} is a stationary (non-Gaussian) stable process and was first introduced in \citep{rosinski96classes}. It exhibits non-standard asymptotic behaviors in terms of limit theorems for sums and extremes as revealed in \citep{owada15functional,samorodnitsky19extremal}, and is hence viewed as a model with long-range dependence.
For $p\ge 2$, a functional central limit theorem has been established recently in \citep{bai20functional}, under the assumption
\begin{equation}\label{eq:beta_p}
\beta_p:=p\beta -p+1\in(0,1),
\end{equation}
with a new family of self-similar multiple-stable processes with stationary increments arising in the limit (see also \citep{bai20limit,bai22limit} for variations of the model \eqref{eq:series infty p} that scale to multiple-Gaussian processes known as the Hermite processes). The functional central limit theorem indicates that under the assumption \eqref{eq:beta_p}, the process exhibits behaviors of long-range dependence. Our motivation is to understand the limit behavior of extremes for all possible values of $\beta$ and $p$. Note that with $p=1$, necessarily $\beta_p = \beta\in(0,1)$, and hence one only encounters the regime of long-range dependence: an extremal limit theorem in terms of random sup-measures \citep{obrien90stationary} has been established in \citep{samorodnitsky19extremal}, where the limit random sup-measures exhibit non-trivial dependence structure and in particular their marginal laws may no longer belong to the classical extreme-value distributions when $\beta> 1/2$. It was also shown that when $\beta_p<0$ the process no longer exhibits long-range dependence, with the limit random sup-measure being independently scattered \citep{bai21phase}. Here, we examine the microscopic limit of extremes by investigating the tail processes for the full range of parameters. For a general stationary process with regularly varying tails $\{X_k\}_{k\in{\mathbb N}_0}$, its tail process characterizes the possible local clustering of extremes, and arises in a conditional limit theorem as
\begin{equation}\label{eq:Y}
\mathcal L\pp{\frac{X_0}{x},\dots,\frac{X_m}{x}\;\middle\vert\; |X_0|>x}\to \mathcal L(Y_0,\dots,Y_m), \mbox{ for all } m\in{\mathbb N},
\end{equation}
as $x\to\infty$.
The left-hand side is understood as the conditional joint law of $(X_i/x)_{i=0,\dots,m}$ given $|X_0|>x$, and the right-hand side the joint law of $(Y_0,\dots,Y_m)$. If the above holds, then $\{Y_i\}_{i\in{\mathbb N}_0}$ is referred to as the {\em tail process} of $\{X_i\}_{i\in{\mathbb N}_0}$. The tail process was originally introduced by \citet{basrak09regularly}. Some closely related ideas date back to at least \citet{davis95point,davis98sample} (see references therein for earlier developments) in the characterization of local clustering of extremes via point-process convergence. See also \citet{kulik20heavy} for the state of the art on the topic. There are several equivalent characterizations of the tail process. In particular, equivalently to \eqref{eq:Y}, one has
\[
\mathcal L\pp{\frac{X_0}{|X_0|},\dots,\frac{X_m}{|X_0|}\;\middle\vert\; |X_0|>x}\to \mathcal L(\Theta_0,\dots,\Theta_m), \mbox{ for all } m\in{\mathbb N},
\]
and the process $\vv\Theta=\{\Theta_i\}_{i\in{\mathbb N}_0}$ is referred to as the {\em spectral tail process}. In such a case,
\[
\ccbb{Y_i}_{i\in{\mathbb N}_0}\eqd \ccbb{V_\alpha\Theta_i}_{i\in{\mathbb N}_0},
\]
where $V_\alpha$ is an $\alpha$-Pareto random variable ($\mathbb P(V_\alpha>x) = x^{-\alpha}, x\ge 1$) independent from $\{\Theta_i\}_{i\in{\mathbb N}_0}$. Furthermore, $\vv\Theta$, when it exists, can be uniquely extended to a ${\mathbb Z}$-indexed stochastic process.
\subsection{Main result: a phase transition}
We first describe the spectral tail processes that will appear in the limit.
Let $\vv\Theta^*\equiv \{\Theta_k^*\}_{k\in{\mathbb N}_0}$ be a $\{0,1\}$-valued sequence defined as follows: let $\{\vv\tau\topp i\}_{i=1,\dots,p}$ denote i.i.d.~copies of a standard (non-delayed) renewal process, with the inter-renewal distribution function $F$ as in \eqref{eq:F}, and consider \begin{equation}\label{eq:Theta*} \Theta_{k}^* :=\begin{cases} 1, & \mbox{ if } k\in\vv\eta,\\ 0, & \mbox{ otherwise}, \end{cases} \quad k=0,1,\dots \quad\mbox{ with }\quad\vv\eta :=\bigcap_{r=1}^p \vv\tau\topp r. \end{equation} In particular, $\Theta_0^* = 1$ since $0\in\vv\tau\topp r, r=1,\dots,p$ by definition. Moreover, let $\varepsilon$ be a Rademacher random variable independent from $\vv\Theta^*$. We set \begin{equation}\label{eq:Theta} \vv\Theta := \varepsilon\vv\Theta^* = \{\varepsilon\Theta_i^*\}_{i\in{\mathbb N}_0}. \end{equation} We emphasize that the law of the spectral tail process is completely determined by $F$ and $p$. In fact, the intersection $\vv\eta$ is again a non-delayed (i.e., $0\in\vv\eta$) renewal process. Note that the larger $p$ is, the smaller/sparser the intersection set $\vv\eta$ becomes. In particular, $\vv\eta$ is possibly terminating, namely, $\eta_1 = \infty$ with strictly positive probability; this is the case when the renewal distribution of $\vv\eta$ has a mass at infinity, and we write $\vv\eta = \{0,\eta_1,\dots,\eta_k\}$ if the $(k+1)$-th renewal time is the first time with value infinity. The renewal process $\vv\eta$ is terminating if and only if $\beta_p<0$, where $\beta_p$ is as in \eqref{eq:beta_p}. A quick derivation can be found in Section \ref{sec:EI} below (see the discussions after \eqref{eq:qFp}). The main result of the paper is the following. 
\begin{Thm}\label{thm:tail}
For all $m\in{\mathbb N}$,
\[
\mathcal L\pp{{\frac{X_0}{|X_0|},\dots,\frac{X_m}{|X_0|}}\;\middle\vert\; |X_0|>x} \to \mathcal L\pp{\varepsilon\Theta_0^*,\dots,\varepsilon\Theta_m^*},
\]
as $x\to\infty$, with the right-hand side as introduced in \eqref{eq:Theta}.
\end{Thm}
\begin{Rem}
Theorem \ref{thm:tail} complements our results in \citep{bai21phase} so that now we have a complete picture regarding limit extremes at both macroscopic and microscopic levels, as summarized in Table \ref{table:1}. In the table, the extremal index (EI) and the candidate extremal index are two well-known notions in extreme-value theory for stationary sequences. See more background and discussions in Section \ref{sec:EI} below.
\begin{table}[ht!]
\begin{tabular}{|c|c|c|c|c|}
\hline
regime & tail process & limit random sup-measure & candidate EI & EI\\
 & (microscopic) & (macroscopic) & $\vartheta$ & $\theta$ \\
\hline
super-critical, $\beta_p>0$ & non-terminating & with long-range dependence & 0 & 0 \\
critical, $\beta_p=0$ & non-terminating & independently scattered & 0 & 0 \\
sub-critical, $\beta_p < 0$ & terminating & independently scattered & $\mathfrak q_{F,p}$ & $\mathsf D_{\beta,p}\mathfrak q_{F,p}$\\
\hline
\end{tabular}
\caption{Summary of phase transition. See $\mathfrak q_{F,p}$ and $\mathsf D_{\beta,p}$ in \eqref{eq:qFp} and \eqref{eq:shape D} below.}\label{table:1}
\end{table}
At the microscopic level, Theorem \ref{thm:tail} reveals that the super-critical ($\beta_p>0$) and critical ($\beta_p = 0$) regimes have the same type of asymptotic behavior, in the sense that the tail processes are not terminating; while at the macroscopic level, as described in \citep{bai21phase}, the critical ($\beta_p = 0$) and sub-critical ($\beta_p < 0$) regimes have the same type of asymptotic behavior, with independently scattered $\alpha$-Fr\'echet random sup-measures arising in the limit.
On the other hand, the limit random sup-measures in the regime $\beta_p>0$ are of a new type, extending the one characterized in \citep{samorodnitsky19extremal}. They again exhibit long-range dependence and, unless $p = 1$ and $\beta<1/2$, their marginal laws go beyond the family of classical extreme-value distributions due to an aggregation effect (see \citep{wang22choquet} for an explanation when $p=1$). In contrast, the convergence of tail processes reveals a more delicate picture of local behaviors when $\beta_p\le 0$. At the same time, the tail process in the super-critical regime $\beta_p>0$ is still of a local nature, as it is impossible to recover from the tail processes the random sup-measures that arise in the macroscopic limit obtained in \citep{bai21phase}. It is also remarkable that in the critical regime, while the macroscopic limit is independently scattered, the same as in the sub-critical regime, the normalization is {\em not the same}; this is related to the fact that the tail process in the critical regime is again non-terminating, reflecting the infinite size of the local cluster of extremes.
\end{Rem}
\subsection{A notable example: when the candidate extremal index differs from the extremal index}\label{sec:EI}
A widely investigated notion for regularly-varying stationary stochastic processes is the {\em extremal index}. A closely related notion is the {\em candidate extremal index}. For both, our reference is again \citet{kulik20heavy}. We first recall the definitions and examine the stable-regenerative model in the sub-critical regime. Throughout this subsection, we assume $\beta_p<0$. In this case, the spectral tail process $\vv \Theta$ is terminating.
Then the {\em candidate extremal index} $\vartheta$ is (writing $a_+ = \max(a,0)$ for $a\in{\mathbb R}$)
\begin{equation}\label{eq:our var}
\vartheta := {\mathbb E}\pp{\sup_{i\ge 0}(\Theta_i)_+^\alpha - \sup_{i\ge 1}(\Theta_i)_+^\alpha \;\middle\vert\; \Theta_0>0 } = \mathfrak q_{F,p},
\end{equation}
with
\begin{equation}\label{eq:qFp}
\mathfrak q_{F,p} := \mathbb P(\eta_1 = \infty) = \left(\sif n0 u(n)^p\right)^{-1} = \lim_{n\to\infty}\wb F_p(n)\in(0,1),
\end{equation}
where $u(n) = \mathbb P(n\in\vv\tau)$ (see \eqref{eq:u} below) and $\wb F_p(n) = \mathbb P(\eta_1>n)$. (See Remark \ref{rem:RT} on our convention of candidate extremal indices.) To compute $\vartheta = \mathfrak q_{F,p}$, it is key to observe that $(\Theta_0)_+=\varepsilon_+$, that $(\Theta_i)_+= \varepsilon_+ \Theta_i^* $ is $\{0,1\}$-valued for all $i\ge 1$, and that $\varepsilon$ and $\vv\Theta^*$ are independent, whence $\vartheta = \mathbb P\pp{\Theta_i^* = 0, \mbox{ for all } i\in{\mathbb N} \ |\ \varepsilon=1}=\mathbb P\pp{\Theta_i^* = 0, \mbox{ for all } i\in{\mathbb N} } = \mathbb P(\eta_1 = \infty)$. Moreover, the sequence $\{\Theta_i^*\}_{i\in{\mathbb N}_0}$ is all zeros except a finite number, say $\mathfrak G$, of ones, and, by the strong Markov property, $\mathfrak G$ is geometric with
\begin{equation}\label{eq:G}
\mathbb P(\mathfrak G = k) = \mathbb P(\eta_1 = \infty)\mathbb P(\eta_1 < \infty)^{k-1}, k\in{\mathbb N},
\end{equation}
and ${\mathbb E} \mathfrak G = \mathfrak q_{F,p}^{-1}$. On the other hand, ${\mathbb E} \mathfrak G = \sif n0 \mathbb P(n\in\vv\eta) = \sif n0\mathbb P(n\in\vv\tau)^p\in (0,\infty)$, whence \eqref{eq:qFp} follows.
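For a numerical illustration of \eqref{eq:qFp} (a sketch of ours, using the hypothetical inter-renewal law $\wb F(n) = (n+1)^{-\beta}$, a choice satisfying \eqref{eq:F}), one can compute the renewal mass function $u$ through the standard renewal recursion $u(0)=1$, $u(n) = \sum_{k=1}^n f(k)u(n-k)$, and truncate the series $\sum_n u(n)^p$:

```python
# Illustrative evaluation (ours) of q_{F,p} = (sum_{n>=0} u(n)^p)^(-1)
# from (eq:qFp), in the sub-critical regime beta_p = p*beta - p + 1 < 0.
# Hypothetical inter-renewal law: Fbar(n) = (n+1)**(-beta).

beta, p = 0.3, 2      # then beta_2 = 2*0.3 - 1 = -0.4 < 0 (sub-critical)
N = 2000              # truncation level for the series

# mass function f(n) = P(inter-renewal time = n), n = 1, ..., N
f = [0.0] + [n ** (-beta) - (n + 1) ** (-beta) for n in range(1, N + 1)]

# renewal recursion: u(0) = 1, u(n) = sum_{k=1}^{n} f(k) * u(n-k)
u = [1.0] + [0.0] * N
for n in range(1, N + 1):
    u[n] = sum(f[k] * u[n - k] for k in range(1, n + 1))

q = 1.0 / sum(x ** p for x in u)  # approximates P(eta_1 = infinity)
mean_cluster = 1.0 / q            # E[G]: mean number of ones in Theta^*

print(q, mean_cluster)
```

Since $u(n)^p\sim \mathrm{const}\cdot n^{\beta_p-1}$, the truncation removes a positive tail of the series, so the printed value approximates $\mathfrak q_{F,p}$ from above, with an error of order $N^{\beta_p}$.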
Next, the {\em extremal index} $\theta$, if it exists, can be defined as the unique value in $[0,1]$ such that the following convergence holds
\begin{equation}\label{eq:EI def}
\lim_{n\to\infty}\mathbb P\pp{\frac1{b_n}\max_{k=1,\dots,n}X_k \le x} = \exp\pp{-\theta x^{-\alpha}}, \quad x>0,
\end{equation}
where $b_n$ is such that (see \eqref{eq:top_dominates} below)
\begin{equation}\label{eq:b_n}
\lim_{n\to\infty} n\mathbb P(X_1>b_n) = 1, \quad \mbox{ and in fact }\quad b_n \sim \pp{\frac12\frac{n\log^{p-1}n}{p!(p-1)!}}^{1/\alpha}
\end{equation}
as $n\rightarrow\infty$. For our model, it has been proved in \citep{bai21phase} that
\[
\theta = \mathsf D_{\beta,p}\mathfrak q_{F,p},
\]
with
\begin{equation}\label{eq:shape D}
\mathsf D_{\beta,p} := \sum_{s=q_{\beta,p}}^p\binom ps(-1)^{p-s}(-\beta_s)^{p-1} \quad\mbox{ with }\quad q_{\beta,p} := \min\{q\in{\mathbb N}:\beta_q<0\},
\end{equation}
where, as in \eqref{eq:beta_p}, $\beta_s := s\beta-s+1$ for $s\in{\mathbb N}$. This follows from the convergence of random sup-measure established in \citep{bai21phase}, formally,
\begin{equation}\label{eq:RSM limit}
\frac1{b_n}\ccbb{\max_{k/n\in G}X_k}_{G\in\mathcal G}\xrightarrow{\textit{f.d.d.}} (\mathsf D_{\beta,p}\mathfrak q_{F,p})^{1/\alpha}\ccbb{\mathcal M_\alpha^{\rm is}(G)}_{G\in\mathcal G},
\end{equation}
where $\mathcal G$ is the collection of all open intervals of $[0,1]$ and $\mathcal M_\alpha^{\rm is}$ is the {\em independently scattered} $\alpha$-Fr\'echet random sup-measure. Indeed, \eqref{eq:EI def} is the special case of marginal convergence with $G=(0,1)$.
\begin{Rem}\label{rem:RT}
We have used a slightly different convention compared to \cite{kulik20heavy}. In fact, our candidate extremal index $\vartheta$ is the same as the \emph{right-tail} candidate extremal index in \citep[Eq.~(7.5.4b)]{kulik20heavy}. The so-called candidate extremal index in \citep{kulik20heavy} is instead given by (see \citep[Eq.~(5.6.5)]{kulik20heavy})
\[
\vartheta^\circ = {\mathbb E}\pp{\sup_{i\ge 0}|\Theta_i|^\alpha - \sup_{i\ge 1}|\Theta_i|^\alpha}.
\]
The right-tail index (our $\vartheta$) concerns only positive extreme values, while the index $\vartheta^\circ$ above concerns extreme values in absolute values. For our model, one readily checks that $\vartheta$ and $\vartheta^\circ$ happen to coincide.
\end{Rem}
It is clear from the definitions above that the extremal index $\theta$ and the candidate extremal index $\vartheta$ are characteristics of the macroscopic and microscopic behaviors of extremes, respectively. A priori these two values are not necessarily equal, although they can be shown to be equal under an anti-clustering condition and a mixing condition (see Remark \ref{rem:PP} below). However, we are not aware of any other example of a regularly varying stochastic process such that $\theta$ and $\vartheta$ are both strictly between 0 and 1 and yet not the same. Therefore, our model is of special interest, and we elaborate more on the underlying mechanism for this rare phenomenon from the following two aspects.
\begin{enumerate}[(i)]
\item First, we provide a simplified computation of the extremal index for the case $p=2$ in Section \ref{sec:p=2}. The limit theorem \eqref{eq:RSM limit} above is established in \citep{bai21phase} by a different and much more involved proof. The proof in Section \ref{sec:p=2}, however, does not apply for $p\ge 3$. We hope the presentation here sheds light on the very unusual dependence structure of the model.
\item Second, we prove that the so-called anti-clustering condition holds, that is,
\begin{equation}\label{eq:AC}
\lim_{\ell\to\infty}\limsup_{n\to\infty}\mathbb P\pp{\max_{\ell\le |k|\le r_n}|X_k|>b_n\eta\;\middle\vert\; |X_0|>b_n\eta} = 0, \mbox{ for all } \eta>0,
\end{equation}
for $r_n\rightarrow\infty$, $r_n = o(\log b_n)$, in Section \ref{sec:anti}. (We actually prove a stronger version of it, known as the $\mathcal S$ condition in the literature.)
This, combined with the fact that $\vartheta\ne\theta$, implies immediately that the commonly applied mixing-type condition in the classical approach fails for our model, at least for block sizes $r_n = o(\log b_n)$. See Remark \ref{rem:PP} below for more discussion.
\end{enumerate}
\begin{Rem}\label{rem:PP}
In the classical approach, in addition to the convergence of the tail processes and the verification of the anti-clustering condition \eqref{eq:AC}, if one could also verify for our model the condition
\begin{equation}\label{eq:mixing}
\lim_{n\to\infty} {\mathbb E}\exp\pp{-\summ i1n f(X_i/b_n)} - \pp{{\mathbb E}\exp\pp{-\summ i1{r_n}f(X_i/b_n)}}^{\floor{n/r_n}} = 0,
\end{equation}
with $b_n$ as in \eqref{eq:b_n} and $r_n$ as in \eqref{eq:AC}, then it follows that
\begin{equation}\label{eq:PP}
\summ i1n \ddelta{X_i/b_n,i/n}\inddd{X_i>0}\Rightarrow \sif i1\mathfrak G_i\ddelta{\vartheta^{1/\alpha}\Gamma_i^{-1/\alpha},U_i}
\end{equation}
in the space $\mathfrak M_p((0,\infty]\times[0,1])$ of point measures, where $\{\Gamma_i\}_{i\in{\mathbb N}}$ are consecutive arrival times of a standard Poisson process, $\{U_i\}_{i\in{\mathbb N}}$ are i.i.d.~uniform random variables over $(0,1)$, $\{\mathfrak G_i\}_{i\in{\mathbb N}}$ are i.i.d.~copies of $\mathfrak G$ in \eqref{eq:G}, and all families are independent (see \citep[Corollary 7.3.4]{kulik20heavy}). The convergence \eqref{eq:PP} would imply the extremal index $\theta = \vartheta$, whence a contradiction. Thus, in the context of our model, the relation \eqref{eq:mixing} fails to hold for $r_n\rightarrow\infty$ and $r_n=o(\log b_n)$. The idea of the classical approach is as follows. Within each block of size $r_n$, the local asymptotics is fully characterized by the tail processes, and different blocks behave asymptotically independently due to the condition \eqref{eq:mixing}. Usually, the condition \eqref{eq:mixing} follows from certain strong mixing properties (e.g.\ $\beta$-mixing; see \citep[Section 7.4.1]{kulik20heavy}).
Our results indicate that our model does not enjoy very strong mixing properties. Nevertheless, we expect to be able to prove \eqref{eq:PP} with $\vartheta^{1/\alpha}$ on the right-hand side replaced by $\theta^{1/\alpha}$. This is left for upcoming work.
\end{Rem}
We conclude the introduction with a few more remarks.
\begin{Rem}
Note that $\theta = \mathsf D_{\beta,p}\vartheta$, and here we collect a couple of facts on $\mathsf D_{\beta,p}$:
\begin{enumerate}[(i)]
\item for all $\beta\in(0,1)$ such that $\beta_p<0$, $\mathsf D_{\beta,p}\in(0,1)$,
\item for all $\beta\in(0,1/2)$, $\mathsf D_{\beta,p} = 1-p\beta^{p-1}$. In particular, $\lim_{\beta\downarrow 0}\mathsf D_{\beta,p} = 1$.
\end{enumerate}
To see the first, introduce
\[
f_p(x) = \frac1{(p-1)!}\summ s0p \binom ps (-1)^{p-s}(s-x)_+^{p-1} = \frac1{(p-1)!}\summ s0p(-1)^{s}\binom ps(x-s)_+^{p-1},
\]
which is the probability density function of the so-called Irwin--Hall distribution, the one for the sum of $p$ i.i.d.~uniform random variables over $(0,1)$. Then we can write
\[
\mathsf D_{\beta,p} = (p-1)!(1-\beta)^{p-1}f_p((1-\beta)^{-1})>0, \mbox{ for all } \beta\in(0,1-1/p),
\]
or exactly when $\beta_p<0$. To show that $\mathsf D_{\beta,p}<1$, it is equivalent to show that $f_p(x) <((p-1)!)^{-1} x^{p-1}$ for all $x>1$. But recall that $f_p(x)$ is the $p$-fold convolution of the uniform density over $(0,1)$ evaluated at $x$, while $\frac1{(p-1)!}x^{p-1}$ is the $p$-fold convolution of the indicator function of $[0,\infty)$ evaluated at $x$. The desired relation now follows. To see the second, first recall that for any polynomial function $Q(s)$ with degree at most $p-1$, one has $\summ s0p(-1)^{p-s}\binom ps Q(s) = 0$ since this corresponds to a $p$-times differencing operation. Then, take $Q(s) = (-\beta_s)^{p-1} = (s(1-\beta)-1)^{p-1}$ here.
For $\beta\in(0,1/2)$, $q_{\beta,p} = 2$, and it then follows that
\[
\mathsf D_{\beta,p} = \summ s2p (-1)^{p-s}\binom ps(-\beta_s)^{p-1} = -(-1)^pQ(0)-(-1)^{p-1}pQ(1) = 1-p\beta^{p-1}.
\]
The fact $\mathsf D_{\beta,p}\in (0,1)$, which implies $\theta<\vartheta$, is consistent with \citet[Lemma 7.5.4]{kulik20heavy}. We note that although the lemma there is stated only for ${\mathbb R}_+$-valued processes, the proof readily extends to real-valued processes with the right-tail (cf.\ Remark \ref{rem:RT}) candidate extremal index $\vartheta$ in \eqref{eq:our var}.
\end{Rem}
\begin{Rem}
In the critical regime $\beta_p = 0$, we have, in place of \eqref{eq:RSM limit},
\[
\lim_{n\to\infty}\mathbb P\pp{\frac1{\wt b_n}\max_{k=1,\dots,n}X_k \le x} = \exp\pp{-\wt \theta x^{-\alpha}} \quad\mbox{ with }\quad \wt b_n = \pp{\frac{n(\log\log n)^{p-1}}{\log n}}^{1/\alpha},
\]
and the value $\wt\theta>0$ is explicitly computed in \citep{bai21phase}. Since $b_n /\wt b_n\rightarrow \infty$ as $n\rightarrow\infty$ with $b_n$ as in \eqref{eq:b_n}, the convergence above implies that the extremal index $\theta$ defined via \eqref{eq:EI def} is zero. One may argue in the same way that in the super-critical regime, the extremal index is again $\theta = 0$.
\end{Rem}
\begin{Rem}
Our example shows that the candidate extremal index should be viewed as a local statistic only. In this regard, we recall that although it is commonly interpreted as the reciprocal of the mean cluster size, this is not always the case. In particular, \citet{smith88counterexample} provided an example where the extremal index is less than 1, while there is no extremal clustering from a point-process-convergence perspective. As further elaborated in \citep[Example 14.4.5]{kulik20heavy}, in this example the candidate extremal index and the extremal index are the same.
\end{Rem}
{\em The paper is organized as follows.} Section \ref{sec:background} provides related background, notably on multiple-stable processes and renewal processes. Section \ref{sec:tail} proves Theorem \ref{thm:tail}. Section \ref{sec:anti} proves the anti-clustering condition and the convergence of the cluster point process when $\beta_p<0$. Section \ref{sec:p=2} provides a computation of the extremal index with $p=2$, $\beta\in(0,1/2)$.
\section{Preliminary results}\label{sec:background}
\subsection{Renewal processes with infinite mean}
Throughout, our references on renewal processes are \citet[Appendix A.5]{giacomin07random} and \citet{bingham87regular}. Besides the notation and properties of renewal processes in Section \ref{sec:1}, we shall also use the renewal mass function of a non-delayed renewal process $\vv\tau$, defined as
\begin{equation}\label{eq:renewal mass}
u(k) := \mathbb P(k\in \vv\tau),\ k\in{\mathbb N}_0.
\end{equation}
It is known that the assumption
\begin{equation}\label{eq:u}
u(k)\sim \frac{k^{\beta-1}}{\mathsf C_F\Gamma(\beta)\Gamma(1-\beta)},
\end{equation}
as $k\rightarrow\infty$ implies the assumption \eqref{eq:F}, and that under the assumption \eqref{eq:Doney} the two are equivalent (see \citep{doney97onesided}). Note that $\vv\tau$ is null-recurrent, and the stationary delay measure associated with the inter-renewal distribution $F$ can be taken as $\pi(k) = \wb F(k), k\in{\mathbb N}_0$ (more generally one may set $\pi(k) = C\wb F(k)$, and we choose the constant $C=1$ for simplicity). The meaning of the stationary delay measure is explained in Section \ref{sec:two-sided}, where, for the sake of completeness, we present a proof that a system of {\em two-sided renewal processes} is shift-invariant. Next, we recall some properties of the intersected renewal process $\vv\eta = \bigcap_{r=1}^p\vv\tau\topp r = \{0,\eta_1,\eta_2,\dots\} $ as introduced in \eqref{eq:Theta*}.
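As a quick Monte Carlo illustration of this intersection (a sketch of ours; the inter-renewal law $\wb F(n) = (n+1)^{-\beta}$, sampled by inversion, is a hypothetical choice satisfying \eqref{eq:F}), one can generate $p$ independent renewal processes on a finite window and intersect them; when $\beta_p<0$, the intersection is typically a small finite set clustered near the origin:

```python
# Illustrative Monte Carlo sketch (ours) of the intersected renewal process
# eta = tau^(1) \cap ... \cap tau^(p) on a finite window {0, ..., T}.
# Hypothetical inter-renewal law: P(step > n) = (n+1)**(-beta).
import random

beta, p, T = 0.3, 2, 10**4   # beta_2 = -0.4 < 0: sub-critical regime
rng = random.Random(42)      # fixed seed for reproducibility

def renewal_window(T):
    """Renewal set of one non-delayed renewal process, restricted to {0, ..., T}."""
    times, t = {0}, 0
    while t <= T:
        # inversion sampling: step = max(1, floor(U**(-1/beta)))
        t += max(1, int(rng.random() ** (-1.0 / beta)))
        times.add(t)
    return {s for s in times if s <= T}

# intersect p i.i.d. renewal processes: a realization of eta from (eq:Theta*)
eta = set.intersection(*(renewal_window(T) for _ in range(p)))
print(sorted(eta))
```

Note that $0\in\vv\eta$ by construction, and on a finite window one can only observe whether $\vv\eta$ has returned yet; the termination probability itself is the quantity $\mathfrak q_{F,p}$ of \eqref{eq:qFp}.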
We let $F_p$ denote the cumulative distribution function of $\eta_1$, and we shall need the asymptotics of $\wb F_p(x) = 1-F_p(x)$. In summary, we have the following:
\[
\wb F_p(n) \sim
\begin{cases}
\displaystyle n^{-\beta_p}\frac{(\mathsf C_F\Gamma(\beta)\Gamma(1-\beta))^p}{\Gamma(\beta_p)\Gamma(1-\beta_p)}, & \mbox{ if } \beta_p>0; \\\\
\displaystyle \frac{(\mathsf C_F\Gamma(\beta)\Gamma(1-\beta))^p}{\log n}, & \mbox{ if } \beta_p = 0.
\end{cases}
\]
When $\beta_p<0$, the renewal process $\vv\eta$ is terminating, with the probability $\mathfrak q_{F,p} = \mathbb P(\eta_1 = \infty)$ as given in \eqref{eq:qFp}. For more details, see \citep{giacomin07random,bingham87regular} and \citep{bai21phase}.
\subsection{An equivalent series representation}
Here, we explain a different representation of the finite-dimensional distributions of the process that shall be needed in our proofs. We are interested in the joint law of $(X_0,\dots,X_m)$, for some $m$ fixed. Then, the terms with $i\in{\mathbb N}$ such that $\vv\tau\topp{i,d_i}\cap\{0,\dots,m\} = \emptyset$ do not contribute to the series representation. Therefore, by a standard thinning argument of Poisson point processes, it follows that
\begin{equation}\label{eq:thinning}
\sif i1\ddelta{\varepsilon_iy_i,\ \vv\tau\topp{i,d_i}\cap\{0,\dots,m\}}\inddd{\vv\tau\topp{i,d_i}\cap\{0,\dots,m\}\ne\emptyset}\eqd\sif i1\ddelta{w_m^{1/\alpha}\varepsilon_i\Gamma_i^{-1/\alpha},\ R_{m,i}},
\end{equation}
where on the right-hand side,
\[
w_m = \summ k0{m} \pi(k) = \summ k0{m} \wb F(k) \sim \frac{\mathsf C_F}{1-\beta}\cdot m^{1-\beta},
\]
the random variables $\{\Gamma_i\}_{i\in{\mathbb N}}$ are consecutive arrival times of a standard Poisson process, $\{\varepsilon_i\}_{i\in{\mathbb N}}$ are i.i.d.~Rademacher random variables, $\{R_{m,i}\}_{i\in{\mathbb N}}$ are i.i.d.~random closed subsets of $\{0,\dots,m\}$ with the law $R_{m,i}\eqd R_m$ described below, and all families are independent.
Suppose $\vv\tau^*$ is a delayed renewal process with the stationary delay measure $\pi$ and renewal distribution $F$, defined on a measurable space with respect to an infinite measure $\mu^*$ (since $\pi$ is an infinite measure). Then, one can introduce a probability measure $\mu_m$ on the same measurable space via
\[
\frac{d\mu_m}{d\mu^*} = \frac{\inddd{\vv\tau^*\cap\{0,\dots,m\} \ne\emptyset}}{\mu^*(\{\vv\tau^*:\vv\tau^*\cap\{0,\dots,m\}\ne\emptyset\})} = \frac{\inddd{\vv\tau^*\cap\{0,\dots,m\} \ne\emptyset}}{w_m}.
\]
Then, the law of $R_m$ is the one induced by $\vv\tau^*\cap\{0,\dots,m\}$ under the probability measure $\mu_m$. Moreover, it is immediately verified that
\begin{enumerate}[(i)]
\item $\mathbb P(k\in R_m) = 1/w_m, k=0,\dots,m$ (shift invariance).
\item $\mathbb P(\min(R_m\cap\{k+1,k+2,\dots\}) \le k+j\mid k\in R_m) = F(j)$ (Markov/renewal property).
\end{enumerate}
By \eqref{eq:thinning}, we work with the following equivalent representation of \eqref{eq:series infty p}:
\begin{equation}\label{eq:p>=1}
\ccbb{X_k}_{k=0,\dots,m}\eqd \ccbb{w_m^{p/\alpha}\sum_{0<i_1<\cdots<i_p}\frac{[\varepsilon_{\boldsymbol i}]}{[\Gamma_{\boldsymbol i}]^{1/\alpha}}\inddd{k\in \bigcap_{r=1}^p R_{m,i_r}}}_{k=0,\dots,m},
\end{equation}
where on the right-hand side the notation is as in \eqref{eq:thinning}.
\begin{Rem}
Note that the representation in \eqref{eq:p>=1} is slightly different from the one used in \citep{bai21phase}: we include the time zero here, which is more convenient when studying the tail processes. Such a change does not affect the normalization or the limit object.
\end{Rem}
\subsection{A two-sided representation}\label{sec:two-sided}
We provide another series representation of $\{X_k\}_{k\in{\mathbb Z}}$, based on two-sided renewal processes. We do not need this in the rest of the paper. Such a representation is of independent interest, and helps illuminate the notion of the stationary delay measure, which is now defined in a two-sided manner.
The results in this section may have been known in the literature, but we were unable to find a reference. Recall \eqref{eq:series one-sided} and \eqref{eq:series infty p}. In place of $\vv\tau\topp{i,d_i}$ we now introduce $\vv\tau\topp{i,d_i,g_i}$: this is a two-sided renewal process, introduced below, whose first renewal time to the right of the origin (included) is $d_i$, and whose first renewal time to the left of the origin (not included) is $-g_i$ (so $g_i\in{\mathbb N}$). We first introduce
\[
\sif i1\ddelta{y_i,d_i,g_i} \eqd {\rm PPP}((0,\infty)\times{\mathbb N}_0\times{\mathbb N},\alpha x^{-\alpha-1}dxd\wt \pi),
\]
with the measure $\wt\pi$ on ${\mathbb N}_0\times{\mathbb N}$ determined by the following mass function
\[
\wt \pi(d,g):= \pi(d)\cdot \frac{f(d+g)}{\wb F(d)} = f(d+g), \ d\in{\mathbb N}_0, g\in{\mathbb N}.
\]
Note that the factor $f(d+g)/\wb F(d)$ is the probability mass function at $d+g, g\in{\mathbb N}$, of the conditional law of a renewal time with respect to $F$, given that the renewal time is strictly larger than $d$. Now we attach independent renewal processes to each pair $(d_i,g_i)$. Let each ${\vv\tau}\topp{i,\rightarrow}$ be a copy of $\vv\tau = \{\tau_0,\tau_1,\dots\}$, a renewal process starting from the origin (so $\tau_0 = 0$) with renewal distribution $F$: ${\vv\tau}\topp{i,\rightarrow} \eqd \{0,\tau_1,\tau_2,\dots\}$. Similarly, let each ${\vv\tau}\topp {i,\leftarrow}$ be a time-reversed renewal process, starting from zero: so with $\vv\tau$ as before, ${\vv\tau}\topp{i,\leftarrow}\eqd \{0,-\tau_1,-\tau_2,\dots\}$. All the renewal processes are assumed independent from everything else. Then, we set
\begin{equation}\label{eq:tau two-sided}
\vv\tau\topp{i,d_i,g_i} := (d_i+{\vv\tau}\topp{i,\rightarrow}) \cup (-g_i+{\vv\tau}\topp{i,\leftarrow}),
\end{equation}
where $d_i+{\vv\tau}\topp{i,\rightarrow},-g_i+{\vv\tau}\topp{i,\leftarrow}$ are understood as subsets of ${\mathbb Z}$.
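As a consistency check on $\wt\pi$ (again an illustration of ours, with the hypothetical mass function $f(n) = n^{-\beta}-(n+1)^{-\beta}$), summing out the coordinate $g$ recovers the one-sided stationary delay measure: $\sum_{g\ge 1}\wt\pi(d,g) = \sum_{g\ge 1}f(d+g) = \wb F(d) = \pi(d)$, which can be verified numerically up to a truncation error:

```python
# Consistency check (ours) for the two-sided delay measure
# pi_tilde(d, g) = f(d + g): summing out g in N recovers pi(d) = Fbar(d),
# since sum_{g>=1} f(d+g) = P(T > d). Hypothetical law: Fbar(n) = (n+1)**(-beta).

beta = 0.5
D, G = 50, 10**6   # range of d, truncation level for g

fvals = [0.0] + [n ** (-beta) - (n + 1) ** (-beta) for n in range(1, D + G + 1)]

# suffix[n] = sum_{m >= n} f(m), truncated at D + G, computed once
suffix = [0.0] * (D + G + 2)
for n in range(D + G, 0, -1):
    suffix[n] = suffix[n + 1] + fvals[n]

marginal = [suffix[d + 1] for d in range(D + 1)]     # sum_{g>=1} pi_tilde(d, g)
target = [(d + 1) ** (-beta) for d in range(D + 1)]  # pi(d) = Fbar(d)
print(max(t - m for m, t in zip(marginal, target)))  # small truncation deficit
```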
Each two-sided renewal $\vv\tau\topp{i,d_i,g_i}$ takes values in the path space
\[
S = \ccbb{\vv t = \{t_i\}_{i\in{\mathbb N}}: t_i\in{\mathbb Z}, \mbox{ all distinct}}
\]
equipped with the cylindrical $\sigma$-field, i.e., the $\sigma$-field generated by $\{\vv t\in S:\ k\in \vv t\}$, $k\in {\mathbb Z}$. The two-sided series representation is then given by
\begin{equation}\label{eq:two-sided}
\ccbb{X_k}_{k\in{\mathbb Z}} \eqd \ccbb{\sum_{{\boldsymbol i}\in\mathcal D_p}[\varepsilon_{\boldsymbol i}][y_{\boldsymbol i}]\inddd{k\in \bigcap_{r=1}^p\vv\tau\topp{i_r,d_{i_r},g_{i_r}}}}_{k\in{\mathbb Z}}
\end{equation}
with $\{\varepsilon_i\}$ as in \eqref{eq:series infty p} which is independent of everything else, and
\[\mathcal D_p=\{(i_1,\ldots,i_p)\in {\mathbb N}^p:\ i_1<\ldots<i_p \}.\]
\begin{Lem}
The two-sided representation \eqref{eq:two-sided} represents a stationary process $\{X_k\}_{k\in{\mathbb Z}}$ which, restricted to $k\in{\mathbb N}_0$, has the representation \eqref{eq:series infty p}.
\end{Lem}
\begin{proof}
It is known that the multiple series in \eqref{eq:two-sided} converges almost surely and unconditionally (e.g., \citep[Theorem 1.3 and Remark 1.5]{samorodnitsky89asymptotic}). By construction, \eqref{eq:two-sided} restricted to $k\in{\mathbb N}_0$ is the same as \eqref{eq:series infty p}. It remains to prove stationarity of the process $\{X_k\}_{k\in{\mathbb Z}}$. This shall follow from a shift-invariance property of the intensity measure of the Poisson point process
\[
\sif i1\ddelta{y_i,\vv\tau\topp{i,d_i,g_i}}.
\]
(We omit the discussions regarding $\{\varepsilon_i\}_{i\in{\mathbb N}}$ for the sake of simplicity.) Introduce the following shift operation $\mathsf B$ on $S$: $\mathsf B({\boldsymbol t}) = {\boldsymbol t} +1$.
For ${\boldsymbol t} = \{t_i\}_{i\in{\mathbb N}}\in S$, set $\mathsf B({\boldsymbol t}) := {\boldsymbol t}+1 = \{t_i+1\}_{i\in{\mathbb N}}$ (so that $d+{\vv\tau}\topp{i,\rightarrow} = \mathsf B^d({\vv\tau}\topp{i,\rightarrow})$). Let $P_{\vv\tau}$ denote the probability measure on $S$ induced by the distribution of the renewal process $\vv\tau$ as above. Then, the Poisson point process above is on the space $(0,\infty)\times S$, with its intensity measure denoted by $\alpha x^{-\alpha-1}dx dQ$, where $Q$ is defined as the push-forward, with respect to the construction \eqref{eq:tau two-sided}, of the product measure $\wt \pi\times P_{\vv\tau}\times P_{\vv\tau}$ on $({\mathbb N}_0\times{\mathbb N}) \times S\times S$. Now, the stationarity of the process $\{X_k\}_{k\in{\mathbb Z}}$ follows from the following shift-invariance property of the measure $Q$ on $S$: \begin{equation}\label{eq:Q invariance} Q\circ \mathsf B^{-1} = Q. \end{equation} To prove the above, we first define, for every ${\boldsymbol t}\in S$ and $k\in {\mathbb Z}$, \[ \mathsf d_k({\boldsymbol t}) := \min\{t_i:t_i\ge k\} - k, \quad\mbox{ and }\quad \mathsf g_k({\boldsymbol t}) := k-\max\{t_i:t_i<k\}. \] In words, $\mathsf d_k({\boldsymbol t})$ represents the distance between $k$ and the first element of ${\boldsymbol t}$ to the right of $k$ (included), and $\mathsf g_k({\boldsymbol t})$ the distance between $k$ and the first element to the left of $k$ (not included). Our construction shows that \begin{equation}\label{eq:Q} Q(\mathsf d_0(\vv t) = d,\mathsf g_0(\vv t) = g) = \wt \pi(d,g) = f(d+g) \mbox{ for all } d\in{\mathbb N}_0,\ g\in{\mathbb N}.
\end{equation} To show \eqref{eq:Q invariance}, based on a decomposition with respect to $\mathsf d_0(\vv t)$ and $\mathsf g_0(\vv t)$ and Dynkin's theorem, it suffices to show \begin{equation} \label{eq:Q inv intermediate} Q \left( \vv s \subset \mathsf B^{-1}\vv t ,\ \mathsf d_0(\mathsf B^{-1}\vv t) =d,\ \mathsf g_0(\mathsf B^{-1}\vv t) = g \right)= Q\left( \vv s \subset \vv t ,\ \mathsf d_0(\vv t)=d, \ \mathsf g_0(\vv t) = g \right) \end{equation} for all $d\in{\mathbb N}_0$, $g\in{\mathbb N}$ and $\vv s \in S$ of finite size such that $\mathsf d_0(\vv s)=d$ and $\mathsf g_0(\vv s)=g$. Write $\vv s=\{ s_{-m},\ldots, s_{-1},s_0, \ldots, s_{n} \}$, such that $s_{i}<s_{i+1}$ and $s_{-1}=-g$ and $s_0=d$, $m\in {\mathbb N}$, $n\in {\mathbb N}_0$. Then the left-hand side of \eqref{eq:Q inv intermediate} is equal to \begin{equation}\label{eq:Q 2nd} Q ( s_{i}+1 \in \vv t ,\ i=-m,\ldots,n ,\ \mathsf d_1(\vv t)=d,\ \mathsf g_1( \vv t) = g ). \end{equation} Now, if $d\ge 0$ and $g\ge 2$, then $\{\vv t\in S:\ \mathsf d_1(\vv t)=d,\ \mathsf g_1(\vv t)=g\}=\{\vv t\in S:\ \mathsf d_0(\vv t)=d+1, \ \mathsf g_0(\vv t)=g-1\}=\{\vv t\in S:\ \mathsf d_0(\vv t)=s_{0}+1, \ \mathsf g_0(\vv t)=-s_{-1}-1\}$. Hence the expression in \eqref{eq:Q 2nd} in this case becomes \begin{equation}\label{eq:Q 3rd} f(d+g) \prod_{-m\le i\le n-1, i\neq -1} u(s_{i+1}-s_i), \end{equation} where we have used the relation \eqref{eq:Q} as well as the renewal property on the two sides with the renewal mass function $u$ as in \eqref{eq:renewal mass}. Next suppose $d\ge 0$ and $g = 1$. Set $\mathsf d'_0 (\vv t):=\min ( \vv t \cap \{\mathsf d_0(\vv t)+1,\mathsf d_0(\vv t)+2,\ldots\})$, namely, the second element of $\vv t$ to the right of the origin (included). We have $ \{\vv t\in S:\ \mathsf d_1(\vv t) = d,\ \mathsf g_1(\vv t) = 1\} =\{\vv t\in S:\ \mathsf d_0(\vv t) = 0,\ \mathsf d'_0 (\vv t)= d+1\}=\{\vv t\in S:\ \mathsf d_0(\vv t) = s_{-1}+1,\ \mathsf d'_0 (\vv t)= s_0+1\}$.
Then by construction and the renewal property, the expression in \eqref{eq:Q 2nd} in this case becomes \[ Q(\mathsf d_0(\vv t) = 0)\mathbb P(\tau_1 = d+1) \prod_{-m\le i\le n-1, i\neq -1} u(s_{i+1}-s_i). \] Note that $Q(\mathsf d_0(\vv t) =0) = \pi(0) = \wb F(0) = 1$ and $\mathbb P(\tau_1 = d+1)=f(d+1)$, and hence the formula above coincides with \eqref{eq:Q 3rd} when $d\ge 0$ and $g=1$. Therefore, we have shown that the left-hand side of \eqref{eq:Q inv intermediate} is equal to the expression in \eqref{eq:Q 3rd} for all $d\in {\mathbb N}_0$ and $g\in {\mathbb N}$. The proof is concluded by noticing that the right-hand side of \eqref{eq:Q inv intermediate} readily equals \eqref{eq:Q 3rd} for all $d\in {\mathbb N}_0$ and $g\in {\mathbb N}$. \end{proof} \section{Convergence for tail processes}\label{sec:tail} We prove Theorem \ref{thm:tail}. Below $C$ will denote a generic positive constant whose value may change from one expression to another. In order to establish $(m+1)$-dimensional multivariate regular variation, we shall work with the representation \eqref{eq:p>=1} with $m$ fixed throughout. We introduce some notation. Set \begin{align} \wt\ell_k(1) & :=\min\ccbb{i\in{\mathbb N}:k\in R_{m,i}},\nonumber\\ \wt\ell_k(s) &:=\min\ccbb{i>\wt \ell_k(s-1):k\in R_{m,i}},\ s\ge 2, \ k=0,\dots,m,\label{eq:ell_k} \end{align} namely, $\wt\ell_k(1),\wt\ell_k(2),\ldots$ are the successive $i$-indices such that $k\in R_{m,i}$. For $\vv i=(i_1,\ldots,i_p) \in \mathcal D_p$, we write $\wt\ell_k(\vv i)=(\wt\ell_k(i_1),\ldots, \wt\ell_k(i_p))$. In this way, we write \begin{equation}\label{eq:top} X_k = w_m^{p/\alpha}\sum_{{\boldsymbol i}\in\mathcal D_p}\frac{\sbb{\varepsilon_{\wt\ell_k({\boldsymbol i})}}}{\sbb{\Gamma_{\wt\ell_k({\boldsymbol i})}}^{1/\alpha}} \quad\mbox{ and }\quad T_k:=w_m^{p/\alpha}{\frac{\sbb{\varepsilon_{\wt\ell_k((1,\dots,p))}}}{\sbb{\Gamma_{\wt\ell_k((1,\dots,p))}}^{1/\alpha}}}, \quad k=0,\dots,m.
\end{equation} Here and below the notational convention is to write the product $\varepsilon_{\wt\ell_k(i_1)}\cdots\varepsilon_{\wt\ell_k(i_p)}$ as $[\varepsilon_{\wt\ell_k({\boldsymbol i})}] $, and similarly for $[\Gamma_{\wt\ell_k({\boldsymbol i})}]$. Note that the indicator functions are dropped in the representation \eqref{eq:top}. Moreover, in order to study the marginal distribution, taking into account the thinning probability $\mathbb P(k\in R_{m,i})=w_m^{-1}$, we shall work with the following representation for each $k\in {\mathbb N}_0$ (but not jointly in $k$): \begin{equation}\label{eq:thinning Xk} \pp{X_k,T_k} \eqd \pp{\sum_{{\boldsymbol i}\in\mathcal D_p}\frac{[\varepsilon_{\boldsymbol i}]}{[\Gamma_{\boldsymbol i}]^{1/\alpha}},\frac{\sbb{\varepsilon_{(1,\dots,p)}}}{\sbb{\Gamma_{(1,\dots,p)}}^{1/\alpha}}}. \end{equation} Recall that from \citep{samorodnitsky89asymptotic}, we have \begin{equation}\label{eq:product_tail} \mathsf q_p(x):=\mathbb P\pp{\bb{\Gamma_{1:p}}^{-1}>x} \sim \frac{x^{-1}\log^{p-1}x}{p!(p-1)!} \end{equation} as $x\to\infty$. Here and below we write $\Gamma_{1:p} = \Gamma_{(1,\dots,p)} = \Gamma_1\times\cdots\times \Gamma_p$. Moreover, we have the following. \begin{Lem}\label{lem:1} We have $\mathbb P\spp{\max_{(1,\dots,p)\ne{\boldsymbol i}\in\mathcal D_p}\bb{\Gamma_{{\boldsymbol i}}}^{-1/\alpha}>x} = \mathbb P\spp{(\Gamma_1\cdots \Gamma_{p-1}\Gamma_{p+1})^{-1/\alpha}>x}$ and \begin{equation}\label{eq:single_product} \limsup_{x\to\infty}\frac{\mathbb P\pp{(\Gamma_1\cdots \Gamma_{p-1}\Gamma_{p+1})^{-1/\alpha}>x}}{\mathsf q_{p-1}(x^\alpha)} \le {\mathbb E} \Gamma_2^{-1}<\infty.
\end{equation} Moreover, as $x\rightarrow\infty$, \begin{equation}\label{eq:top_dominates} \mathbb P(|X_k|>x)\sim 2\mathbb P( X_k >x) \sim \mathbb P(|T_k|>x) = \mathsf q_p(x^\alpha)\sim \frac{\alpha^{p-1} x^{-\alpha} \log^{p-1}(x)}{p!(p-1)!}, \end{equation} and \begin{equation}\label{eq:remainder_rate} \mathbb P(|X_k-T_k|>x)\le C\mathsf q_{p-1}(x^\alpha)=O( x^{-\alpha} \log^{p-2}(x) ). \end{equation} \end{Lem} In words, $T_k$ is the term of \eqref{eq:top} corresponding to the $p$ smallest $i$-indices such that $k\in R_{m,i}$, and the tail of $X_k$ is determined by this single term alone. Then, by Lemma \ref{lem:1}, the joint representation \eqref{eq:top} allows us to replace the $X_k$'s by the $T_k$'s and deal with the joint law of $T_0,\dots,T_m$. The above can be read from \citep{samorodnitsky89asymptotic}, which deals with a more general setup. For the sake of convenience we include a proof here. \begin{proof}[Proof of Lemma \ref{lem:1}] The first claim follows from the monotonicity of $i\mapsto\Gamma_i$, and so it remains to prove \eqref{eq:single_product}. Let $\wt\Gamma_2\eqd\Gamma_2$ be a random variable independent from everything else. We then have \[ (\Gamma_1\cdots \Gamma_{p-1}\Gamma_{p+1})^{-1/\alpha} \eqd \bb{\Gamma_{1:p-1}}^{-1/\alpha}\cdot \pp{\Gamma_{p-1}+\wt\Gamma_2}^{-1/\alpha} \le \bb{\Gamma_{1:p-1}}^{-1/\alpha}\wt\Gamma_2^{-1/\alpha}. \] The right-hand side is now a product of two independent random variables, and \eqref{eq:single_product} follows from Breiman's lemma (e.g.~\citep{resnick07heavy}). Notice that in view of \eqref{eq:product_tail}, the relation \eqref{eq:remainder_rate} implies \eqref{eq:top_dominates}, and hence it is left to prove \eqref{eq:remainder_rate}. Let $M$ be a fixed integer such that $M\ge 2p/\alpha+p$. Note that the conclusion concerns only the marginal distribution, and hence we can work with the simplified representation of $X_k$ in \eqref{eq:thinning Xk}.
By \eqref{eq:single_product}, any finite sum of terms of the form $[\varepsilon_{\boldsymbol i}] [\Gamma_{\boldsymbol i}]^{-1/\alpha}$ with ${\boldsymbol i}\neq (1,\dots,p)$ will have lighter tails than $[\Gamma_{(1,\dots,p)}]^{-1/\alpha}$. Since the sets $\{{\boldsymbol i}\in\mathcal D_q: i_q\le M\}$, $q=0,\ldots,p-1$ are finite (when $q=0$ the set is understood as the empty set), it suffices to show the following: for any fixed $i_1<\cdots<i_q\le M$ and $q=0,1,\ldots, p-1$, \begin{equation}\label{eq:remainder series} \mathbb P\left([\Gamma_{i_{1:q}}]^{-1/\alpha} \left|\sum_{M<i_{q+1}<\cdots<i_p } \frac{[\varepsilon_{i_{q+1:p}}]} { [\Gamma_{i_{q+1:p}}]^{1/\alpha}}\right|>x \right) =o\left( x^{-\alpha} \log ^{p-1}x \right),\quad x\rightarrow \infty, \end{equation} where, here and below, we write $i_{1:q} = (i_1,\dots,i_q)$, $[\Gamma_{i_{1:q}}] = \Gamma_{i_1}\times\cdots\times \Gamma_{i_q}$, and similarly for other terms. If $q=0$, in view of the inequality ${\mathbb E}\left( [\Gamma_{i_{1:p}}]^{-2/\alpha}\right)\le c (i_1\ldots i_p)^{-2/\alpha} $ which holds by the choice of $M$ (cf.\ \cite[Eq.(3.2)]{samorodnitsky89asymptotic}), as well as the orthogonality induced by $[\varepsilon_{i_{1:p}}]$, one can verify that the second moment of the multiple series in \eqref{eq:remainder series} is finite, and hence the tail decay is $O(x^{-2}) =o\left( x^{-\alpha} \log ^{p-1}x\right)$ as $x\rightarrow\infty$. Suppose now $q>0$. Then the probability above is bounded by \begin{equation}\label{eq:P} \mathbb P\left([\Gamma_{(1,\dots,q)}]^{-1/\alpha} \left|\sum_{M<i_{q+1}<\cdots<i_p} \frac{[\varepsilon_{i_{q+1:p}}]}{ [\Gamma_{i_{q+1:p}}]^{1/\alpha}}\right|>x \right) .
\end{equation} Using $[\Gamma_{i_{q+1:p}}]^{-1/\alpha}\le \prodd r{q+1}p(\Gamma_{i_r}-\Gamma_{q})^{-1/\alpha}$, conditioning on $\{\Gamma_i\}_{i\in{\mathbb N}}$ and applying the contraction principle for multilinear forms in Rademacher random variables \cite[Theorem 3.6]{delapena94contraction}, the probability in \eqref{eq:P} is bounded, up to a multiplicative constant, by \begin{align*} \mathbb P\left([\Gamma_{(1,\dots,q)}]^{-1/\alpha} \left|\sum_{M<i_{q+1}<\cdots<i_p} [\varepsilon_{i_{q+1:p}}] \prodd r{q+1}p(\Gamma_{i_r}-\Gamma_{q})^{-1/\alpha}\right|>x \right). \end{align*} Note that $[\Gamma_{(1,\dots,q)}]^{-1/\alpha}$ is independent of the absolute-value factor above, which can be shown to have a finite second moment as before, using the fact that $(\Gamma_{i}-\Gamma_{q})_{i>M}\eqd (\Gamma_{i})_{i>M-q}$ with the choice of $M$. Therefore, by Breiman's lemma again, the probability above is of the same order as $\mathbb P([\Gamma_{(1,\dots,q)}]^{-1/\alpha}>x)\sim \mathsf q_{q}(x^\alpha) =o(x^{-\alpha} \log ^{p-1}x)$ as $x\to\infty$ in view of \eqref{eq:product_tail}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:tail}] Write \begin{equation} T_m^*:=w_m^{p/\alpha}\frac{\sbb{\varepsilon_{(1,\dots,p)}}}{\sbb{\Gamma_{(1,\dots,p)}}^{1/\alpha}}\quad\mbox{ and }\quad \label{eq:H} H_k := \inddd{\wt\ell_k((1,\dots,p)) = (1,\dots,p)},\ k=0,\dots,m. \end{equation} Note that $T_k H_k=T_m^* H_k$.
The proof consists of establishing the following asymptotic equivalence in law: \begin{align} \mathcal L\pp{\frac{X_0}{|X_0|},\dots,\frac{X_m}{|X_0|}\;\middle\vert\; |X_0|>x}& \sim \mathcal L\pp{\frac{X_0}{|X_0|},\dots,\frac{X_m}{|X_0|}\;\middle\vert\; |T_m^*|>x, H_0 = 1}\label{eq:1}\\ & \sim \mathcal L\pp{\frac{X_0}{|T_m^*|},\dots,\frac{X_m}{|T_m^*|}\;\middle\vert\; |T_m^*|>x, H_0 = 1}\label{eq:2}\\ & \sim \mathcal L\pp{\frac{T_m^*H_0}{|T_m^*|},\dots,\frac{T_m^*H_m}{|T_m^*|}\;\middle\vert\; |T_m^*|>x, H_0 = 1}\label{eq:3}\\ & \sim \mathcal L\pp{\varepsilon H_0,\dots,\varepsilon H_m\mid H_0 =1}\label{eq:4}. \end{align} Here and below, $\mathcal L(\cdots\mid\cdots)$ denotes the corresponding conditional law, and $\mathcal L(\cdots)\sim\mathcal L(\cdots)$ means that as $x\to\infty$, the two sides have the same limit law, given that the limit law of one of them exists. We will leave the verification of the first step \eqref{eq:1} to the end. We first verify steps \eqref{eq:2} and \eqref{eq:3}. Write, for each $k$, \[ X_k = T_k + X_k-T_k = T^*_mH_k + T_k(1-H_k)+ X_k-T_k. \] Observe that $ |T_k|(1-H_k)\le w_m^{p/\alpha}\max_{(1,\dots,p)\ne{\boldsymbol i}\in\mathcal D_p}\bb{\Gamma_{\boldsymbol i}}^{-1/\alpha}. $ In addition, in view of Item (i) before \eqref{eq:p>=1} and independence, we have $\mathbb P(H_k=1)=w_m^{-p}$. It then follows from Lemma \ref{lem:1} and the independence between $T_m^*$ and $H_k$ that \begin{equation}\label{eq:Xk_tail} \mathbb P(|X_k|>x) \sim \mathbb P(|T_m^*H_k|>x)\sim \frac{w_{m}^{p}\alpha^{p-1} x^{-\alpha}\log^{p-1}x}{p!(p-1)!}\cdot\frac1{w_{m}^{p}} = \frac{\alpha^{p-1} x^{-\alpha} \log^{p-1}(x)}{p!(p-1)!}, \end{equation} and \begin{equation}\label{eq:Xk remainder} \mathbb P(|X_k-T_m^*H_k|>x)=O(x^{-\alpha}\log^{p-2}x) \end{equation} as $x\to\infty$.
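For the reader's convenience, the middle asymptotic in \eqref{eq:Xk_tail} can be traced through \eqref{eq:product_tail} (with $m$, and hence $w_m$, fixed while $x\to\infty$):

```latex
\mathbb P(|T_m^*|>x)
  = \mathsf q_p\pp{x^\alpha w_m^{-p}}
  \sim \frac{w_m^{p}\,x^{-\alpha}\log^{p-1}\pp{x^\alpha w_m^{-p}}}{p!(p-1)!}
  \sim \frac{w_m^{p}\,\alpha^{p-1}\,x^{-\alpha}\log^{p-1}x}{p!(p-1)!},
```

and multiplying by $\mathbb P(H_k=1)=w_m^{-p}$ (using the independence of $T_m^*$ and $H_k$) cancels the factor $w_m^{p}$.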
Next, we need the following facts: for random variables $Y$ and $Z$, not necessarily independent, such that $\mathbb P(|Z|>\epsilon x) = o(\mathbb P(|Y|> x))$ as $x\to\infty$ for any $\epsilon>0$, we have as $x\rightarrow\infty$ the convergences in distribution: \begin{equation}\label{eq:L1} \mathcal L\pp{\frac {|Y|}{|Y+Z|}\;\middle\vert\; |Y|>x} \to \mathcal L(1) \quad\mbox{ and }\quad \mathcal L\pp{\frac {Z}{|Y|} \;\middle\vert\; |Y|>x} \to \mathcal L(0), \end{equation} where $\mathcal L(1)$ and $\mathcal L(0)$ denote the laws of the constants 1 and 0, respectively. Indeed, the first relation follows from the second one, whereas the second holds since \[\mathbb P(|Z/Y|>\epsilon \ | \ |Y|>x) \le \mathbb P(|Z|>\epsilon x )/\mathbb P(|Y|>x)\rightarrow 0\] for any $\epsilon>0$. Now \eqref{eq:2} follows by writing for each $k$, \[ \frac{X_k}{|X_0|}=\frac{X_k}{|T_m^*| } \cdot \frac{|T_m^*| }{|T_m^* +X_0-T_m^*| } \] and then applying the first relation in \eqref{eq:L1} with $Y = T_m^*$, $Z = X_0-T_m^*$, as well as the relations \eqref{eq:Xk_tail} and \eqref{eq:Xk remainder}. The step \eqref{eq:3} follows by writing for each $k$, \[ \frac{X_k}{|T_m^*|} = \frac{T_m^*H_k}{|T_m^*|} + \frac{X_k-T_m^*H_k}{|T_m^*|}, \] and applying the second relation in \eqref{eq:L1}. The last step \eqref{eq:4} follows from $\sbb{\varepsilon_{(1,\dots,p)}}\eqd\varepsilon$ and the independence between $(H_k)_{k=0,\ldots,m}$ and $T_m^*$. Comparing the definitions \eqref{eq:H} (note that $\wt\ell_k((1,\dots,p)) = (1,\dots,p)$ means $k\in R_{m,i}, i=1,\ldots,p$) and \eqref{eq:Theta}, one readily checks that \[ \mathcal L(H_0,\dots,H_m\mid H_0 = 1) = \mathcal L(\Theta_0^*,\dots,\Theta_m^*). \] Now we return to verify the first step \eqref{eq:1}. Introduce $B_0(x) := \ccbb{|X_0|>x}, B^*_m(x):=\ccbb{|T_m^*|>x}$, and \[ E = \ccbb{\left(\frac{X_0}{|X_0|},\dots,\frac{X_m}{|X_0|}\right) \in A}, \] where $A$ is a Borel set in ${\mathbb R}^{m+1}$ whose boundary is not charged by the limit law \eqref{eq:4}.
It suffices to prove \begin{equation}\label{eq:1'} \lim_{x\to\infty}\mathbb P\pp{E\;\middle\vert\; |X_0|>x} = \lim_{x\to\infty}\mathbb P\pp{E\;\middle\vert\; |T_m^*|>x, H_0 = 1}. \end{equation} From the established steps \eqref{eq:2} to \eqref{eq:4}, it is clear that the limit on the right-hand side of \eqref{eq:1'} exists. For the left-hand side, write \begin{align*} \mathbb P(E\mid |X_0|>x) = \frac{\mathbb P(E\cap \{|X_0|>x,H_0=1\})+\mathbb P(E\cap \{|X_0|>x, H_0=0\})}{\mathbb P(|X_0|>x)}. \end{align*} In view of \eqref{eq:top_dominates} and \eqref{eq:remainder_rate}, we have \[\mathbb P(E\cap\{|X_0|>x,H_0 = 0\}) \le \mathbb P(|X_0|>x, H_0 = 0) =o(\mathbb P(|X_0|>x)) \] as $x\to\infty$. Therefore, combined with \eqref{eq:Xk_tail}, we see that $\mathbb P(E\mid |X_0|>x)$ has the same limit as \begin{equation}\label{eq:fraction} \frac{\mathbb P(E\cap\{|X_0|>x, H_0=1\})}{\mathbb P(|T_m^*|>x,H_0=1)} \end{equation} as $x\to\infty$. For the numerator, we have the following upper and lower bounds: \begin{align*} \mathbb P(E\cap\{|X_0|>x,H_0=1\})& \ge \mathbb P(E \cap\{|T_m^*|>(1+\epsilon)x,H_0=1\}) - \mathbb P(|X_0-T_0|>\epsilon x),\\ \mathbb P(E\cap\{|X_0|>x,H_0=1\}) & \le \mathbb P(E\cap\{|T_m^*|>(1-\epsilon)x,H_0=1\}) + \mathbb P(|X_0-T_0|>\epsilon x). \end{align*} The limit of \eqref{eq:fraction} with the numerator replaced by the upper and lower bounds above, as $x\rightarrow\infty$, can be determined by applying \eqref{eq:remainder_rate}, \eqref{eq:Xk_tail} and the relations \eqref{eq:2} to \eqref{eq:4} (with $x$ replaced by $(1\pm\epsilon)x$). Letting $\epsilon\downarrow0$, we conclude that \eqref{eq:fraction} has the same limit as \[ \frac{\mathbb P(E\cap \{|T_m^*|>x, H_0 = 1\})}{\mathbb P(|T_m^*|>x)\mathbb P(H_0 = 1)} = \mathbb P(E\mid |T_m^*|>x, H_0 = 1). \] This completes the proof of \eqref{eq:1'}.
\end{proof} \section{Extremal index in the sub-critical regime, $p=2$}\label{sec:p=2} In this section, we restrict to the sub-critical regime $p =2, \ \beta\in(0,1/2)$, and compute the extremal index directly. Recall the series representation of $\{X_k\}_{k=0,\dots,n}$ in \eqref{eq:p>=1}, which for convenience we repeat here: \begin{equation}\label{eq:representation} \{X_k\}_{k=0,\ldots,n}\eqd\ccbb{ X_{n,k}}_{k=0,\dots,n} \equiv\ccbb{ w_n^{2/\alpha}\sum_{1\le i_1<i_2} \frac{\varepsilon_{i_1}\varepsilon_{i_2}}{\Gamma_{i_1}^{1/\alpha}\Gamma_{i_2}^{1/\alpha}}\inddd{k\in R_{n,i_1}\cap R_{n,i_2}}}_{k=0,\dots,n}, \end{equation} and we follow the same notation as in earlier sections, except that we now have $n$ instead of $m$ and will let $n\to\infty$. We also emphasize the dependence on $n$ of the series representation by writing $X_{n,k}$ instead of $X_k$. Introduce \[ b_n = \pp{\frac14n\log n}^{1/\alpha}. \] It can be verified based on \eqref{eq:top_dominates} that $\mathbb P(X_1>b_n)\sim 1/n$ as $n\rightarrow\infty$. Then by a classical extreme limit theorem (e.g., \cite[Proposition 1.11]{resnick87extreme}), for i.i.d.~copies $\{X\topp 0_k\}_{k\in{\mathbb N}}$ of $X_1$, we have \begin{equation}\label{eq:iid} \frac1{b_n}\max_{k=1,\dots,n}X_k\topp 0 \Rightarrow Z_\alpha, \end{equation} where $Z_\alpha$ follows a standard $\alpha$-Fr\'echet distribution: $\mathbb P (Z_\alpha\le x) = e^{-x^{-\alpha}}, x\ge 0$. The goal is to establish an extreme limit theorem for the model \eqref{eq:representation} in the sub-critical case. \begin{Thm}\label{thm:EVT} Assume $\beta\in(0,1/2)$. As $n\rightarrow\infty$, we have \[ \frac{1}{b_n} \max_{k=1,\ldots,n} X_k \Rightarrow \theta^{1/\alpha} Z_\alpha \quad\mbox{ with }\quad \theta = (1-2\beta) \mathfrak q_{F,2}, \] where $Z_\alpha$ follows a standard $\alpha$-Fr\'echet distribution, and $\mathfrak q_{F,2}$ is as in \eqref{eq:qFp}.
\end{Thm} Then, in comparison with \eqref{eq:iid}, it follows that $\theta$ is the extremal index of the original process. The proof below is different from the one presented in \citep{bai21phase}, and a non-trivial adaptation is needed to extend the proof here to $p\ge 3$. We shall proceed with the following approximation procedure. Fix throughout a sequence of increasing integers $\{m_n\}_{n\in{\mathbb N}}$ such that \begin{equation}\label{eq:m_n rate} \frac{w_n^2}{n \log^{2/(2-\alpha)}n} \ll m_n \ll \frac{w_n^2}{n} \end{equation} as $n\rightarrow\infty$, where for positive sequences we write $a_n\ll a_n'$ if $a_n/a_n'\rightarrow 0$ as $n\rightarrow\infty$. Introduce \begin{align*} \wt{X}_{n,k}:= w_n^{2/\alpha}\sum_{1\le i_1<i_2\le m_n} \frac{\varepsilon_{i_1}\varepsilon_{i_2}}{\Gamma_{i_1}^{1/\alpha}\Gamma_{i_2}^{1/\alpha}}\inddd{k\in R_{n,i_1}\cap R_{n,i_2}}. \end{align*} Then Theorem \ref{thm:EVT} follows from the following two results. \begin{Prop}\label{prop:EVT main} As $n\rightarrow\infty$, we have \[\frac{1}{b_n} \max_{k=1,\ldots,n} \wt{X}_{n,k} \Rightarrow \theta^{1/\alpha} Z_\alpha. \] \end{Prop} \begin{Lem}\label{Lem:reduction} As $n\rightarrow\infty$, we have \[ \frac{1}{b_n} \max_{k=1,\ldots,n} |X_{n,k}-\wt{X}_{n,k}|\ConvP 0, \] where $\ConvP$ stands for convergence in probability. \end{Lem} \subsection{Proof of Proposition \ref{prop:EVT main}} Let $\{U_{i}\}_{i\in{\mathbb N}}$ be i.i.d.~uniform random variables independent of everything else. Introduce \[ V_{i_1,i_2} := \varepsilon_{i_1}\varepsilon_{i_2} \pp{ U_{i_1} U_{i_2}}^{-1/\alpha},\quad R_{n,i_1,i_2} := R_{n,i_1}\cap R_{n,i_2}, \quad i_1,i_2\in {\mathbb N}, \ i_1<i_2.
\] Then by the well-known relation $(\Gamma_1,\ldots,\Gamma_{m_n}) \eqd (U_{1,m_n},\ldots, U_{m_n,m_n} )\Gamma_{m_n+1}$, where $U_{1,m_n}<\ldots< U_{m_n,m_n}$ are the order statistics of $\{U_1,\ldots,U_{m_n}\}$, we have \begin{align} \ccbb{\wt{X}_{n,k}}_{k=1,\ldots,n}&\eqd \ccbb{ \pp{\frac{w_n}{\Gamma_{m_n+1}}}^{2/\alpha} \sum_{1\le i_1<i_2\le m_n} V_{i_1,i_2} \inddd{k\in R_{n,i_1,i_2}}}_{k=1,\ldots,n}.\notag \end{align} Noting that $\Gamma_{m_n+1}\sim m_n$ as $n\rightarrow\infty$ almost surely, we can instead work with \begin{align} \ccbb{X_{n,k}^*}_{k=1,\ldots,n}= \ccbb{ \pp{\frac{w_n}{m_n}}^{2/\alpha} \sum_{1\le i_1<i_2\le m_n} V_{i_1,i_2} \inddd{k\in R_{n,i_1,i_2}}}_{k=1,\ldots,n}. \label{eq:X star} \end{align} \begin{Lem} We have as $n\rightarrow\infty$, \[ \rho_n:=\mathbb P\pp{R_{n,1,2}\cap\{1,\dots,n\} \ne\emptyset} \sim \mathfrak q_{F,2}\frac{n}{w_n^2}. \] \end{Lem} \begin{proof} Introduce \begin{align*} q_{n,1} &:= \mathbb P\pp{R_{n,1,2}\cap\{1,\dots, n\}\ne\emptyset,\, \max R_{n,1,2}\le n},\\ q_{n,2} & := \mathbb P\pp{R_{n,1,2}\cap\{1,\dots, n\}\ne\emptyset,\, \max R_{n,1,2}> n}. \end{align*} Then, $q_{n,1}\le \rho_n\le q_{n,1}+q_{n,2}$. For $q_{n,1}$ we apply the last-renewal decomposition and the Markov property: \begin{align*} q_{n,1} & = \summ i1{n} \mathbb P\pp{\max R_{n,1,2} = i\;\middle\vert\; i\in R_{n,1,2}}\mathbb P\pp{i\in R_{n,1,2}} = \summ i1{n} \mathfrak q_{F,2} \frac1{w_n^2} = \frac{\mathfrak q_{F,2}n}{w_n^2}, \end{align*} where we have used the fact that conditionally on $i\in R_{n,1,2}$, the count $\sum_{k=i}^\infty 1_{\{k\in R_{n,1,2}\}}$ follows a geometric distribution with mean $\mathfrak q_{F,2}^{-1}$.
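To spell out the geometric step (a sketch, using the interpretation of $\mathfrak q_{F,2}$ as the probability that the joint renewal set has no further point after a given common point): if, conditionally on $i\in R_{n,1,2}$, the count $G:=\sum_{k=i}^\infty 1_{\{k\in R_{n,1,2}\}}$ is geometric on $\{1,2,\dots\}$ with success probability $\mathfrak q_{F,2}$, then

```latex
\mathbb P\pp{\max R_{n,1,2} = i \;\middle|\; i\in R_{n,1,2}}
  = \mathbb P(G = 1)
  = \mathfrak q_{F,2},
\qquad
{\mathbb E} G = \mathfrak q_{F,2}^{-1},
```

which gives each summand in the last-renewal decomposition the value $\mathfrak q_{F,2}\,w_n^{-2}$.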
For $q_{n,2}$, write first, by a similar decomposition based on the last renewal before time $n$, \begin{align*} q_{n,2} & = \summ i1{n}\mathbb P\pp{\max(R_{n,1,2}\cap\{1,\dots,n\}) = i, \max R_{n,1,2}>n} \\ & \le \summ i1{n} \mathbb P\pp{i\in R_{n,1,2}}\mathbb P\pp{\max R_{n,1,2}>n\;\middle\vert\; i\in R_{n,1,2}} \le \summ i1{n} \frac1{w_n^2}\sum_{j=n-i+1}^\infty u(j)^2. \end{align*} The last step above follows from the renewal property and the union bound. With $v(i) := \sum_{j=i}^\infty u(j)^2\downarrow 0$ as $i\to\infty$, due to the fact that $u(j)\le C j^{\beta-1} $ with $\beta\in (0,1/2)$, the last displayed expression becomes $w_n^{-2}\summ i1{n} v(i) = nw_n^{-2}\summ i1{n}(v(i)/n) = nw_n^{-2}o(1) = o(q_{n,1})$, completing the proof. \end{proof} Introduce the following two counting numbers: \begin{equation}\label{eq:N_n} N_n :=\sum_{1\le i_1<i_2\le m_n}\inddd{R_{n,i_1,i_2}\cap \{1,\ldots,n\}\neq \emptyset}, \end{equation} and \begin{equation}\label{eq:M_n} M_n:=\sum_{\substack{1\le i_1<i_2\le m_n, 1\le j_1<j_2\le m_n,\\ \{i_1,i_2\}\neq \{j_1,j_2\}, \{i_1,i_2\}\cap\{j_1,j_2\}\neq \emptyset }} \inddd{ R_{n,i_1,i_2}\cap \{1,\ldots,n\}\neq \emptyset,\, R_{n,j_1,j_2}\cap \{1,\ldots,n\} \neq \emptyset}. \end{equation} \begin{Lem}\label{Lem:N M estimate} We have as $n\rightarrow\infty$ that \begin{equation}\label{eq:lambda} {\mathbb E} N_n \sim \lambda_n:= \frac{ \mathfrak q_{F,2} }{2}\frac{n m_n^2}{w_n^2} \rightarrow\infty , \quad \frac{N_n}{\lambda_n}\ConvP 1 \end{equation} and ${\mathbb E} M_n\le C \frac{n^2 m_n^3}{w_n^4} =o(\lambda_n)$. \end{Lem} \begin{proof} For the first part, we have \[ {\mathbb E} N_n \sim \frac{m_n^2 \rho_n}{2}=\frac{ \mathfrak q_{F,2} }{2}\frac{n m_n^2}{w_n^2}, \] which tends to $\infty$ due to $m_n\gg w_n^2/(n\log^{2/(2-\alpha)}n)$, the first part of assumption \eqref{eq:m_n rate}.
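To make the divergence $\lambda_n\to\infty$ explicit (a sketch, using the lower bound on $m_n$ in \eqref{eq:m_n rate} together with the fact, used again later in the proof of Proposition \ref{prop:EVT main}, that $\log(w_n^2/n)\sim(1-2\beta)\log n$ with $\beta\in(0,1/2)$, so that $w_n^2/n$ grows faster than any power of $\log n$):

```latex
\lambda_n
  = \frac{\mathfrak q_{F,2}}{2}\,\frac{n m_n^2}{w_n^2}
  \gg \frac{\mathfrak q_{F,2}}{2}\,\frac{n}{w_n^2}\pp{\frac{w_n^2}{n\log^{2/(2-\alpha)}n}}^{2}
  = \frac{\mathfrak q_{F,2}}{2}\,\frac{w_n^2}{n\log^{4/(2-\alpha)}n}
  \rightarrow\infty .
```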
Next, note that \begin{align*} \rho'_n&:=\mathbb P\pp{ R_{n,1,2}\cap \{1,\ldots,n\}\neq \emptyset,\, R_{n,1,3}\cap \{1,\ldots,n\} \neq \emptyset }\\ & \le \summ k1n\summ{k'}1n \mathbb P\pp{k\in R_{n,1,2}, k'\in R_{n,1,3}} \le \frac{2 }{w_n^3}\summ k1n \summ{k'}kn u(k'-k) \le C \frac{n^2}{w_n^4}. \end{align*} Hence \begin{align*} {\mathbb E} M_n \le C \rho_n' m_n^3 \le C \frac{n^2 m_n^3}{w_n^4} =o(\lambda_n), \end{align*} where the last relation follows from $m_n\ll w_n^2/n$, the second part of \eqref{eq:m_n rate}. Next, by a decomposition of the double sum over $1\le i_1<i_2\le m_n$ and $1\le j_1<j_2\le m_n$ according to $|\{i_1,i_2\}\cap \{j_1,j_2\}|=0,1,2$, we have as $n\rightarrow\infty$, \[ {\mathbb E} N_n^2= {m_n \choose 2} {m_n-2\choose 2} \rho_n^2+ {\mathbb E} M_n+ {\mathbb E} N_n=({\mathbb E} N_n)^2 (1-o(1)) + O(\lambda_n), \] and hence ${\rm{Var}} (N_n) =o(({\mathbb E} N_n)^2)$, which yields the convergence in probability in \eqref{eq:lambda}. \end{proof} \begin{proof}[Proof of Proposition \ref{prop:EVT main}] We shall work with ${X}_{n,k}^*$ in \eqref{eq:X star}. The key underlying structure is that its partial maximum can be approximated by a collection of $\lambda_n$ (see \eqref{eq:lambda}) i.i.d.~random variables, as summarized in \eqref{eq:EVT main} below, and this approximation alone explains why the extra factor $\mathsf D_{2,\beta} = 1-2\beta$ shows up in the extremal index compared to the candidate extremal index. We start by defining the random index sets \[ J_n(k):= \{(i_1,i_2)\in \{1,\ldots,m_n\}^2:\ i_1<i_2, \ k\in R_{n,i_1,i_2} \}, \quad k=1,\ldots,n. \] So \begin{equation}\label{eq:max Xstar rewrite} \max_{k=1,\ldots,n} {X}_{n,k}^* = \pp{\frac{w_n^2}{ m_n^2}}^{1/\alpha} \max_{k=1,\ldots,n} \left(\sum_{(i_1,i_2)\in J_n(k)} V_{i_1,i_2}\right) , \end{equation} where the sum over $(i_1,i_2)\in J_n(k)$ is understood as $0$ if $J_n(k)=\emptyset$.
Define also \[I_n(k):=\bigcup_{(i_1,i_2)\in J_n(k)}\{ i_1,i_2 \}.\] In fact $|J_n(k)|={|I_n(k)|\choose 2}$, and hence $|J_n(k)|$ cannot take arbitrary integer values. But for simplicity of notation we shall still write a consecutive integer range for $|J_n(k)|$ below. Let \begin{align*} \mathcal K_n:=\Big\{&k\in \{1,\ldots,n\}: |J_n(k)|=1,~\text{and} \\ &\text{ either } J_n(k')=J_n(k) \text{ or } I_n(k')\cap I_n(k) =\emptyset \, \ \forall k'\in \{1,\ldots,n\}\setminus \{k\} \Big\}, \end{align*} and set $\mathcal K_n^c:= \{1,\ldots,n\}\setminus \mathcal K_n$. Then by independence, \begin{equation}\label{eq:max single} \max_{k\in \mathcal K_n} \left(\sum_{(i_1,i_2)\in J_n(k)} V_{i_1,i_2}\right)\eqd \max_{\ell = 1,\dots, \wt{N}_n} V_{\ell} \quad\mbox{ with }\quad \wt{N}_n:=|\cup_{k\in \mathcal{K}_n} J_n(k)|, \end{equation} where $\{V_\ell\}_{\ell\in{\mathbb N}}$ are i.i.d.\ with the same distribution as $V_{1,2}$ and independent of everything else. Here and below, when a maximum is performed over an empty index set, it is understood as $0$. Next, observe that $\wt{N}_n\le N_n$ with $N_n$ in \eqref{eq:N_n}. In addition, any $(i_1,i_2)$ satisfying $R_{n,i_1,i_2}\cap\{1,\ldots,n\}\neq \emptyset$ and $1\le i_1<i_2\le m_n$, but not included in $\cup_{k\in \mathcal{K}_n} J_n(k)$, must have been counted at least once by $M_n$ in \eqref{eq:M_n}. So $0\le N_n-\wt{N}_n\le M_n$. Recall $\lambda_n$ in \eqref{eq:lambda}. In summary, one can prove \begin{align} \lim_{n\to\infty}\mathbb P\pp{\frac{1}{b_n} \max_{k\in \mathcal K_n} {X}_{n,k}^*\le x} & = \lim_{n\to\infty} \mathbb P\pp{ \pp{\frac{w_n^2}{b_n^{\alpha} m_n^2}}^{1/\alpha}\max_{\ell=1,\dots, \wt{N}_n} V_{\ell}\le x}\nonumber\\ &= \lim_{n\to\infty}\mathbb P\pp{\pp{\frac{w_n^2}{b_n^{\alpha} m_n^2}}^{1/\alpha} \max_{ \ell=1,\dots, \floor{\lambda_n}} V_{\ell}\le x}\label{eq:EVT main} \\ & = \mathbb P\pp{\theta^{1/\alpha} Z_\alpha \le x}, \mbox{ for all } x>0.
\label{eq:EVT std} \end{align} Indeed, the first relation above follows from \eqref{eq:max Xstar rewrite} and \eqref{eq:max single}. We postpone the proof of \eqref{eq:EVT main} for a moment. Then \eqref{eq:EVT std} follows from the classical extreme-value limit theorem for $\lfloor \lambda_n \rfloor$ i.i.d.\ random variables with regularly varying tails (e.g., \cite[Proposition 1.11]{resnick87extreme}). The normalization, say $d_n$, for \[ \frac1{d_n} \max_{\ell=1,\dots,\floor{\lambda_n}} V_\ell\Rightarrow Z_\alpha, \] is well known to be determined by $\lim_{n\to\infty}\lambda_n\mathbb P(V_1>d_n) = 1$. It is elementary to verify that \begin{equation}\label{eq:V marginal tail} \mathbb P(V_1<-x)=\mathbb P(V_1>x)=\frac{1}{2}\mathbb P(U_1U_2<x^{-\alpha})\sim \frac{\alpha}{2} x^{-\alpha} \log x \end{equation} as $x\rightarrow\infty$. Recall also $\lambda_n \sim (\mathfrak q_{F,2}/2)nm_n^2/w_n^2$ and the requirement \eqref{eq:m_n rate}, which implies $\log(\lambda_n)\sim \log(w_n^2/n)\sim (1-2\beta)\log n$. So we have \begin{align*} d_n & \sim \pp{\frac1{2} \lambda_n\log \lambda_n}^{1/\alpha} \sim \pp{\frac{1-2\beta}2\lambda_n\log n}^{1/\alpha}\\ &\sim \pp{\frac{b_n^\alpha m_n^2}{w_n^2}}^{1/\alpha}\pp{{\mathfrak q_{F,2}(1-2\beta)}}^{1/\alpha} = \pp{\frac{b_n^\alpha m_n^2}{w_n^2}}^{1/\alpha}\theta^{1/\alpha}, \end{align*} yielding \eqref{eq:EVT std}. Now we check \eqref{eq:EVT main}. Introduce $f(x) := ((1/2)x\log x)^{1/\alpha}$, so that $d_n \sim f(\lambda_n)$. Introduce accordingly $\what d_n := f(\wt N_n) = ((1/2)\wt N_n\log\wt N_n)^{1/\alpha}$. By Lemma \ref{Lem:N M estimate}, we have $\wt{N}_n/\lambda_n\ConvP 1$ and $\lambda_n\rightarrow\infty$ as $n\rightarrow\infty$, and it follows that $\what d_n/d_n\ConvP 1$ since $f$ is a regularly varying function (e.g.~\citep[Theorem 1.12]{kulik20heavy}). Write $Z_n = f(n)^{-1} \max_{\ell = 1,\dots,n}V_\ell$ and $d_n ^{-1} \max_{\ell=1,\dots,\wt N_n}V_\ell = (\what d_n/d_n) \cdot Z_{\wt N_n}$.
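For completeness, one can check directly that $d_n\sim f(\lambda_n)$ fulfils the defining relation above: by \eqref{eq:V marginal tail}, with $d_n^{-\alpha}=2/(\lambda_n\log\lambda_n)$ and $\log d_n=\alpha^{-1}\log((1/2)\lambda_n\log\lambda_n)\sim\alpha^{-1}\log\lambda_n$,

```latex
\lambda_n\,\mathbb P(V_1>d_n)
  \sim \lambda_n\cdot\frac{\alpha}{2}\, d_n^{-\alpha}\log d_n
  \sim \lambda_n\cdot\frac{\alpha}{2}\cdot\frac{2}{\lambda_n\log\lambda_n}\cdot\frac{\log\lambda_n}{\alpha}
  = 1 .
```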
So to obtain \eqref{eq:EVT main} it remains to argue that $Z_{\wt N_n}$ and $Z_n$ have the same limit distribution as $n\to\infty$. The last step is a standard exercise. To complete the proof, it remains to show that as $n\rightarrow\infty$, $b_n^{-1} \max_{k\in \mathcal K_n^c} {X}_{n,k}^*\ConvP 0$. We fix $p^*$ large enough so that $1-(p^*+1)\beta<0$ (recall $\beta\in(0,1/2)$). Then \begin{align*} \mathbb P(|J_n(k)|> p^* \text{ for some }k =1,\ldots,n) & \le \sum_{k=1}^n \binom{m_n}{p^*+1}w_n^{-p^*-1} \le C \frac{w_n^{p^*+1}}{n^{p^*}}\le C n^{1-(p^*+1)\beta}\rightarrow 0, \end{align*} where we have used \eqref{eq:m_n rate} in the second inequality above. Hence with probability tending to 1 as $n\rightarrow\infty$, we have \begin{align*} \frac{1}{b_n} \max_{k\in \mathcal K_n^c} {X}_{n,k}^* = \pp{\frac{w_n^2}{b_n^{\alpha} m_n^2}}^{1/\alpha} \max_{j=1,\ldots,p^*} \max_{ k\in \mathcal K_n^c } \sum_{(i_1,i_2)\in J_n(k),\ |J_n(k)|=j} V_{i_1,i_2}. \end{align*} So it suffices to show for fixed $j=1,\ldots,p^*$ as $n\rightarrow\infty$ that \begin{equation}\label{eq:goal id} \pp{\frac{w_n^2}{b_n^{\alpha} m_n^2}}^{1/\alpha} \max_{k\in\mathcal K_n^c } W_j(k) \ConvP 0 \quad\mbox{ with }\quad W_j(k)=\sum_{(i_1,i_2)\in J_n(k),\ |J_n(k)|=j} |V_{i_1,i_2}|. \end{equation} Note that $\{W_j(k)\}_{k\in\mathcal K_n^c}$ are, given $\mathcal K_n^c$, identically distributed but possibly dependent random variables, and the total number of distinct $W_j(k)$'s, say $K_j(n)$, does not exceed $N_n-\wt{N}_n\le M_n$. On the other hand, in view of \eqref{eq:V marginal tail}, \begin{equation}\label{eq:tail bound W} \mathbb P(W_j(1)>x)\le j \mathbb P(|V_{1,2}|>x/j)\le C x^{-\alpha} \log x \end{equation} for all $x>0$ and some constant $C>0$. Suppose $\{\wt{W}_j(\ell)\}_{\ell\in {\mathbb N}}$ are i.i.d.\ copies of $W_j(1)$ and independent of everything else.
Then by the tail bound \eqref{eq:tail bound W}, the tail estimate \eqref{eq:V marginal tail}, the extreme-value limit theorem for $|V_\ell|$ and the fact that $M_n/\lambda_n\ConvP 0$ (Lemma \ref{Lem:N M estimate}), we have \[ \mathbb P\pp{ \pp{\frac{w_n^2}{b_n^{\alpha} m_n^2}}^{1/\alpha} \max_{\ell=1,\ldots,K_j(n)} \wt{W}_j(\ell)>\epsilon}\le C \mathbb P\pp{ \pp{\frac{w_n^2}{b_n^{\alpha} m_n^2}}^{1/\alpha} \max_{\ell=1,\ldots,M_n} |V_\ell|>\epsilon}\rightarrow 0 \] for any $\epsilon>0$ as $n\rightarrow\infty$. Then \eqref{eq:goal id} follows from \cite[Proposition 9.7.3]{samorodnitsky16stochastic}. \end{proof} \subsection{Proof of Lemma \ref{Lem:reduction}} Lemma \ref{Lem:reduction} follows if one establishes that \begin{align*} A_n & := \frac{w_n^{2/\alpha}}{b_n}\max_{k=1,\ldots,n}\sum_{1\le i_1 \le m_n<i_2} \frac{\varepsilon_{i_1}\varepsilon_{i_2}}{\Gamma_{i_1}^{1/\alpha}\Gamma_{i_2}^{1/\alpha}}\inddd{k\in R_{n,i_1,i_2}} \ConvP 0, \\ B_n& := \frac{w_n^{2/\alpha}}{b_n} \max_{k=1,\ldots,n}\sum_{ m_n<i_1<i_2 } \frac{\varepsilon_{i_1}\varepsilon_{i_2}}{\Gamma_{i_1}^{1/\alpha}\Gamma_{i_2}^{1/\alpha}}\inddd{k\in R_{n,i_1,i_2}} \ConvP 0, \end{align*} as $n\rightarrow\infty$. First, fix a number $m>4/\alpha$. Assume without loss of generality that $m_n>4/\alpha$. Then write \[ A_n=A_{n,1}+A_{n,2}:= \frac{w_n^{2/\alpha}}{b_n} \max_{k=1,\ldots,n}\sum_{m \le i_1\le m_n<i_2} \cdots + \frac{w_n^{2/\alpha}}{b_n}\max_{k=1,\ldots,n} \sum_{1 \le i_1<m, i_2>m_n} \cdots. 
\] Then, bounding the maximum of non-negative numbers by their sum and using the orthogonality induced by $\varepsilon_{i_1}\varepsilon_{i_2}$ together with \citep[inequality (3.2)]{samorodnitsky89asymptotic}, we have \begin{align*} {\mathbb E} A_{n,1}^2 & \le \frac{ w_n^{4/\alpha}}{b_n^2} {\mathbb E}\sum_{k=1}^n \pp{\sum_{m \le i_1\le m_n<i_2} \frac{\varepsilon_{i_1}\varepsilon_{i_2}}{\Gamma_{i_1}^{1/\alpha}\Gamma_{i_2}^{1/\alpha}} 1_{\{k\in R_{n,i_1,i_2}\}}}^2 \\ &=\frac{ w_n^{4/\alpha}}{b_n^2} \sum_{m \le i_1\le m_n<i_2} {\mathbb E}\pp{\Gamma_{i_1}^{-2/\alpha} \Gamma_{i_2}^{-2/\alpha}} \pp{\sum_{k=1}^n \mathbb P\pp{k\in R_{n,i_1,i_2}}} \\&\le C \frac{n w_n^{4/\alpha-2}}{b_n^2} \sum_{m \le i_1\le m_n<i_2} i_1^{-2/\alpha} i_2^{-2/\alpha} \le C \frac{n w_n^{4/\alpha-2}}{b_n^2} m_n^{1-2/\alpha} \rightarrow 0, \end{align*} where in the last step we have used the relation $\frac{w_n^2}{n \log^{2/(2-\alpha)}(n)} \ll m_n $ from the first part of \eqref{eq:m_n rate}. For $A_{n,2}$, since $i_1$ takes finitely many values, it suffices to show for fixed $i_1$ that \begin{align*} b_n^{-2} w_n^{4/\alpha} {\mathbb E} \left| \max_{k=1,\ldots,n} \sum_{i_2>m_n} \Gamma_{i_2}^{-1/\alpha} \varepsilon_{i_2} 1_{\{k\in R_{n,i_1,i_2}\}} \right|^2 \le C n b_n^{-2} w_n^{4/\alpha-2} m_n^{1-2/\alpha}\rightarrow 0, \end{align*} which follows as above. For $B_n$, similarly we have \[ {\mathbb E} B_n^2\le C n b_n^{-2} w_n^{4/\alpha-2} \sum_{m_n< i_1<i_2} i_1^{-2/\alpha} i_2^{-2/\alpha} \le C n b_n^{-2} w_n^{4/\alpha-2} m_n^{2-4/\alpha} \rightarrow 0. \] \section{Anti-clustering condition when $\beta_p<0$}\label{sec:anti} Assume $\beta_p<0$ in this section. Based on the convergence of the tail processes, we can in fact prove the convergence of the single-block cluster point process; the proof is based on the verification of the anti-clustering condition \eqref{eq:AC}.
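In the form relevant here (cf.\ \citep[Condition 4.1]{basrak09regularly}), the anti-clustering condition \eqref{eq:AC} requires that for every $\eta>0$,
\[
\lim_{\ell\to\infty}\limsup_{n\to\infty} \mathbb P\pp{\max_{\ell\le |k|\le r_n}|X_k|>a_n\eta\;\middle\vert\; |X_0|>a_n\eta} = 0.
\]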
Strictly speaking, in Lemma \ref{lem:AC} below we prove a stronger condition, known as the $\mathcal S(r_n,a_n)$ condition in \citep[p.~243]{kulik20heavy}. This condition could have other consequences, notably when proving limit theorems for tail empirical processes \citep[Chapter 9]{kulik20heavy}. It remains an interesting question whether one could prove limit theorems for tail empirical processes for our model: the classical approach also relies on a certain mixing-type condition and does not seem applicable here. \begin{Prop} Assume $\beta_p<0$. For $a_n\to \infty$, $r_n\to\infty$ and $r_n = o(\log a_n)$, with $M_{r_n}:=\max_{k=1,\dots,r_n}|X_k|$, the limit of \[ \mathcal L\pp{\summ i1{r_n}\delta_{X_i/a_n}\;\middle\vert\; M_{r_n}>a_n} \] in the space $\mathfrak M_p(\wb{\mathbb R}\setminus\{0\})$ is $\mathfrak G\delta_\varepsilon$, a unit mass at a Rademacher random variable $\varepsilon$ multiplied by a geometric random variable $\mathfrak G$ with mean $1/\mathfrak q_{F,p}$ and independent of $\varepsilon$. \end{Prop} \begin{proof} By Theorem \ref{thm:tail}, \cite[Theorem 4.3]{basrak09regularly} and a continuous mapping argument (cf.\ \citep[Remark 4.6]{basrak09regularly}), the limit is a point measure \[ \sif i0 \delta_{\varepsilon\Theta_i^*} \] restricted to $\mathfrak M_p(\wb {\mathbb R}\setminus\{0\})$ provided that the anti-clustering condition \eqref{eq:AC} \citep[e.g.~Condition 4.1]{basrak09regularly} holds, which is verified in Lemma \ref{lem:AC} below. Note that because of the restriction, the point process above is actually $\spp{\sif i0 \inddd{\Theta_i^* = 1}}\delta_{\varepsilon }$ (note that $\Theta_0^*= 1$ always), and the summation in the parentheses is readily checked to be a geometric random variable with the desired law, by examining the definition of $\vv\Theta^*$ in \eqref{eq:Theta*} and the renewal property.
\end{proof} The following lemma implies the anti-clustering condition \eqref{eq:AC} by a simple argument based on a union bound and the stationarity of the sequence. \begin{Lem}[Condition $\mathcal S(r_n,a_n)$]\label{lem:AC} Assume $\beta_p<0$. For $a_n\to\infty$, $r_n\to\infty$ and $r_n = o(\log a_n)$, \begin{equation}\label{eq:union} \mathbb P\pp{\max_{\ell\le |k|\le r_n}|X_k|>a_n\eta\;\middle\vert\; |X_0|>a_n\eta} \le \frac2{\mathbb P(|X_0|>a_n\eta)}\sum_{k=\ell}^{r_n}\mathbb P(|X_0|>a_n\eta, |X_{k}|>a_n\eta)\rightarrow 0. \end{equation} \end{Lem} \begin{proof} Unlike in the proof of Theorem \ref{thm:tail}, we only need a series representation for the two-dimensional vector $(X_0,X_K)$, for each fixed $K\in{\mathbb N}$. This can be done in the same way as \eqref{eq:p>=1} is derived, with the only modification that the restriction $\vv\tau^*\cap\{0,\dots,m\}\ne\emptyset$ is now replaced by $\vv\tau^*\cap\{0,K\}\ne\emptyset$. More precisely, let $\{\wt R_{K,i}\}_{i\in{\mathbb N}}$ be i.i.d.~copies of $\wt R_K$, whose law, denoted by $\wt \mu_K$, is determined by \[ \frac{d\wt \mu_K}{d\mu^*} = \frac{\inddd{\vv\tau^*\cap\{0,K\}\ne\emptyset}}{\mu^*(\{\vv\tau^*:\vv\tau^*\cap \{0,K\}\ne\emptyset\})} = \frac{\inddd{\vv\tau^*\cap\{0,K\}\ne\emptyset}}{\wt w_K}, \] and one can compute by the inclusion-exclusion formula that $\wt w_K =2-u(K)$. Then we arrive at \[ \ccbb{X_k}_{k=0,K}\eqd\ccbb{\wt w_K^{p/\alpha}\sum_{{\boldsymbol i}\in\mathcal D_p} \frac{[\varepsilon_{ {\boldsymbol i} }]}{[\Gamma_{ {\boldsymbol i} }]^{1/\alpha}}\inddd{k\in \wt R_{K,i}}}_{k=0,K}, \] where as usual $\{\wt R_{K,i}\}_{i\in{\mathbb N}}$ are independent of $\{\varepsilon_i\}_{i\in{\mathbb N}}$ and $\{\Gamma_i\}_{i\in{\mathbb N}}$. This time, introduce (similarly to \eqref{eq:ell_k}), for $k=0,K$, \begin{align*} \what\ell_{k}(1) & :=\min\ccbb{j\in{\mathbb N}:k\in \wt R_{K,j}},\nonumber\\ \what\ell_{k}(s) &:=\min\ccbb{j>\what \ell_{k}(s-1):k\in \wt R_{K,j}},\quad s\ge 2.
\end{align*} We can write \begin{equation}\label{eq:what Tk} \what X_k:= \wt w_K^{p/\alpha}\sum_{{\boldsymbol i}\in\mathcal D_p} \frac{[\varepsilon_{\what\ell_{k}({\boldsymbol i})}]}{[\Gamma_{\what\ell_{k}({\boldsymbol i})}]^{1/\alpha}} \quad\mbox{ with }\quad \what T_k := \wt w_K^{p/\alpha} \frac{[\varepsilon_{\what\ell_{k}((1,\dots,p))}]}{[\Gamma_{\what\ell_{k}((1,\dots,p))}]^{1/\alpha}}, \ k=0,K. \end{equation} Then, $(\what X_0,\what X_K)\eqd(X_0,X_K)$ and $(\what X_k, \what T_k)\eqd(X_k,T_k)$, $k=0,K$. Then \begin{align} \mathbb P(|X_0|>a_n\eta,|X_K|>a_n\eta) & \le \mathbb P\pp{|\what T_{0}|>a_n\eta/2, |\what T_{K}|>a_n\eta/2} + 2\mathbb P\pp{|\what X_{0}-\what T_{0}|>a_n\eta/2}\nonumber\\ & \le \mathbb P\pp{|\what T_{0}|>a_n\eta/2, |\what T_{K}|>a_n\eta/2} + C \mathsf q_{p-1}\pp{\pp{a_n\eta/2}^\alpha},\label{eq:AC1} \end{align} where the last step follows from \eqref{eq:remainder_rate}. Introduce \[ \what H_{k} := \inddd{\what \ell_{k}((1,\dots,p)) = (1,\dots,p)}, \quad k=0,K. \] Therefore we have, using the representation \eqref{eq:what Tk} again, \begin{align} \mathbb P & \pp{|\what T_{0}|>\frac{a_n\eta}2, |\what T_{K}|>\frac{a_n\eta}2}\nonumber \\ & \quad\le \mathbb P\pp{\frac{(2-u(K))^{p/\alpha}}{[\Gamma_{(1,\dots,p)}]^{1/\alpha}}>\frac{a_n\eta}2}\mathbb P\pp{\what H_{0} = \what H_{K} = 1} + \mathbb P\pp{\frac{(2-u(K))^{p/\alpha}}{\pp{\Gamma_1\cdots\Gamma_{p-1}\Gamma_{p+1}}^{1/\alpha}}>\frac{a_n\eta}2}\nonumber\\ & \quad\label{eq:AC2} \le C\mathsf q_p((a_n\eta)^\alpha/2^{p+\alpha})\mathbb P\pp{\what H_{0} = \what H_{K} = 1} + C\mathsf q_{p-1}((a_n\eta)^\alpha/2^{p+\alpha}). 
\end{align} It is straightforward to compute \begin{align} \mathbb P\pp{\what H_{0} = \what H_{K} = 1} & = \mathbb P\pp{\what H_{0} = 1}\mathbb P\pp{\what H_{K} = 1\;\middle\vert\; \what H_{0} = 1} \le \mathbb P\pp{\what H_{K} = 1\;\middle\vert\; \what H_{0} = 1} = u(K)^p.\label{eq:AC3} \end{align} To sum up, by \eqref{eq:AC1}, \eqref{eq:AC2} and \eqref{eq:AC3}, the right-hand side of \eqref{eq:union} is further bounded from above by \[ \frac{C\mathsf q_p((a_n\eta)^\alpha/2^{p+\alpha})}{\mathsf q_p((a_n\eta)^\alpha)}\sif k\ell u(k)^{p} + \frac{Cr_n\mathsf q_{p-1}((a_n\eta)^\alpha/2^{p+\alpha})}{\mathsf q_p((a_n\eta)^\alpha)} \le C\sif k\ell u(k)^{p}+C\frac {r_n}{\log a_n}. \] Recall the definition of $u(k)$ in \eqref{eq:u}. Since $\beta_p<0$, $\sif k1 u(k)^{p}<\infty$. We have thus proved the desired result. \end{proof} \begin{acks}[Acknowledgments] The authors are grateful to Gennady Samorodnitsky for suggesting the investigation of the tail processes for stable-regenerative multiple-stable processes and for several helpful discussions, and to Rafa\l~Kulik for very detailed explanations regarding tail processes and extremal indices, a very careful reading of an early version of the paper, and stimulating discussions. The authors would also like to thank Olivier Wintenberger for pointing out the reference \citep{smith88counterexample}. The authors thank two anonymous referees for their careful reading and constructive comments. \end{acks} \begin{funding} The second author was supported in part by Army Research Office, USA (W911NF-20-1-0139). \end{funding} \end{document}
\begin{document} \allowdisplaybreaks \title[Generic Initial Ideal] {Generic Initial Ideals of Artinian ideals having Lefschetz Properties or the strong Stanley Property} \author[Jea Man Ahn, Young Hyun Cho, and Jung Pil Park] {Jea Man Ahn$^{\dag}$, Young Hyun Cho$^{\ddag}$, and Jung Pil Park$^{\dag\dag}$} \address{$\dag$ Computational Sciences, Korea Institute for Advanced Study, Seoul 130-722, South Korea } \email{[email protected]} \address{$\ddag$ Department of Mathematical Sciences and Research Institute for Mathematics, Seoul National University, Seoul 151-747, South Korea } \email{[email protected]} \address{$\dag\dag$ National Institute for Mathematical Sciences, Daejeon 305-340, South Korea } \email{[email protected]} \date{\today} \thanks{The second author is partially supported by BK21, and the third author is supported by National Institute for Mathematical Sciences.} \begin{abstract} For a standard Artinian $k$-algebra $A=R/I$, we give equivalent conditions for $A$ to have the weak (or strong) Lefschetz property or the strong Stanley property in terms of the minimal system of generators of the generic initial ideal $\mathrm{gin}(I)$ of $I$ under the reverse lexicographic order. Using the equivalent condition for the weak Lefschetz property, we show that some graded Betti numbers of $\mathrm{gin}(I)$ are determined just by the Hilbert function of $I$ if $A$ has the weak Lefschetz property. Furthermore, for the case that $A$ is a standard Artinian $k$-algebra of codimension 3, we show that all graded Betti numbers of $\mathrm{gin}(I)$ are determined by the graded Betti numbers of $I$ if $A$ has the weak Lefschetz property. If $A$ has the strong Lefschetz (resp. Stanley) property, then we show that the minimal system of generators of $\mathrm{gin}(I)$ is determined by the graded Betti numbers (resp. by the Hilbert function) of $I$.
\end{abstract} \maketitle \section{\sc Introduction} Let $I$ be a homogeneous ideal of the polynomial ring $R=k[x_1,\ldots,x_n]$ over a field $k$. Throughout this paper, we use only a field $k$ of characteristic 0. Since Fr\"{o}berg \cite{Fr} introduced the conjecture about the Hilbert series of generic algebras, it has been a notable problem to determine under which conditions $A=R/I$ has the weak (or strong) Lefschetz property, which will be explicitly defined in Definition \ref{Def:WSLP}. Recently, many authors have obtained results on this problem. Watanabe \cite{Wa} showed that ``most'' Artinian Gorenstein rings with fixed socle degree have the strong Lefschetz property \ (cf. \cite{St}). In particular, it was proved by Watanabe that any generic Artinian Gorenstein $k$-algebra has the strong Lefschetz property. Harima, Migliore, Nagel and Watanabe \cite{HMNW} showed that $A$ has the strong Lefschetz property \ if $I$ is any complete intersection ideal of codimension 2, and has the weak Lefschetz property \ if $I$ is any complete intersection ideal of codimension 3. It remains an open problem whether every complete intersection Artinian $k$-algebra $A$ has the weak Lefschetz property \ in the higher codimension case. In this paper, we study Lefschetz properties and the strong Stanley property from the viewpoint of generic initial ideals. We begin Section 2 with definitions of the weak (or strong) Lefschetz property and the strong Stanley property of $A=R/I$. Then we introduce some results of Wiebe, who first investigated the Lefschetz properties from the viewpoint of generic initial ideals. Wiebe \cite{Wie} proved that $A=R/I$ has the weak {\rm(}resp. strong{\rm)} Lefschetz property \ if and only if $R/\mathrm{gin}(I)$ has the weak {\rm(}resp. strong{\rm)} Lefschetz property, where $\mathrm{gin}(I)$ is the generic initial ideal of $I$ with respect to the reverse lexicographic order (see Proposition \ref{Prop:Equiv.Cond.I.and.GinI}).
Wiebe also gave an equivalent condition for $A=R/I$ to have the weak Lefschetz property \ in terms of the graded Betti numbers of $\mathrm{gin}(I)$ (see Proposition \ref{Cond:BettiNumberForWLP}). We will give other equivalent conditions and compute the graded Betti numbers of $\mathrm{gin}(I)$ more precisely. To do so, we introduce the first reduction number and the minimal system of generators of $\mathrm{gin}(I)$ in Section 2. In Section 3, we give equivalent conditions for $A$ to have the weak Lefschetz property \ (Proposition \ref{Prop:WLP}), the strong Lefschetz property \ (Theorem \ref{Thm:SLP}), and the strong Stanley property \ (Theorem \ref{Thm:SSP}), respectively, in terms of the minimal system of generators of $\mathrm{gin}(I)$. As a result of Proposition \ref{Prop:WLP}, we show that some graded Betti numbers of $\mathrm{gin}(I)$ can be computed just from the Hilbert function of $I$ (Corollary \ref{Cor:BettiForWLP}). If we restrict ourselves to the case $R=k[x_1,x_2,x_3]$, then we can obtain more information on the minimal system of generators of $\mathrm{gin}(I)$ from the Hilbert function or the graded Betti numbers of $R/I$. In Section 4, under this restriction on $R$, we show that if $R/I$ has the weak Lefschetz property, then the minimal generators $T$ of $\mathrm{gin}(I)$ with $\max(T) \le 2$ are uniquely determined by the graded Betti numbers of $R/I$ (Proposition \ref{Prop:UniquelyDetermineMinimalGeneratorWithWLP}). Together with Corollary \ref{Cor:BettiForWLP}, this implies that all graded Betti numbers of $\mathrm{gin}(I)$ can be computed from the graded Betti numbers of $I$. If $R/I$ has the strong Lefschetz property, then we show that the minimal system of generators of $\mathrm{gin}(I)$ is uniquely determined by the graded Betti numbers of $R/I$ (Proposition \ref{Prop:GinSLPUnique}).
Finally, we show that if $R/I$ has the strong Stanley property, then the minimal system of generators of $\mathrm{gin}(I)$ is uniquely determined by the Hilbert function of $R/I$ (Proposition \ref{Prop:GinSSPUnique}). \vskip 1cm \section{\sc Preliminaries} Let $R=k[x_1,x_2,\ldots,x_n]$ be the polynomial ring over a field $k$. For a homogeneous ideal $I$ and a linear form $L$ in $R$, we have the multiplication map \begin{equation}\label{Mor:MultiMap} \times L^{i} : (R/I)_d \rightarrow (R/I)_{d+i}, \end{equation} for each $i \ge 1$, $d \ge 0$. \begin{Defn}\label{Def:WSLP} A standard graded Artinian $k$-algebra $A=R/I$ is said to have the weak {\rm(}resp. strong{\rm)} Lefschetz property \ if there exists a linear form $L$, called a weak {\rm(}resp. strong{\rm)} Lefschetz element, such that the multiplication in {\rm(}\ref{Mor:MultiMap}{\rm)} has maximal rank for each $d$ and $i=1$ {\rm(}resp. for each $i \ge 1${\rm)}. \end{Defn} Consider the exact sequence induced by $\times L^{i}$ \[ 0 \rightarrow ((I:L^{i})/I)_{d} \rightarrow (R/I)_{d} \xrightarrow[\qquad]{\times L^{i}} (R/I)_{d+i} \rightarrow (R/(I+L^{i}))_{d+i} \rightarrow 0. \] If the multiplication $\times L^{i}$ has maximal rank, then either $((I:L^{i})/I)_d$ or $(R/(I+L^{i}))_{d+i}$ must be 0 for each $d$. There is a stronger property that a standard Artinian $k$-algebra may have: the strong Stanley property, so named by Watanabe in \cite{Wa}, though it was originally called the hard Lefschetz property. \begin{Defn}\label{Def:SSP} A standard graded Artinian $k$-algebra $A=R/I=\bigoplus_{i=0}^{t}A_i$, where $t = \max \{ i \ | \, A_i \neq 0 \}$, is said to have the strong Stanley property \ if there exists a linear form $L \in R$ such that the multiplication map $\times L^{t-2i}:A_i \rightarrow A_{t-i}$ is bijective for each $i=0,1,\ldots,[t/2]$. In this case the element $L$ is called a strong Stanley element.
\end{Defn} From the following exact sequence \begin{equation}\label{ExSeq:SSP} 0 \rightarrow ((I:L^{t-2i})/I)_{i} \rightarrow (R/I)_{i} \xrightarrow[\qquad]{\times L^{t-2i}} (R/I)_{t-i} \rightarrow (R/(I+L^{t-2i}))_{t-i} \rightarrow 0, \end{equation} we can see that $A=R/I$ has the strong Stanley property \ if and only if $(I:L^{t-2i})_i = I_i$ and $R_{t-i} = (I+L^{t-2i})_{t-i}$ for all $i=0,1,\ldots,[t/2]$. \begin{Remk}\label{Remk:SymmetricSLP} A standard Artinian $k$-algebra $A=R/I$ has the strong Stanley property \ if and only if $A$ has the strong Lefschetz property \ and the Hilbert function of $A$ is symmetric. \end{Remk} In this paper, we will investigate those properties from the viewpoint of the generic initial ideal of $I$ with respect to the reverse lexicographic order. The notion of a generic initial ideal originates with the following theorem of Galligo in \cite{Ga}. \begin{Thm}\cite{Ga} For any multiplicative monomial order $\tau$ and any homogeneous ideal $I \subset R$, there is a Zariski open subset $U \subset GL_{n}(k)$ such that the initial ideals $\mathrm{in}_{\tau}(gI)$ are constant over all $g \in U$. We define the generic initial ideal of $I$, denoted by $\mathrm{gin}_{\tau}(I)$, to be \[ \mathrm{gin}_{\tau} (I):=\mathrm{in}_{\tau} (gI) \] for $g \in U$. \end{Thm} The most important property of generic initial ideals is that they are {\it Borel-fixed}. In the characteristic $0$ case, note that a monomial ideal $I$ is Borel-fixed if and only if $I$ is {\it strongly stable}: i.e., if $T$ is a monomial, \[ x_i T \in I \Rightarrow x_j T \in I, \qquad \forall j\le i. \] For a monomial ideal $I$, we denote by $\mathcal{G}(I)$ the minimal system of generators of $I$, and by $\mathcal{G}(I)_d$ the set of minimal generators of $I$ of degree $d$. For a monomial $T$ of $R$, we set \[ \max(T) = \max \{i \ | \, x_i \text{ divides } T \}. \] The Eliahou-Kervaire Theorem gives us an easy way to compute the graded Betti numbers of a stable monomial ideal $I$ of $R$.
\begin{Thm}[Eliahou-Kervaire] \cite{EK} Let $I$ be a stable monomial ideal of $R$. Then the graded Betti number $\beta_{q,i}(I)=\dim_k{\rm Tor}^R_{q}(I,k)_i$ of $I$ is given by \[ \beta_{q,i}(I) = \sum_{T \in \mathcal{G}(I)_{i-q}} \binom{\max(T)-1}{q}, \] for all integers $q$ and $i$. \end{Thm} Wiebe studied the Lefschetz properties from the viewpoint of generic initial ideals, and obtained the following results. \begin{Lem}\cite{Wie} If $R/I$ is an Artinian ring such that $I$ is Borel-fixed, then the following conditions are equivalent: \begin{enumerate} \item $R/I$ has the weak {\rm(}resp. strong{\rm)} Lefschetz property. \item $x_n$ is a weak {\rm(}resp. strong{\rm)} Lefschetz element \ on $R/I$. \end{enumerate} \end{Lem} \begin{Prop}\cite{Wie}\label{Prop:Equiv.Cond.I.and.GinI} Let $R/I$ be a standard Artinian $k$-algebra, and $\mathrm{gin}(I)$ the generic initial ideal of $I$ with respect to the reverse lexicographic order. Then $R/I$ has the weak {\rm(}resp. strong{\rm)} Lefschetz property \ if and only if $R/\mathrm{gin}(I)$ has the weak {\rm(}resp. strong{\rm)} Lefschetz property. \end{Prop} Consider the following exact sequence \begin{align}\label{ExSeq:LPGin} \begin{split} 0 \rightarrow ((\mathrm{gin}(I):x_n^i)/\mathrm{gin}(I))_{d} \rightarrow (R/\mathrm{gin}(I)&)_{d} \xrightarrow[\quad]{\times x_n^i} (R/\mathrm{gin}(I))_{d+i} \\ & \rightarrow (R/(\mathrm{gin}(I)+x_n^i))_{d+i} \rightarrow 0. \end{split} \end{align} Then we can see that $R/I$ has the weak {\rm(}resp. strong{\rm)} Lefschetz property \ if and only if either $(\mathrm{gin}(I):x_n^i)_d = \mathrm{gin}(I)_d$ or $R_{d+i} = (\mathrm{gin}(I)+x_n^i)_{d+i}$ for each $d$ and $i=1$ (resp. for any $i \ge 1$). The following proposition is an equivalent condition for $R/I$ to have the weak Lefschetz property \ in terms of the graded Betti numbers of $\mathrm{gin}(I)$.
\begin{Prop}\cite{Wie}\label{Cond:BettiNumberForWLP} Let $R/I$ be a standard Artinian $k$-algebra, and $\mathrm{gin}(I)$ the generic initial ideal of $I$ with respect to the reverse lexicographic order. If $d$ is the minimum of all $j \in \mathbb{N}$ with $\beta_{n-1,n-1+j}(\mathrm{gin}(I)) > 0$, then the following conditions are equivalent: \begin{enumerate} \item $R/I$ has the weak Lefschetz property. \item $\beta_{n-1,n-1+j}(\mathrm{gin}(I)) = \beta_{0,j}(\mathrm{gin}(I)) $ for all $j > d$. \item $\beta_{i,i+j}(\mathrm{gin}(I)) = \binom{n-1}{i}\beta_{0,j}(\mathrm{gin}(I)) $ for all $j > d$ and all $i$. \end{enumerate} \end{Prop} In Proposition \ref{Prop:WLP}, we will give another equivalent condition for $R/I$ to have the weak Lefschetz property. In Corollary \ref{Cor:BettiForWLP}, we also give a tool to compute the graded Betti numbers of $\mathrm{gin}(I)$ from the Hilbert function of $R/I$ when $R/I$ has the weak Lefschetz property. For these, we need the notion of reduction numbers. \begin{Defn} If $I$ is a homogeneous ideal of $R$ with $\dim R/I = s$, then we define the $i$-th reduction number of $R/I$ to be $r_i(R/I) = \min\{t \ | \, \dim_k(R/(I+J_i))_{t+1} = 0 \}$ for $i \ge s$, where $J_i$ is an ideal generated by $i$ general linear forms in $R$. \end{Defn} The following theorem implies that the reduction numbers of $R/I$ and $R/\mathrm{gin}(I)$ coincide if we use the reverse lexicographic order. \begin{Thm}\cite{HT} For a homogeneous ideal $I$ of $R$, if $\mathrm{gin}(I)$ is a generic initial ideal of $I$ with respect to the reverse lexicographic order, then \[ r_{i}(R/I) = r_{i}(R/\mathrm{gin}(I)) = \min \{t \ | \, x_{n-i}^{t+1} \in \mathrm{gin}(I) \}. \] \end{Thm} In what follows, we will use only the reverse lexicographic order as a multiplicative monomial order. Suppose that $I$ is a homogeneous ideal of $R$ such that $R/I$ is a Cohen-Macaulay ring with $\dim R/I = n-r$.
In the paper \cite{CCP}, $\mathcal{G}(\mathrm{gin}(I))$ is completely determined by the positive integer $f_1$ and functions $f_i:\mathbb{Z}^{i-1}_{\ge 0} \longrightarrow \mathbb{Z}_{\ge 0} \cup \{\infty\}$ defined as follows: \begin{align}\label{f's} \begin{split} f_1=&\min \{t \ | \, x^t_1 \in \mathrm{gin}(I) \} \text{ and }\\ f_i(\alpha_1, \ldots, \alpha_{i-1}) =& \min \{ t \ | \, x^{\alpha_1}_1 \cdots x^{\alpha_{i-1}}_{i-1} x^{t}_i \in \mathrm{gin}(I) \}, \end{split} \end{align} for each $ 2 \le i \le r$. \begin{Lem}\cite{CCP} \label{Lem:f'sProperty} Let $f_1,\ldots,f_r$ be defined as in $(\ref{f's})$. For $2 \le i \le r$, suppose that $\alpha_1,\ldots,\alpha_{i-1}$ are integers such that $0 \le \alpha_1 < f_1$ and $0 \le \alpha_{j} < f_{j}(\alpha_1, \ldots, \alpha_{j-1})$ for each $2 \le j\le i-1$. Then we have \begin{enumerate} \item $0<f_i(\alpha_1, \ldots, \alpha_{i-1}) <\infty $, \item $x^{\alpha_1}_1 \cdots x^{\alpha_{i-1}}_{i-1} x^{f_i(\alpha_1, \ldots, \alpha_{i-1})}_i \in \mathcal{G}(\mathrm{gin}(I))$, and \item if $1 \le j \le i-1$ and $\alpha_j \ge 1$, then \[f_i(\alpha_1, \ldots, \alpha_j, \ldots, \alpha_{i-1}) \le f_i(\alpha_1, \ldots, \alpha_j-1, \ldots, \alpha_{i-1}) - 1. \] \end{enumerate} \end{Lem} For $ 1 \le i \le r-1$, let \begin{equation} \label{Set:IndexMinGen} \begin{split} J_i = \left\{(\alpha_1,\ldots,\alpha_{i}) \left| \begin{array}{l} 0 \le \alpha_1 < f_1, \ \text{and for each} \ 2 \le j \le i, \\ 0 \le \alpha_{j} < f_{j}(\alpha_1, \ldots, \alpha_{j-1}) \end{array} \right. \right\}, \end{split} \end{equation} and let \begin{equation} \label{Set:MinGen} \begin{split} \mathcal{G}=\{x_1^{f_1}\} \cup \left\{ x^{\alpha_1}_1 \cdots x^{\alpha_{i-1}}_{i-1} x^{f_i(\alpha_1, \ldots, \alpha_{i-1})}_i \left| \begin{array}{l} (\alpha_1,\ldots,\alpha_{i-1}) \in J_{i-1}, \\ \text{for} \ 2 \le i \le r \end{array} \right. \right\}. \end{split} \end{equation} \begin{Prop}\cite{CCP}\label{Prop:MinimalGen} Let $f_1,\ldots,f_r$ be defined as in $(\ref{f's})$.
Then the minimal system of generators of $\mathrm{gin}(I)$ is $\mathcal{G}$ in $(\ref{Set:MinGen})$. \end{Prop} \begin{Remk}\label{Cond:ElementInJ_i} \begin{enumerate} \item Let $1 \le i \le r-1$ and $(\alpha_1,\ldots,\alpha_i) \in \mathbb{Z}_{\ge 0}^i$. Then $(\alpha_1,\ldots,\alpha_i)$ belongs to $J_i$ if and only if $x_1^{\alpha_1} \cdots x_i^{\alpha_i} x_{i+1}^{f_{i+1}(\alpha_1,\ldots,\alpha_i)}$ is an element of $\mathcal{G}(\mathrm{gin}(I))$ and $0 < f_{i+1}(\alpha_1,\ldots,\alpha_i) < \infty$ by Lemma \ref{Lem:f'sProperty} (1). \item If $(\alpha_1,\ldots,\alpha_i) \in J_i$, then the element $(0,\ldots,0,|\alpha|) \in \mathbb{Z}_{\ge 0}^{i}$ belongs to $ J_i$, where $|\alpha| := \sum_{j=1}^{i} \alpha_j$. In particular, $|\alpha| \le f_i(0,\ldots,0) - 1$. \begin{proof} By the definition of $J_{i}$, it is enough to show that $|\alpha| < f_{i}(0,\ldots,0)$. If $|\alpha|=0$, then the assertion follows, hence we may assume that $|\alpha| \ge 1$. Suppose to the contrary that $|\alpha| \ge f_{i}(0,\ldots,0)$. Then $x_{i}^{|\alpha|} \in \mathrm{gin}(I)$. Since $\mathrm{gin}(I)$ is strongly stable, $x_1^{\alpha_1}\cdots x_{i}^{\alpha_{i}} \in \mathrm{gin}(I)$. This contradicts (1). \end{proof} \item Let $(\alpha_1,\ldots,\alpha_s,\ldots,\alpha_t,\ldots,\alpha_i) \in J_i$, $1 \le i \le r-1$. If $\alpha_s \ge 1$, then $f_{i+1}(\alpha_1,\ldots, \alpha_s,\ldots,\alpha_t,\ldots,\alpha_i) \le f_{i+1}(\alpha_1,\ldots, \alpha_s - 1, \ldots, \alpha_t + 1, \ldots, \alpha_i) $, since $\mathrm{gin}(I)$ is strongly stable. \end{enumerate} \end{Remk} \begin{Remk}\label{Remk:JDetermineGin} Note that the set $J_{r-1}$ determines the minimal generators $T$ of $\mathrm{gin}(I)$ satisfying $\max(T) \le r-1$. Indeed, if $i \le r-1$ and $x_1^{\alpha_1}\cdots x_{i-1}^{\alpha_{i-1}} x_i^{\alpha_i} \in \mathcal{G}(\mathrm{gin}(I))$ with $\alpha_i \ge 1$, then $(\alpha_1,\ldots,\alpha_{i-1},\alpha_i-1,0,\ldots,0) \in \mathbb{Z}_{\ge 0}^{r-1}$ belongs to $J_{r-1}$.
Hence if we set $g_i(\alpha_1,\ldots,\alpha_{i-1}) := \max \{ \beta \ | \, (\alpha_1,\ldots,\alpha_{i-1},\beta,0,\ldots,0) \in J_{r-1} \} + 1$, then $g_i(\alpha_1,\ldots,\alpha_{i-1}) = \alpha_i$. This shows that the assertion follows. \end{Remk} The following example will help in understanding the above remarks. \begin{Ex} Suppose that a homogeneous ideal $I$ of $R=k[x,y,z,w]$ has the following generic initial ideal: \[ \mathrm{gin}(I)=(x^2, xy, y^3, y^2z, xz^2, yz^2, z^3, xzw^2 , y^2w^3, yzw^3, z^2w^3, xw^5, yw^5, zw^5, w^7). \] Since $f_1=2$, we have $J_1=\{0,1\}$. Since $f_2(0)=3$ and $f_2(1)=1$, we have $J_2=\{(0,0),(0,1),(0,2),(1,0)\}$. Note also that \[ J_3=\{(0,0,0),(0,0,1),(0,0,2),(0,1,0),(0,1,1),(0,2,0),(1,0,0),(1,0,1)\}, \] since we have $f_3(0,0)=3$, $f_3(0,1)=2$, $f_3(0,2)=1$, $f_3(1,0)=2$. \end{Ex} \vskip 1cm \section{\sc Equivalent Conditions} Throughout this section we assume that $A=\bigoplus_{i=0}^{t} A_i=R/I$ is a standard Artinian algebra over a field $k$, where $I$ is a homogeneous ideal of $R=k[x_1,\ldots,x_n]$ and $t=\max\{i \ | \, A_i \neq 0 \}$. In Proposition \ref{Cond:BettiNumberForWLP}, an equivalent condition for $A$ to have the weak Lefschetz property \ was given in terms of the graded Betti numbers of $\mathrm{gin}(I)$. In this section, we give equivalent conditions for $A$ to have the weak Lefschetz property \ (Proposition \ref{Prop:WLP}), the strong Lefschetz property \ (Theorem \ref{Thm:SLP}), and the strong Stanley property \ (Theorem \ref{Thm:SSP}), respectively, in terms of the minimal system of generators of $\mathrm{gin}(I)$. Furthermore, as a result of Proposition \ref{Prop:WLP}, we show that some graded Betti numbers of $\mathrm{gin}(I)$ can be computed just from the Hilbert function of $I$ (see Corollary \ref{Cor:BettiForWLP}). To see the relation between the Hilbert function of $I$ and the minimal generators $T$ of $\mathrm{gin}(I)$ with $\max(T) = n$, consider the following lemma and proposition.
Although these are introduced in \cite{AS}, we give full proofs of them because we have modified them slightly. \begin{Lem}\label{Lem:Cond_Belong_In_G(ginI)} If $T$ is a nonzero monomial in $((\mathrm{gin}(I):x_n^i)/(\mathrm{gin}(I):x_n^{i-1}))_{d}$ for $d \ge 0$ and $i \ge 1$, then $Tx_n^i \in \mathcal{G}(\mathrm{gin}(I))_{d+i}$. \end{Lem} \begin{proof} Suppose to the contrary that $Tx_n^i$ is not a minimal generator of $\mathrm{gin}(I)$. Then $Tx_n^i = uM$ for some monomials $M \in \mathrm{gin}(I)$, $u \in R$ with $\deg(u) \ge 1$. Since $T \notin (\mathrm{gin}(I):x_n^{i-1})$, $x_n$ cannot divide $u$, i.e. $\max(u) < n$, and so $x_n$ divides $M$. But since $\mathrm{gin}(I)$ is strongly stable, this implies that \[ Tx_n^{i-1} = u \frac{M}{x_n} \in \mathrm{gin}(I), \] which is a contradiction. \end{proof} \begin{Prop}\label{Prop:r1A} Suppose that $d > r_{1}(A)$. If $T_1,\ldots,T_t$ are monomials which form a $k$-basis of $((\mathrm{gin}(I):x_n)/\mathrm{gin}(I))_{d-1}$, then \[ \{ T_1x_n, \ldots, T_tx_n \} = \{ T \in \mathcal{G}(\mathrm{gin}(I)) \ | \, \max(T) = n, \deg(T) = d \}. \] In particular, \begin{align}\label{Eq:d>r_1(A)} \begin{split} H(A,d-1) - H(A,d) =& \ \dim_k ((\mathrm{gin}(I):x_n)/\mathrm{gin}(I))_{d-1} \\ =& \ \sharp \{ T \in \mathcal{G}(\mathrm{gin}(I)) \ | \, \max(T) = n, \deg(T) = d \}, \end{split} \end{align} where $H(A,d) := \dim_k A_d$ is the Hilbert function of $A$ at degree $d$. Thus we have $H(A,d-1) \ge H(A,d)$ for any $d > r_{1}(A)$. \end{Prop} \begin{proof} Note that if $T \in \mathcal{G}(\mathrm{gin}(I))$ is a monomial with $\deg(T) = d$ and $\max(T) = n$, then $T/x_n$ is a nonzero monomial in $((\mathrm{gin}(I):x_n)/\mathrm{gin}(I))_{d-1}$. Hence, it is enough to show that if $T_1,\ldots,T_t$ are monomials which form a $k$-basis of $((\mathrm{gin}(I):x_n)/\mathrm{gin}(I))_{d-1}$, then $T_ix_n \in \mathcal{G}(\mathrm{gin}(I))$ for all $i$. But this follows from Lemma \ref{Lem:Cond_Belong_In_G(ginI)}.
For the second assertion, note that $d > r_1(A)$ implies $x_{n-1}^{d} \in \mathrm{gin}(I)$ by the definition of the reduction number $r_{1}(A)$. Since $\mathrm{gin}(I)$ is strongly stable, this implies that $(R/(\mathrm{gin}(I)+ x_n))_{d} = 0$. From the exact sequence (\ref{ExSeq:LPGin}), we have \begin{align*} \dim_k ((\mathrm{gin}(I):x_n)/\mathrm{gin}(I))_{d-1} =& \ H(R/\mathrm{gin}(I),d-1) - H(R/\mathrm{gin}(I),d) \\ =& \ H(A,d-1) - H(A,d). \end{align*} The second equality in (\ref{Eq:d>r_1(A)}) follows from the first assertion. \end{proof} \begin{Coro}\label{Cor:r_1WithWLP} Suppose that $A=R/I$ has the weak Lefschetz property. If $H(A,d-1) \ge H(A,d)$, then $d > r_{1}(A)$. In particular, \[ r_{1}(A) = \min \{ d \ | \, H(A,d-1) \ge H(A,d) \} - 1. \] \end{Coro} \begin{proof} If $d \le r_1(A)$, then $x_{n-1}^d \notin \mathrm{gin}(I)$. So $R_{d} \neq (\mathrm{gin}(I) + x_n)_{d}$. Since $R/\mathrm{gin}(I)$ has the weak Lefschetz property, we have $(\mathrm{gin}(I) : x_n)_{d-1} = (\mathrm{gin}(I))_{d-1}$. By the exact sequence (\ref{ExSeq:LPGin}), this implies that \[ H(A,d-1) = H(A,d) - \dim_k(R/(\mathrm{gin}(I) + x_n))_{d} < H(A,d), \] which is a contradiction. The last assertion follows from Proposition \ref{Prop:r1A} and the first assertion. \end{proof} The following proposition gives us a criterion to check whether $A=R/I$ has the weak Lefschetz property, provided we know the minimal system of generators or the graded Betti numbers of $\mathrm{gin}(I)$ and the first reduction number $r_1(A)$. Recall the set $J_{n-1}$ defined in (\ref{Set:IndexMinGen}). \begin{Prop}\label{Prop:WLP} The following statements are equivalent: \begin{enumerate} \item $A=R/I$ has the weak Lefschetz property. \item If $T$ is a minimal generator of $\mathrm{gin}(I)$ of degree $d$ with $\max(T) = n$, then $d \ge r_{1}(A) + 1$, i.e. \[ |\alpha| + f_n(\alpha_1,\ldots,\alpha_{n-1}) \ge r_1(A) + 1, \] for any $(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$, where $|\alpha| = \sum_{i=1}^{n-1} \alpha_i$.
\item $\beta_{n-1,n-1+j}(\mathrm{gin}(I)) = 0$ for all $j \le r_1(A)$. \end{enumerate} \end{Prop} \begin{proof} (1) $\Rightarrow$ (2): If $T$ is a minimal generator of $\mathrm{gin}(I)$ of degree $d$ with $\max(T) = n$, then $T/x_n$ is a nonzero element in $((\mathrm{gin}(I):x_n)/\mathrm{gin}(I))_{d-1}$. Since $A$ has the weak Lefschetz property, $(R/(\mathrm{gin}(I) + x_n))_{d} = 0$. Hence $x_{n-1}^d \in \mathrm{gin}(I)$, which implies that $d \ge r_1(A) + 1$. (2) $\Rightarrow$ (1): Suppose that $((\mathrm{gin}(I):x_n)/\mathrm{gin}(I))_{d-1} \neq 0$. Let $T$ be a monomial in $(\mathrm{gin}(I):x_n)_{d-1}$ such that $T \notin \mathrm{gin}(I)$. Then we have $Tx_n \in \mathcal{G}(\mathrm{gin}(I))_{d}$, by Lemma \ref{Lem:Cond_Belong_In_G(ginI)}. By assumption, we have $d \ge r_1(A) + 1$. This implies that $x_{n-1}^{d} \in \mathrm{gin}(I)$, so $(R/(\mathrm{gin}(I)+x_n))_{d} = 0$. (2) $\Leftrightarrow$ (3): It is clear from the Eliahou-Kervaire theorem. \end{proof} \begin{Coro}\label{Cor:BettiForWLP} If $A=R/I$ has the weak Lefschetz property, then \begin{enumerate} \item $\beta_{n-1,n-1+d}(\mathrm{gin}(I))= \begin{cases} 0& \ \text{ if } d \le r_1(A), \\ H(A,d-1)-H(A,d)& \ \text{ if } d > r_1(A). \end{cases} $ \item For any $0 \le i \le n-1$ and $d \ge r_1(A) + 2$, \[ \beta_{i,i+d}(\mathrm{gin}(I)) = \binom{n-1}{i}(H(A,d-1)-H(A,d)). \] \end{enumerate} \end{Coro} \begin{proof} The assertions follow from Proposition \ref{Prop:r1A}, Proposition \ref{Prop:WLP}, and the Eliahou-Kervaire theorem. \end{proof} \begin{Ex} Consider two strongly stable ideals of $R=k[x_1,x_2,x_3]$ defined by \begin{align*} I &= (x_1^2, x_1x_2, \underline{x_1x_3, x_2^3}, x_2x_3^2, x_3^4), \\ J &= (x_1^2, x_1x_2, \underline{x_2^2, x_1x_3^2}, x_2x_3^2, x_3^4). \end{align*} Since $\deg(x_1x_3) = 2 < 3 = r_1(R/I) + 1$, $R/I$ does not have the weak Lefschetz property; on the other hand, every minimal generator of $J$ divisible by $x_3$ has degree at least $2 = r_1(R/J) + 1$, so $R/J$ does. 
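The degree criterion of Proposition \ref{Prop:WLP} is purely combinatorial and can be checked mechanically. The following Python sketch is ours, not part of the paper; the encoding of monomials as exponent tuples and the function names are our own conventions.

```python
# Illustrative sketch (not from the paper): checking condition (2) of
# Proposition WLP for a strongly stable monomial ideal, given its minimal
# generators as exponent tuples in k[x_1, ..., x_n].

def r1(gens, n):
    # r_1(A) + 1 is the least d with x_{n-1}^d in the ideal; for a strongly
    # stable ideal this pure power occurs among the minimal generators.
    pure = [g[n - 2] for g in gens
            if all(g[i] == 0 for i in range(n) if i != n - 2)]
    return min(pure) - 1

def wlp_criterion(gens, n):
    # every minimal generator divisible by x_n must have degree >= r_1 + 1
    bound = r1(gens, n) + 1
    return all(sum(g) >= bound for g in gens if g[n - 1] > 0)

I_gens = [(2,0,0), (1,1,0), (1,0,1), (0,3,0), (0,1,2), (0,0,4)]
J_gens = [(2,0,0), (1,1,0), (0,2,0), (1,0,2), (0,1,2), (0,0,4)]

print(wlp_criterion(I_gens, 3), wlp_criterion(J_gens, 3))  # False True
```

The generator $x_1x_3$ of degree $2 < r_1(R/I) + 1 = 3$ is exactly what makes the first check fail.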
\end{Ex} The following theorem gives an equivalent condition for $A=R/I$ to have the strong Lefschetz property \ in terms of the minimal system of generators of $\mathrm{gin}(I)$. \begin{Thm}\label{Thm:SLP} $A=R/I$ has the strong Lefschetz property \ if and only if for any $\alpha=(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$, \begin{enumerate} \item $|\alpha| + f_n(\alpha_1,\ldots,\alpha_{n-1}) \ge r_1(A) + 1$, and \item $f_n(\alpha_1,\ldots,\alpha_{n-1}) \ge f_n(0,\ldots,0,|\alpha| + 1) + 1 $, \end{enumerate} where $|\alpha| = \sum_{j=1}^{n-1} \alpha_j$. \end{Thm} \begin{proof} Suppose that $A$ has the strong Lefschetz property, and $(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$. The first assertion follows from Proposition \ref{Prop:WLP}. For the second assertion, note that $(0,\ldots,0,|\alpha|) \in J_{n-1}$ by Remark \ref{Cond:ElementInJ_i} (2). Hence if $(0,\ldots,0,|\alpha|+1) \notin J_{n-1}$, then $x_{n-1}^{|\alpha|+1} \in \mathrm{gin}(I)$ by the definition of $J_{n-1}$. So we have $f_n(0,\ldots,0,|\alpha|+1) = 0$, and the second assertion follows from Lemma \ref{Lem:f'sProperty} (1). Thus we can assume that $(0,\ldots,0,|\alpha|+1) \in J_{n-1}$. For simplicity, let $\beta = f_n(\alpha_1, \ldots, \alpha_{n-1})$. Then $x_1^{\alpha_1} \cdots x_{n-1}^{\alpha_{n-1}} x_n^{\beta} \in \mathcal{G}(\mathrm{gin}(I))$. This shows that $x_1^{\alpha_1} \cdots x_{n-1}^{\alpha_{n-1}}$ is a nonzero monomial in $((\mathrm{gin}(I):x_n^{\beta})/\mathrm{gin}(I))_{|\alpha|}$. From the strong Lefschetz property \ of $A$, we have $R_{|\alpha|+\beta} = (\mathrm{gin}(I)+x_n^{\beta})_{|\alpha|+\beta}$, and hence $x_{n-1}^{|\alpha|+1} x_n^{\beta-1} \in \mathrm{gin}(I)$. By the definition of $f_n$, this implies that $f_n(0,\ldots,0,|\alpha|+1) \le \beta - 1$. For the converse, it is enough to show that if $((\mathrm{gin}(I) : x_n^i)/\mathrm{gin}(I))_{d} \neq 0$, then $R_{d+i} = (\mathrm{gin}(I) + x_n^i)_{d+i}$ for all $d$ and $i$. We will show this by induction on $i$. 
By Proposition \ref{Prop:WLP}, $A$ has the weak Lefschetz property, i.e. $(R/(\mathrm{gin}(I) + x_n))_{d+1} = 0$ for all $d$, which settles the case $i=1$. Suppose that $i > 1$ and that $T=x_1^{\alpha_1} \cdots x_{n-1}^{\alpha_{n-1}} x_n^{\alpha_n}$ is a nonzero monomial in $((\mathrm{gin}(I) : x_n^i)/\mathrm{gin}(I))_{d}$. If $T$ is a nonzero monomial in $((\mathrm{gin}(I) : x_n^{i-1})/\mathrm{gin}(I))_{d}$, then $R_{d+i-1} = (\mathrm{gin}(I) + x_n^{i-1})_{d+i-1}$, by the induction hypothesis. So $x_{n-1}^{d+1} x_n^{i-2} \in \mathrm{gin}(I)$, and hence $x_{n-1}^{d+1} x_n^{i-1} \in \mathrm{gin}(I)$. Since $\mathrm{gin}(I)$ is strongly stable, this shows that $R_{d+i}=(\mathrm{gin}(I) + x_n^i)_{d+i}$. Thus we may assume that $T$ is a nonzero monomial in $((\mathrm{gin}(I) : x_n^i)/(\mathrm{gin}(I):x_n^{i-1}))_{d}$. Then $Tx_n^i = x_1^{\alpha_1} \cdots x_{n-1}^{\alpha_{n-1}} x_n^{\alpha_n + i} \in \mathcal{G}(\mathrm{gin}(I))$ by Lemma \ref{Lem:Cond_Belong_In_G(ginI)}. This implies that $(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$ by Remark \ref{Cond:ElementInJ_i}(1) and \[ \alpha_n + i = f_n(\alpha_1,\ldots,\alpha_{n-1}) \ge f_n(0,\ldots,0,d - \alpha_n + 1) + 1, \] since $d=\deg(T)=\sum_{j=1}^{n} \alpha_j$ and $f_n$ satisfies the condition (2). By the definition of $f_n$, we have $x_{n-1}^{d - \alpha_n + 1} x_n^{\alpha_n + i - 1} \in \mathrm{gin}(I)$. Since $\mathrm{gin}(I)$ is strongly stable, this shows that $x_{n-1}^{d + 1} x_n^{i-1} \in \mathrm{gin}(I)$, and hence the assertion follows. \end{proof} \begin{Ex}\label{Ex:NonUniqueSLP} Consider the following stable monomial ideal of $R=k[x,y,z,w]$: \[ I=(x^2, xy, y^3, y^2z, xz^2, yz^2, z^3, xzw^2, y^2w^3, yzw^3, z^2w^3, xw^5, yw^5, zw^5, w^7). \] Every generator $T$ which is divisible by $w$ has degree $ \ge 3$, and $r_1(R/I) = 2$. Furthermore, $I$ satisfies the second condition in Theorem \ref{Thm:SLP}. Hence $R/I$ has the strong Lefschetz property. 
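Both conditions of Theorem \ref{Thm:SLP} can be read off from the $w$-divisible minimal generators, since these are exactly the monomials $x_1^{\alpha_1}x_2^{\alpha_2}x_3^{\alpha_3}w^{f_4(\alpha_1,\alpha_2,\alpha_3)}$ with $(\alpha_1,\alpha_2,\alpha_3) \in J_3$. The following Python sketch is ours, not part of the paper; the exponent-tuple encoding is an assumption of the sketch.

```python
# Illustrative check (not from the paper) of the two conditions of Theorem
# SLP for the ideal I above; monomials of k[x,y,z,w] are exponent 4-tuples.

gens = [(2,0,0,0), (1,1,0,0), (0,3,0,0), (0,2,1,0), (1,0,2,0), (0,1,2,0),
        (0,0,3,0), (1,0,1,2), (0,2,0,3), (0,1,1,3), (0,0,2,3),
        (1,0,0,5), (0,1,0,5), (0,0,1,5), (0,0,0,7)]

# the w-divisible minimal generators give the table of f_4 on J_3
f4 = {g[:3]: g[3] for g in gens if g[3] > 0}
# r_1(R/I) + 1 is the exponent of the pure power z^d among the generators
r1 = min(g[2] for g in gens if g[:2] == (0, 0) and g[3] == 0) - 1

def f4_pure(m):
    # f_4(0,0,m); it is 0 once z^m itself lies in the ideal, i.e. m > r1
    return f4.get((0, 0, m), 0)

cond1 = all(sum(a) + f4[a] >= r1 + 1 for a in f4)           # condition (1)
cond2 = all(f4[a] >= f4_pure(sum(a) + 1) + 1 for a in f4)   # condition (2)
print(cond1, cond2)  # True True
```

For instance, the generator $xzw^2$ gives $f_4(1,0,1) = 2$, and condition (2) only requires $f_4(1,0,1) \ge f_4(0,0,3) + 1 = 1$, which holds.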
\end{Ex} Finally, we will give an equivalent condition for $A=R/I$ to have the strong Stanley property \ in terms of the minimal system of generators of $\mathrm{gin}(I)$. To do so, note that $A=R/I$ has the strong Stanley property \ if and only if $R/\mathrm{gin}(I)$ has the strong Stanley property \ with a strong Stanley element \ $x_n$, as in the cases of the weak Lefschetz property \ and the strong Lefschetz property \ (cf. Proposition \ref{Prop:Equiv.Cond.I.and.GinI}). Consider the following exact sequence: \begin{align*} 0 \rightarrow ((\mathrm{gin}(I):x_n^{t-2i})/\mathrm{gin}(I))_{i} \rightarrow (R/\mathrm{gin}(I)&)_{i} \xrightarrow[\quad]{\times x_n^{t-2i}} (R/\mathrm{gin}(I))_{t-i} \\ & \rightarrow (R/(\mathrm{gin}(I)+ x_n^{t-2i}))_{t-i} \rightarrow 0. \end{align*} Then we can see that $A=R/I$ has the strong Stanley property \ if and only if, for any $i=0,1,\ldots,[t/2]$, both $(\mathrm{gin}(I):x_n^{t-2i})_i = \mathrm{gin}(I)_i$ and $R_{t-i} = (\mathrm{gin}(I)+ x_n^{t-2i})_{t-i}$ hold; since $\mathrm{gin}(I)$ is strongly stable, the latter condition is equivalent to $x_{n-1}^{i+1} x_n^{t-2i-1} \in \mathrm{gin}(I)_{t-i}$. \begin{Lem}\label{Lem:SizeOfAlpha} Suppose that $A=R/I=\bigoplus_{i=0}^{t} A_i$ has the strong Stanley property. If $(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$, then $|\alpha| := \sum_{i=1}^{n-1} \alpha_i \le [t/2]$. \end{Lem} \begin{proof} Since the Hilbert function of $A$ is symmetric, $H(A,[t/2]) \ge H(A,[t/2]+1)$. By Corollary \ref{Cor:r_1WithWLP}, this implies that $r_1(A) \le [t/2]$. Since $(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$, we have \[ |\alpha| \le f_{n-1}(0,\ldots,0) - 1 = r_1(A) \le [t/2], \] by Remark \ref{Cond:ElementInJ_i} (2). 
\end{proof} \begin{Thm}\label{Thm:SSP} A standard Artinian $k$-algebra $A=R/I=\bigoplus_{i=0}^{t}A_i$ has the strong Stanley property \ if and only if for any $(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$, \begin{equation}\label{Eq:SSP} f_n(\alpha_1,\ldots,\alpha_{n-1}) = t - 2|\alpha| + 1, \end{equation} where $|\alpha| = \sum_{i=1}^{n-1} \alpha_i$. \end{Thm} \begin{proof} Suppose that $A$ has the strong Stanley property. Let $(\alpha_1,\ldots,\alpha_{n-1}) \in J_{n-1}$. Then $|\alpha| \le [t/2]$ by Lemma \ref{Lem:SizeOfAlpha}. Hence we have $(\mathrm{gin}(I):x_n^{t-2|\alpha|})_{|\alpha|} = \mathrm{gin}(I)_{|\alpha|}$. If $f_n(\alpha_1,\ldots,\alpha_{n-1}) \le t - 2|\alpha|$, then $x_1^{\alpha_1}\cdots x_{n-1}^{\alpha_{n-1}}x_n^{t-2|\alpha|} \in \mathrm{gin}(I)$. So $x_1^{\alpha_1}\cdots x_{n-1}^{\alpha_{n-1}} \in (\mathrm{gin}(I):x_n^{t-2|\alpha|})_{|\alpha|} = \mathrm{gin}(I)_{|\alpha|}$. But this contradicts that $J_{n-1}$ contains $(\alpha_1,\ldots,\alpha_{n-1})$. Thus we have \[ f_n(\alpha_1,\ldots,\alpha_{n-1}) > t - 2|\alpha|. \eqno{(1)} \] On the other hand, note that $x_{n-1}^{|\alpha|}x_n^{t-2|\alpha|+1} \in \mathrm{gin}(I)$ because $R_{t-|\alpha| + 1} = (\mathrm{gin}(I)+ x_n^{t-2|\alpha|+2})_{t-|\alpha| + 1}$. This shows that $f_n(0,\ldots,0,|\alpha|) \le t-2|\alpha|+1$, by the definition of $f_n$. Since $\mathrm{gin}(I)$ is strongly stable, we have \[ f_n(\alpha_1,\ldots,\alpha_{n-1}) \le f_n(0,\ldots,0,|\alpha|) \le t- 2|\alpha| + 1. \eqno{(2)} \] The assertion follows from (1) and (2). Conversely, suppose that the equation in (\ref{Eq:SSP}) holds. We need to show that $(\mathrm{gin}(I):x_n^{t-2i})_i = \mathrm{gin}(I)_i$ and $x_{n-1}^{i+1}x_n^{t-2i-1} \in \mathrm{gin}(I)_{t-i}$ for all $0 \le i \le [t/2]$. First we will show that $(\mathrm{gin}(I):x_n^{t-2i})_i = \mathrm{gin}(I)_i$ for all $0 \le i \le [t/2]$. 
Suppose $x_1^{\alpha_1} \cdots x_{n-1}^{\alpha_{n-1}}x_n^{i-|\alpha|} \in (\mathrm{gin}(I):x_n^{t-2i})_i$, where $|\alpha| = \sum_{j=1}^{n-1} \alpha_j \le i$. Then $x_1^{\alpha_1}\cdots x_{n-1}^{\alpha_{n-1}} x_n^{t-|\alpha|-i} \in \mathrm{gin}(I)$. Now if $(\alpha_1,\ldots,\alpha_{n-1})$ is an element of $J_{n-1}$, then $f_n(\alpha_1,\ldots,\alpha_{n-1}) = t - 2|\alpha| + 1$ by the assumption. Hence we have \[ t - 2|\alpha| + 1 = f_n(\alpha_1,\ldots,\alpha_{n-1}) \le t -|\alpha| - i, \] by the definition of $f_{n}$. But this contradicts $|\alpha| \le i$. So $(\alpha_1,\ldots,\alpha_{n-1}) \notin J_{n-1}$. By the definition of $J_{n-1}$, this implies that either $\alpha_1 \ge f_1$ or there exists $\nu$ such that $1 < \nu \le n-1$, $(\alpha_1,\ldots,\alpha_{\nu-1}) \in J_{\nu-1}$ and $f_{\nu}(\alpha_1,\ldots,\alpha_{\nu-1}) \le \alpha_{\nu}$. In either case $x_1^{\alpha_1} \cdots x_{n-1}^{\alpha_{n-1}} \in \mathrm{gin}(I)$, so $x_1^{\alpha_1} \cdots x_{n-1}^{\alpha_{n-1}} x_n^{i-|\alpha|} \in \mathrm{gin}(I)$, and hence $(\mathrm{gin}(I):x_n^{t-2i})_i = \mathrm{gin}(I)_i$. Finally, to show that $x_{n-1}^{i+1}x_n^{t-2i-1} \in \mathrm{gin}(I)_{t-i}$, we may assume that $i+1 \le r_1(A)$; otherwise $x_{n-1}^{i+1} \in \mathrm{gin}(I)$ by the definition of $r_1(A)$, and the assertion follows. Since $i+1 \le r_1(A) = f_{n-1}(0,\ldots,0)-1$, the element $(0,\ldots,0,i+1) \in \mathbb{Z}_{\ge 0}^{n-1}$ belongs to $J_{n-1}$. By the assumption, we have $f_{n}(0,\ldots,0,i+1) = t-2(i+1)+1 = t-2i-1$, and hence $x_{n-1}^{i+1}x_n^{t-2i-1} \in \mathrm{gin}(I)_{t-i}$. \end{proof} \begin{Coro}\label{Cor:FlagMinGenForSSP} If $A$ has the strong Stanley property, then \[ \{ x_{n-1}^{r_1(A)+1}, x_{n-1}^{r_1(A)} x_n^{t-2r_1(A)+1}, \ldots, x_{n-1}^{i} x_n^{t-2i+1}, \ldots, x_{n-1} x_n^{t-1}, x_n^{t+1} \} \subset \mathcal{G}(\mathrm{gin}(I)). \] \end{Coro} \begin{proof} It is clear that $x_{n-1}^{r_1(A)+1} \in \mathcal{G}(\mathrm{gin}(I))$. 
By Proposition \ref{Prop:MinimalGen}, $x_{n-1}^{i} x_n^{f_n(0,\ldots,0,i)} \in \mathcal{G}(\mathrm{gin}(I))$ for all $0 \le i \le r_1(A)$. Since $A$ has the strong Stanley property, \[ f_n(0,\ldots,0,i) = t -2i +1, \] by Theorem \ref{Thm:SSP}, and hence the assertion follows. \end{proof} \begin{Ex} Let $I$ and $J$ be the monomial ideals of $R=k[x,y,z,w]$ defined by \[ I=(x^2, xy, y^3, y^2z, xz^2, yz^2, z^3, \underline{xzw^2 , y^2w^3}, yzw^3, z^2w^3, xw^5, yw^5, zw^5, w^7), \] \[ J=(x^2, xy, y^3, y^2z, xz^2, yz^2, z^3, \underline{xzw^3, y^2w^3}, yzw^3, z^2w^3, xw^5, yw^5, zw^5, w^7). \] As shown in Example \ref{Ex:NonUniqueSLP}, the Artinian ring $R/I$ satisfies the strong Lefschetz property. But since $\max \{i \ | \, (R/I)_i \neq 0 \} = 6$ and $f_4(1,0,1) = 2 < 6 - 2 \times 2 + 1 = 3$, $R/I$ does not have the strong Stanley property. On the other hand, $R/J$ satisfies the condition in Theorem \ref{Thm:SSP}, as one sees by simply reading off the $w$-degrees of the minimal generators of $J$. Hence $R/J$ has the strong Stanley property. \end{Ex} \vskip 1cm \section{\sc Uniquely Determined Minimal Generators of Generic Initial Ideals} We showed in Corollary \ref{Cor:BettiForWLP} that if a standard Artinian $k$-algebra $A = R/I$ of codimension $n$ has the weak Lefschetz property, then many of the graded Betti numbers of $\mathrm{gin}(I)$ are determined just by the Hilbert function of $A$. If we restrict ourselves to the case of codimension 3, we can say much more. Throughout this section, we assume that $R=k[x_1,x_2,x_3]$ is a polynomial ring over a field $k$ and $I$ is a homogeneous Artinian ideal of $R$. Let $A=R/I=\bigoplus_{i=0}^{t} A_i$, where $t=\max\{i \ | \, A_i \neq 0 \}$. In this section, we show that if $R/I$ has the weak Lefschetz property, then the minimal generators $T$ of $\mathrm{gin}(I)$ with $\max(T) \le 2$ are uniquely determined by the graded Betti numbers of $R/I$ (Proposition \ref{Prop:UniquelyDetermineMinimalGeneratorWithWLP}). 
Together with Corollary \ref{Cor:BettiForWLP}, this implies that all the graded Betti numbers of $\mathrm{gin}(I)$ can be computed from the graded Betti numbers of $I$. If $R/I$ has the strong Lefschetz property, we show that the minimal system of generators of $\mathrm{gin}(I)$ is uniquely determined by the graded Betti numbers of $R/I$ (Proposition \ref{Prop:GinSLPUnique}). Finally, we show that if $R/I$ has the strong Stanley property, then the minimal system of generators of $\mathrm{gin}(I)$ is uniquely determined by the Hilbert function of $R/I$ (Proposition \ref{Prop:GinSSPUnique}). To do so, we use the following theorem proved by Green. \begin{Thm}[Cancellation Principle] \cite{Gr}\label{Thm:Cancellation} For any homogeneous ideal $J$ in the polynomial ring $S=k[x_1,\ldots,x_n]$ and any $i$ and $d$, there is a complex of $k$-modules $V^{d}_{\bullet}$ such that \begin{align*} V_i^d &\cong \mathrm{Tor}_i^S(\mathrm{in}(J),k)_d, \\ H_i(V_{\bullet}^d) &\cong \mathrm{Tor}_i^S(J,k)_d. \end{align*} \end{Thm} This theorem means that we can obtain the minimal free resolution of $J$ from the minimal free resolution of $\mathrm{gin}(J)$ by deleting some adjacent summands of the same degree. Using this theorem, we get the following lemma. \begin{Lem}\label{Lem:BettiNumOfGin(I)} Suppose that $A=R/I$ has the weak Lefschetz property. If $a = \min\{ d \ | \, \beta_{0,d}(I) \neq 0 \}$, then we have \begin{enumerate} \item $\beta_{0,i}(\mathrm{gin}(I)) = \begin{cases} \beta_{1,a+1}(\mathrm{gin}(I)) - \beta_{2,a+2}(\mathrm{gin}(I)) + 1 & \ \text{ if } \ i = a,\\ \beta_{1,i+1}(\mathrm{gin}(I)) - \beta_{2,i+2}(\mathrm{gin}(I))& \ \text{ if } \ i \neq a. \end{cases} $ \item For any $i \le r_1(A)+1$, \[ \beta_{1,i+1}(\mathrm{gin}(I)) = \beta_{0,i+1}(\mathrm{gin}(I)) + \beta_{1,i+1}(I) - \beta_{0,i+1}(I). \] \end{enumerate} \end{Lem} \begin{proof} First we will show that the equality in (2) holds. 
By Theorem \ref{Thm:Cancellation}, for each $i$, there exist $s_i$, $t_i \in \mathbb{Z}_{\ge 0}$ such that \begin{align*} \beta_{0,i+1}(\mathrm{gin}(I))& - t_i = \beta_{0,i+1}(I), \\ \beta_{1,i+1}(\mathrm{gin}(I))& - t_i - s_i = \beta_{1,i+1}(I), \\ \beta_{2,i+1}(\mathrm{gin}(I))& - s_i = \beta_{2,i+1}(I). \end{align*} By Proposition \ref{Prop:WLP}, if $i \le r_1(A) + 1$, then $\beta_{2,i+1}(\mathrm{gin}(I)) = 0$, and so $s_i = 0$. This implies that $\beta_{1,i+1}(\mathrm{gin}(I)) - t_i = \beta_{1,i+1}(I)$, and $\beta_{0,i+1}(\mathrm{gin}(I)) - t_i = \beta_{0,i+1}(I)$. Hence the equality follows. By Eliahou-Kervaire Theorem, we have \begin{align*} \beta_{1,i+1}(\mathrm{gin}(I)) = &\sum_{\substack{T \in \mathcal{G}(\mathrm{gin}(I))_{i} \\ \max(T)=2}} \binom{2-1}{1} + \sum_{\substack{T \in \mathcal{G}(\mathrm{gin}(I))_{i} \\ \max(T)=3}} \binom{3-1}{1} \\ = &\sum_{\substack{T \in \mathcal{G}(\mathrm{gin}(I))_{i} \\ \max(T)=1}} 1 + \sum_{\substack{T \in \mathcal{G}(\mathrm{gin}(I))_{i} \\ \max(T)=2}} 1 + \sum_{\substack{T \in \mathcal{G}(\mathrm{gin}(I))_{i} \\ \max(T)=3}} 1 \\ & \ + \sum_{\substack{T \in \mathcal{G}(\mathrm{gin}(I))_{i} \\ \max(T)=3}} 1 - \sum_{\substack{T \in \mathcal{G}(\mathrm{gin}(I))_{i} \\ \max(T)=1}} 1 \\ =& \begin{cases} \beta_{0,a}(\mathrm{gin}(I)) + \beta_{2,a+2}(\mathrm{gin}(I)) - 1 & \ \text{if} \ i = a, \\ \beta_{0,i}(\mathrm{gin}(I)) + \beta_{2,i+2}(\mathrm{gin}(I)) & \ \text{if} \ i \neq a, \end{cases} \end{align*} and hence the equality in (1) follows. \end{proof} \begin{Coro}\label{Cor:UniqueBettiNumofGinForWLP} If $A=R/I$ has the weak Lefschetz property, then the graded Betti numbers of $\mathrm{gin}(I)$ are completely determined by the graded Betti numbers of $I$. In particular, if $J$ is a homogeneous ideal of $R$ such that $R/J$ has the weak Lefschetz property \ and $\beta_{i,j}(I) = \beta_{i,j}(J)$ for all $i, j$, then $\beta_{i,j}(\mathrm{gin}(I)) = \beta_{i,j}(\mathrm{gin}(J))$ for all $i$,$j$. 
\end{Coro} \begin{proof} The graded Betti numbers of $I$ determine the Hilbert function of $A=R/I$. Since $A=R/I$ has the weak Lefschetz property, $r_1(A)$ can be computed by Corollary \ref{Cor:r_1WithWLP}. Moreover, the graded Betti numbers $\beta_{2,2+s}(\mathrm{gin}(I))$, for any $s$, and $\beta_{i,i+j}(\mathrm{gin}(I))$, for $i=0$, $1$ and $j \ge r_1(A) + 2$, are determined by the Hilbert function of $R/I$, as shown in Corollary \ref{Cor:BettiForWLP}. Hence it is enough to show that $\beta_{0,j}(\mathrm{gin}(I))$ and $\beta_{1,j+1}(\mathrm{gin}(I))$ are determined by the graded Betti numbers of $I$ for $j \le r_1(A) + 1$. Since $\beta_{0,r_1(A)+2}(\mathrm{gin}(I))$ is already known, $\beta_{1,r_1(A)+2}(\mathrm{gin}(I))$ is determined by Lemma \ref{Lem:BettiNumOfGin(I)} (2), and then $\beta_{0,r_1(A)+1}(\mathrm{gin}(I))$ is determined by Lemma \ref{Lem:BettiNumOfGin(I)} (1). Inductively, if we know the value of $\beta_{0,i+1}(\mathrm{gin}(I))$ for $i \le r_1(A)$, then $\beta_{1,i+1}(\mathrm{gin}(I))$ and $\beta_{0,i}(\mathrm{gin}(I))$ are determined by Lemma \ref{Lem:BettiNumOfGin(I)} (2) and (1), respectively. Hence the assertion follows. \end{proof} Now we will show that if $A=R/I$ has the weak Lefschetz property, then the minimal generators $T$ of $\mathrm{gin}(I)$ with $\max(T) \le 2$ are determined by the graded Betti numbers of $I$. We need the following easy lemma. \begin{Lem}\label{Lem:UniqueSequence} Let $a_1,\ldots,a_n$ be a non-decreasing sequence of integers, and let $\beta_d := \sharp \{ i \ | \, a_i = d \}$ for each $d \in \mathbb{Z}$. If $\beta_d$ is given for all $d$, then the sequence $a_1,\ldots, a_n$ is determined. $\qed$ \end{Lem} \begin{Prop}\label{Prop:UniquelyDetermineMinimalGeneratorWithWLP} Suppose that $I, J$ are homogeneous Artinian ideals of $R$ having the same graded Betti numbers. 
If $R/I$ and $R/J$ have the weak Lefschetz property, then $\{ T \in \mathcal{G}(\mathrm{gin}(I)) \ | \, \max(T) \le 2 \} = \{ T \in \mathcal{G}(\mathrm{gin}(J)) \ | \, \max(T) \le 2 \}$. In particular, $\mathrm{gin}(I)_d = \mathrm{gin}(J)_d$ for any $d \le r_1(A)$. \end{Prop} \begin{proof} By Corollary \ref{Cor:UniqueBettiNumofGinForWLP}, it suffices to show that the minimal generators $T$ of $\mathrm{gin}(I)$ with $\max(T) \le 2$ are determined by the graded Betti numbers of $\mathrm{gin}(I)$. First note that the minimal generators $T$ of $\mathrm{gin}(I)$ such that $\max(T) \le 2$ are of the form \[ x_1^a, x_1^{a-1}x_2^{f_2(a-1)}, x_1^{a-2}x_2^{f_2(a-2)}, \ldots, x_1x_2^{f_2(1)}, x_2^{f_2(0)}, \] where $a := \min \{i \ | \, \beta_{0,i}(I) \neq 0 \}$. So it is enough to show that each $f_2(i)$ is determined by the graded Betti numbers of $\mathrm{gin}(I)$. Since $\mathrm{gin}(I)$ is strongly stable, \[ \begin{array}{rl} a \ \le \ a-1 + f_2(a-1) &\le \ a - 2 + f_2(a-2) \ \le \ \cdots \\ &\le \ 1 + f_2(1) \ \le \ f_2(0) = r_1(A)+1. \end{array} \] Note that for $i \le r_1(A)$, $\beta_{0,i}(\mathrm{gin}(I))$ is the number of minimal generators $T$ of $\mathrm{gin}(I)$ such that $\max(T) \le 2$ and $\deg(T) = i$, by Proposition \ref{Prop:WLP}. This implies that $\beta_{0,i}(\mathrm{gin}(I)) = \sharp \{ j \ | \, j + f_2(j) = i \}$. Using Lemma \ref{Lem:UniqueSequence}, we can see that $f_2$ is determined by the $\beta_{0,i}(\mathrm{gin}(I))$. The last assertion follows by Proposition \ref{Prop:WLP}. \end{proof} If $R/I$ has the strong Lefschetz property, we will show that the minimal system of generators of $\mathrm{gin}(I)$ is determined by the graded Betti numbers of $I$. Recall that determining the minimal system of generators of $\mathrm{gin}(I)$ is equivalent to computing $f_1$, $f_2$, and $f_3$ defined as in (\ref{f's}). 
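The recovery of $f_2$ from the numbers $\beta_{0,i}(\mathrm{gin}(I))$ in the proof above is a simple sorting argument: the degrees $j + f_2(j)$ form a non-decreasing sequence as $j$ decreases from $a-1$ to $0$, so the multiset of these degrees determines $f_2$. The following Python sketch is ours, not part of the paper; the function name and encoding are our own conventions.

```python
# Hypothetical sketch (not from the paper): recovering f_2 from the multiset
# of degrees { j + f_2(j) : 0 <= j < a }, as in the proof of Proposition
# UniquelyDetermineMinimalGeneratorWithWLP together with Lemma UniqueSequence.

def recover_f2(a, degree_multiset):
    degs = sorted(degree_multiset)
    assert len(degs) == a
    # the smallest degree belongs to the largest index j = a - 1
    return {a - 1 - i: d - (a - 1 - i) for i, d in enumerate(degs)}

# For a gin whose generators with max(T) <= 2 are x1^2, x1x2, x2^2, we have
# a = 2 and the degree multiset {2, 2}:
print(recover_f2(2, [2, 2]))  # {1: 1, 0: 2}, i.e. f_2(1) = 1, f_2(0) = 2
```

The sorted assignment is forced by the displayed chain of inequalities, so the output is the unique candidate for $f_2$.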
From Proposition \ref{Prop:UniquelyDetermineMinimalGeneratorWithWLP}, we already know that $f_1$, $f_2$ are determined by the graded Betti numbers of $I$. Hence it is enough to show that $f_3$ is also determined by those of $I$ in the case that $A=R/I$ has the strong Lefschetz property. Now \begin{equation} \label{Set:IndexMinGenWithn=3} \begin{split} J_2 = \left\{(\alpha_1,\alpha_{2}) \left| \begin{array}{l} 0 \le \alpha_1 < f_1 = a, \\ 0 \le \alpha_2 < f_2(\alpha_1) \end{array} \right. \right\}, \end{split} \end{equation} where $f_1=a$ and $f_2$ is as shown in the proof of Proposition \ref{Prop:UniquelyDetermineMinimalGeneratorWithWLP}. Partition $J_{2}$ into the sets $J[t]$ defined by \[ J[t] := \{ (\alpha_1, \alpha_{2}) \in J_{2} \ | \, \alpha_1 + \alpha_2 = t \}, \] for $0 \le t \le r_1(A)$. Note that $J_{2} = \cup_{t=0}^{r_1(A)} J[t]$, and the minimal generator of $\mathrm{gin}(I)$ corresponding to the element $(\alpha_1, \alpha_2) \in J[t]$ is $x_1^{\alpha_1}x_2^{\alpha_2}x_3^{f_3(\alpha_1,\alpha_2)}$, whose degree is $\alpha_1 + \alpha_2 + f_3(\alpha_1,\alpha_2) = t+f_3(\alpha_1,\alpha_2)$. We will say that $\alpha_1 + \alpha_2 + f_3(\alpha_1,\alpha_2)$ is the degree of $(\alpha_1,\alpha_2)$. \begin{Prop}\label{Prop:GinSLPUnique} If $R/I$ has the strong Lefschetz property, then the minimal system of generators of $\mathrm{gin}(I)$ is determined by the graded Betti numbers of $I$. In particular, if $J$ is a homogeneous ideal of $R$ such that $\beta_{i,j}(J) = \beta_{i,j}(I)$ for all $i, j$, and if $R/J$ has the strong Lefschetz property, then $\mathrm{gin}(I)=\mathrm{gin}(J)$. \end{Prop} \begin{proof} Let $(\alpha_1,\alpha_2)$, $(\beta_1,\beta_2) \in J[t]$. 
If $\alpha_1 \le \beta_1$, then \[ f_3(\alpha_1,\alpha_2) \ge f_3(\beta_1,\beta_2), \] by Remark \ref{Cond:ElementInJ_i} (3), and hence \begin{align*} \alpha_1 + \alpha_2 + f_3(\alpha_1,\alpha_2) & = t + f_3(\alpha_1,\alpha_2) \\ & \ge t + f_3(\beta_1,\beta_2) \\ &= \beta_1 + \beta_2 + f_3(\beta_1,\beta_2). \end{align*} This shows that, for each $t$, we have a non-decreasing sequence $\{a_{t,i}\}$ of degrees of elements in $J[t]$: Indeed, suppose that $J[t] = \{ (\alpha_{1,1},\alpha_{1,2}), \ldots, (\alpha_{j_t,1},\alpha_{j_t,2}) \}$ and $\alpha_{j,1} \ge \alpha_{j+1,1}$. If we set $a_{t,i} = t + f_3(\alpha_{i,1},\alpha_{i,2})$, then we have \[ a_{t,1} \le a_{t,2} \le \cdots \le a_{t,|J[t]|}, \] where $|J[t]|$ is the number of elements in $J[t]$. Note that $a_{t,|J[t]|} = t + f_3(0,t)$, as shown above. Since $A=R/I$ has the strong Lefschetz property, $(\alpha_1,\alpha_2) \in J[t]$ implies that \[ f_{3}(0,t+1) + 1 \le f_{3}(\alpha_1,\alpha_{2}), \] and hence \[ t + 1 + f_{3}(0,t+1) \le t + f_{3}(\alpha_1,\alpha_{2}), \] by Theorem \ref{Thm:SLP}. This shows that $a_{t+1,|J[t+1]|} \le a_{t,1}$ for any $0 \le t \le r_1(A) - 1$. Thus we have a non-decreasing sequence of degrees of the elements in the whole set $J_2$: \[ a_{r_1(A),1} \le a_{r_1(A),2} \le \ldots \le a_{r_1(A),|J[r_1(A)]|} \le a_{r_1(A)-1,1} \le \ldots \le a_{1,|J[1]|} \le a_{0,1}. \] Since $\beta_{2,2+d}(\mathrm{gin}(I))$ is the number of minimal generators $T$ of $\mathrm{gin}(I)$ such that $\deg(T) = d$ and $\max(T) = 3$, we have \begin{align*} \beta_{2,2+d}(\mathrm{gin}(I)) &= \sharp \{ (\alpha_1,\alpha_2) \in J_2 \ | \, \alpha_1 + \alpha_2 + f_3(\alpha_1,\alpha_2) = d \} \\ &= \sharp \{ (i,j) \ | \, a_{i,j} = d \}. \end{align*} Hence the sequence $\{a_{i,j}\}$ is uniquely determined by the $\beta_{2,2+d}(\mathrm{gin}(I))$, by Lemma \ref{Lem:UniqueSequence}, so we are done. 
\end{proof} So far, we have shown that if $A=R/I$ is a standard Artinian graded $k$-algebra of codimension 3 which has the strong Lefschetz property, then the minimal system of generators of $\mathrm{gin}(I)$ is determined by the graded Betti numbers of $I$. If $A$ has the strong Stanley property, we will show that the minimal system of generators of $\mathrm{gin}(I)$ is determined by the Hilbert function of $A$. For any $(\alpha_1,\alpha_2)$, $(\beta_1,\beta_2) \in \mathbb{Z}_{\ge 0}^{2}$, we say that $(\beta_1,\beta_2) \le (\alpha_1,\alpha_2)$ if $x_1^{\beta_1} x_2^{\beta_2} \le x_1^{\alpha_1} x_2^{\alpha_2}$ with respect to the reverse lexicographic order. We will use this order in the following lemma and proposition. We say that a monomial $T_1 = x_1^{\alpha_1} x_2^{\alpha_2}$ is obtained from a monomial $T_2=x_1^{\beta_1} x_2^{\beta_2}$ by an elementary Borel move if $T_1 = x_1 \frac{T_2}{x_2}$. In that case, we also say that $(\alpha_1,\alpha_2)$ is obtained from $(\beta_1,\beta_2)$ by an elementary Borel move. Set $|\alpha| := \alpha_1 + \alpha_2$. Note that if $|\alpha| = |\beta|$ and $(\beta_1,\beta_2) \le (\alpha_1,\alpha_2)$, then $x_1^{\alpha_1} x_{2}^{\alpha_{2}}$ is obtained from $x_1^{\beta_1} x_{2}^{\beta_{2}}$ by consecutive elementary Borel moves. \begin{Lem}\label{Lem:Contin_of_minimal_Generator} Suppose that $x_1^{\alpha_1} x_{2}^{\alpha_{2}}x_{3}^{\gamma}$ and $x_{2}^{|\alpha|}x_3^{\gamma}$ are minimal generators of $\mathrm{gin}(I)$ for some $(\alpha_1,\alpha_{2}) \in \mathbb{Z}_{\ge 0}^{2}$. Then $x_1^{\beta_1} x_{2}^{\beta_{2}} x_{3}^{\gamma}$ is also a minimal generator of $\mathrm{gin}(I)$, for any $(\beta_1,\beta_{2})$ $\le$ $(\alpha_1,\alpha_{2})$ with $|\beta| = |\alpha|$. \end{Lem} \begin{proof} Set $T_{\alpha}:=x_1^{\alpha_1} x_{2}^{\alpha_{2}}x_{3}^{\gamma}$ and $T_{\beta}:=x_1^{\beta_1} x_{2}^{\beta_{2}}x_{3}^{\gamma}$. 
By induction, it is enough to show that if $(\alpha_1,\alpha_2)$ is obtained from $(\beta_1,\beta_2)$ by an elementary Borel move, then $T_{\beta} \in \mathcal{G}(\mathrm{gin}(I))$. Note that, in that case, $(\alpha_1,\alpha_2) = (\beta_1+1,\beta_2-1)$. Since $x_{2}^{|\beta|}x_{3}^{\gamma} \in \mathcal{G}(\mathrm{gin}(I))$ and $\mathrm{gin}(I)$ is strongly stable, $T_{\beta} \in \mathrm{gin}(I)$. So it is enough to show that $T_{\beta}$ is a minimal generator. If not, there exists $(\delta_1,\delta_{2},\delta) \in \mathbb{Z}_{\ge 0}^{3}$ such that $\delta \le \gamma$, $\delta_l \le \beta_l$ for $l = 1, 2$, $(\delta_1,\delta_2,\delta) \neq (\beta_1,\beta_2,\gamma)$, and $T_{\delta} := x_1^{\delta_1} x_{2}^{\delta_{2}} x_{3}^{\delta} \in \mathrm{gin}(I)$. If $\delta_2 < \beta_2$, then $\alpha_2 = \beta_2 - 1 \ge \delta_2$; hence $T_{\delta}$ divides $T_{\alpha}$, which contradicts the fact that $T_{\alpha}$ is a minimal generator of $\mathrm{gin}(I)$. Thus we have $\delta_2 = \beta_2$. But, in this case, $T'_{\delta} = x_1 \frac{T_{\delta}}{x_2} \in \mathrm{gin}(I)$ divides $T_{\alpha}$, a contradiction. This shows that $T_{\beta} \in \mathcal{G}(\mathrm{gin}(I))$. \end{proof} \begin{Prop}\label{Prop:GinSSPUnique} If $A=R/I$ has the strong Stanley property, then the minimal system of generators of $\mathrm{gin}(I)$ is determined by the Hilbert function of $R/I$. In particular, if $J$ is a homogeneous ideal of $R$ with $H(R/I,d)=H(R/J,d)$ for all $d$, and if $R/J$ has the strong Stanley property, then $\mathrm{gin}(I) = \mathrm{gin}(J)$. \end{Prop} \begin{proof} Let $t = \max \{ i \ | \, A_i \neq 0 \}$. Since $A$ has the strong Stanley property, $A$ also has the weak Lefschetz property. From Corollary \ref{Cor:r_1WithWLP} and Corollary \ref{Cor:BettiForWLP}, $r_1(A)$ and the $\beta_{2,2+d}(\mathrm{gin}(I))$ are given by the Hilbert function of $R/I$. 
For $0 \le d \le r_1(A)$, let $B_d := \{ (\alpha_1,\alpha_{2}) \in \mathbb{Z}_{\ge 0}^{2} \ | \, \alpha_1 + \alpha_2 = d \}$, and let $J[d]$ be the set consisting of the smallest $\beta_{2,2+(t-d+1)}(\mathrm{gin}(I))$ elements of $B_d$ with respect to the order defined above. We will show that $J_{2}$ is the disjoint union of the sets $J[d]$, $0 \le d \le r_1(A)$. Since disjointness is clear from the definition of $J[d]$, it is enough to show that $J_2$ is the union of the sets $J[d]$. Let $(\alpha_1,\alpha_{2}) \in J_{2}$. Then $x_1^{\alpha_1} x_{2}^{\alpha_{2}} x_3^{f_3(\alpha_1,\alpha_{2})} \in \mathcal{G}(\mathrm{gin}(I))$, by Remark \ref{Cond:ElementInJ_i} (1). Since $A$ has the strong Stanley property, \[ f_3(\alpha_1,\alpha_{2}) = t-2|\alpha| + 1, \] where $|\alpha| = \alpha_1 + \alpha_2$, by Theorem \ref{Thm:SSP}. Note that the degree of $x_1^{\alpha_1} x_{2}^{\alpha_{2}} x_3^{t-2|\alpha| + 1}$ is $t - |\alpha| + 1$, and that $|\alpha| \le r_1(A)$: otherwise $x_1^{\alpha_1} x_{2}^{\alpha_{2}}x_3^{t-2|\alpha| + 1}$ could not be a minimal generator of $\mathrm{gin}(I)$, since $x_{2}^{r_1(A)+1} \in \mathrm{gin}(I)$ and $\mathrm{gin}(I)$ is strongly stable. Hence $x_{2}^{|\alpha|} x_3^{t-2|\alpha|+1} \in \mathcal{G}(\mathrm{gin}(I))$, by Corollary \ref{Cor:FlagMinGenForSSP}. Suppose that $(\alpha_1,\alpha_{2})$ is the $l$-th smallest element of $B_{|\alpha|}$. By Lemma \ref{Lem:Contin_of_minimal_Generator}, there are at least $l$ minimal generators $T$ of $\mathrm{gin}(I)$ such that $\max(T) = 3$ and $\deg(T) = t-|\alpha| + 1$. So $l \le \beta_{2,2+(t-|\alpha|+1)}(\mathrm{gin}(I))$, and hence $(\alpha_1,\alpha_{2}) \in J[|\alpha|]$. This shows that $\sum_{d=0}^{r_1(A)} \sharp J[d] \ \ge \sharp J_{2}$. Moreover, \begin{align*} \sharp J_{2} & = \sharp \{ T \in \mathcal{G}(\mathrm{gin}(I)) \ | \, \max(T) = 3 \} \\ & = \sum_{i=0}^{t+1} \beta_{2,2+i}(\mathrm{gin}(I)) \ \ge \sum_{d=0}^{r_1(A)} \sharp J[d] \ \ge \sharp J_{2}, \end{align*} by Remark \ref{Cond:ElementInJ_i} (1). 
Hence $J_2$ is the disjoint union of the sets $J[d]$, $0 \le d \le r_1(A)$. The assertion follows by Remark \ref{Remk:JDetermineGin}. \end{proof} \begin{Coro}\label{Cor:NSCondForSSP} Suppose that $A=R/I$ has the strong Stanley property. If $J$ is a homogeneous ideal of $R$ having the same Hilbert function as $I$, and if $R/J$ has the weak Lefschetz property, then $R/J$ has the strong Stanley property \ if and only if \[ \{ x_{2}^{r_1(A)+1}, x_{2}^{r_1(A)} x_3^{t-2r_1(A)+1}, \ldots, x_{2}^{i}x_3^{t-2i+1}, \ldots, x_{2} x_3^{t-1}, x_3^{t+1} \} \subset \mathrm{gin}(J), \] where $t = \max \{ i \ | \, A_i \neq 0 \}$. \end{Coro} \begin{proof} $(\Rightarrow)$ It follows from Proposition \ref{Prop:GinSSPUnique} and Corollary \ref{Cor:FlagMinGenForSSP}. $(\Leftarrow)$ Note that every element of the left-hand side is a minimal generator of $\mathrm{gin}(I)$, as shown in the proof of Proposition \ref{Prop:GinSSPUnique}. Since the other minimal generators $T$ of $\mathrm{gin}(I)$ with $\max(T)=3$ are obtained from $x_{2}^{i} x_3^{t-2i+1}$ by consecutive elementary Borel moves for some $0 \le i \le r_1(A)$, we have $\{ T \in \mathcal{G}(\mathrm{gin}(I)) \ | \, \max(T) = 3 \} \subset \mathrm{gin}(J)$. Since $H(R/I,d)=H(R/J,d)$, we have $\dim_k \mathrm{gin}(I)_d = \dim_k \mathrm{gin}(J)_d$ for all $d$. This shows that $\{ T \in \mathcal{G}(\mathrm{gin}(I)) \ | \, \max(T) = 3 \} = \{ T \in \mathcal{G}(\mathrm{gin}(J)) \ | \, \max(T) = 3 \}$, by Proposition \ref{Prop:WLP}. Hence the assertion follows from Remark \ref{Remk:JDetermineGin}. \end{proof} Proposition \ref{Prop:GinSLPUnique} and Proposition \ref{Prop:GinSSPUnique} fail for standard graded Artinian $k$-algebras of codimension $\ge 4$, as the following example shows. 
\begin{Ex}\label{Ex:NonUniqueSSP} Consider two strongly stable ideals of $R=k[x,y,z,w]$ defined as \begin{align*} I=(&x^2, xy^2, y^4, \underline{y^3z, xyz^3, y^2z^3, xz^5}, yz^5, z^7, \\ &z^6w, \underline{xz^4w^3}, yz^4w^3, z^5w^3, \underline{xyz^2w^5}, y^2z^2w^5, xz^3w^5, yz^3w^5, z^4w^5, \\ &y^3w^7, xyzw^7, y^2zw^7, xz^2w^7, yz^2w^7, z^3w^7, xyw^9, y^2w^9, xzw^9, yzw^9, z^2w^9, \\ &xw^{11}, yw^{11}, zw^{11}, w^{13}), \\ I'=(&x^2, xy^2, y^4, \underline{xyz^2, y^3z^2, xz^4, y^2z^4}, yz^5, z^7, \\ &z^6w, \underline{y^2z^3w^3}, yz^4w^3, z^5w^3, \underline{y^3zw^5}, y^2z^2w^5, xz^3w^5, yz^3w^5, z^4w^5, \\ &y^3w^7, xyzw^7, y^2zw^7, xz^2w^7, yz^2w^7, z^3w^7, xyw^9, y^2w^9, xzw^9, yzw^9, z^2w^9, \\ &xw^{11}, yw^{11}, zw^{11}, w^{13}). \end{align*} Note that $I$ and $I'$ have the same graded Betti numbers. And for any $(\alpha_1,\alpha_2,\alpha_3) \in J_3$, $f_4(\alpha_1,\alpha_2,\alpha_3) = t - 2|\alpha| + 1$, where $t=\max\{ i \ | \, (R/I)_i \neq 0 \} = \max\{ i \ | \, (R/I')_i \neq 0 \} = 12$. Hence they have the strong Stanley property. \end{Ex}\vskip 1cm \end{document}
\begin{document} \title{\huge The Residual Intersection Formula of Type II Exceptional Curves} \author{Ai-Ko Liu\footnote{Current Address: Mathematics Department of U.C. Berkeley}\footnote{HomePage: math.berkeley.edu/$\sim$akliu}} \date{June, 16} \maketitle \newtheorem{main}{Main Theorem} \newtheorem{theo}{Theorem} \newtheorem{lemm}{Lemma} \newtheorem{prop}{Proposition} \newtheorem{rem}{Remark} \newtheorem{cor}{Corollary} \newtheorem{mem}{Examples} \newtheorem{defin}{Definition} \newtheorem{axiom}{Axiom} \newtheorem{conj}{Conjecture} \newtheorem{assum}{Assumption} \section{Preliminary} This paper is a part of the program [Liu1], [Liu3], [Liu4], [Liu5], [Liu6], [Liu7] to understand the family Seiberg-Witten theory and its relationship with the enumeration of nodal or singular curves in linear systems of algebraic surfaces. In [Liu1] a symplectic approach to the universality theorem is given. In [Liu6] the algebraic geometric approach is given. In [Liu7] this result has been interpreted as an enumerative Riemann-Roch formula probing the non-linear information of the linear systems. The universality theorem implies that for a $(5n-1)$-very ample line bundle $L\mapsto M$, the ``number of $n$-node nodal curves'' in a generic $n$ dimensional linear sub-system of $|L|$ can be expressed as a universal polynomial of the characteristic numbers $c_1^2(L), c_1({\bf K}_M) \cdot c_1(L)$, $c_1^2({\bf K}_M)$ and $c_2(M)$, in the spirit of the surface Riemann-Roch formula. On the other hand, for $L$ not sufficiently very ample, the actual virtual number of nodal curves differs from the universal formula predicted by G\"{o}ttsche [Got]. In [Liu2] the corrections from the type $II$ exceptional classes have been interpreted as a non-linear analogue of second sheaf cohomology. In this paper, we build up the theory of type $II$ exceptional classes, parallel to the type $I$ theory built up in [Liu1], [Liu5] and [Liu6].
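For orientation, the first instance ($n=1$) of such a universal polynomial is classical and is quoted here only as an illustration (it is not derived in this paper): the number of $1$-nodal curves in a generic pencil inside $|L|$, for $L$ sufficiently ample, is

```latex
% Classical one-node universal polynomial (quoted for illustration):
N_1 \;=\; 3\,c_1^2(L) \;+\; 2\,c_1({\bf K}_M)\cdot c_1(L) \;+\; c_2(M).
% Sanity check on M = {\bf P}^2 with L = {\cal O}(d):
% N_1 = 3d^2 - 6d + 3 = 3(d-1)^2,
% the classical degree of the discriminant of degree-d plane curves.
```

The formula can be checked by an Euler characteristic count on a Lefschetz pencil, and it agrees with the discriminant degree $3(d-1)^2$ on ${\bf P}^2$.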
One major application of the type $II$ theory is to define the ``virtual number of nodal curves'' in $|L|$ on algebraic surfaces without any condition on $L$. A direct application of our theory is to argue the vanishing result of type $II$ contributions on universal families of $K3$ surfaces. Once this is achieved, the ``virtual numbers of nodal curves'' on $K3$ are equal to the polynomials constructed from the universality theorem [Liu6]. Another interesting application of the theory of type $II$ exceptional classes to enumerative problems is the solution of the Harvey-Moore conjecture [Liu2] on the enumeration of nodal curves on Calabi-Yau K3 fibrations. The layout of the paper is as follows. In section \ref{section; type2}, we review the algebraic family Kuranishi models of type $II$ exceptional classes. Then in section \ref{section; kura}, we construct the Kuranishi models explicitly. In section \ref{section; blowup}, we consider the blowup construction of the algebraic family Seiberg-Witten invariants and prove the main theorem of the paper on the mixed algebraic family Seiberg-Witten invariants attached to a finite collection of type $II$ exceptional classes. The following is an abbreviated form of the main theorem of the paper, stated in less technical terms. Please refer to theorem \ref{theo; degenerate} on page \pageref{theo; degenerate} for the more complete statement.
\begin{main}\label{main; 1} Given an algebraic family of algebraic surfaces $\pi:{\cal X}\mapsto B$ and a finite collection of type $II$ exceptional \footnote{For the definition of exceptional classes, please consult definition \ref{defin; exception} on page \pageref{defin; exception}.} classes, $e_{II; 1}$, $e_{II; 2}$, $e_{II; 3}$, $\cdots$, $e_{II; p}$ along ${\cal X}\mapsto B$ satisfying $e_{II; i}\cdot e_{II; j}\geq 0$ for $i\not=j$, the localized (excess) contribution of the algebraic family Seiberg-Witten invariant ${\cal AFSW}_{{\cal X}\mapsto B}(1, C)$ along the locus of co-existence $\times_B^{1\leq i\leq p}\pi_i(\times_B^{1\leq i\leq p}{\cal M}_{e_{II; i}})$ of type $II$ exceptional classes is well defined as a mixed algebraic family Seiberg-Witten invariant of $C-\sum_{1\leq i\leq p}e_{II; i}$, manifestly a topological invariant independent of the choices of the family Kuranishi models and the possible deformations of the family $\pi:{\cal X}\mapsto B$. As a direct consequence, the residual contribution of the family invariant ${\cal AFSW}_{{\cal X}\mapsto B}(1, C)$, which is ${\cal AFSW}_{{\cal X}\mapsto B}(1, C)$ minus the localized excess contribution, is well defined. \end{main} The above theorem also works for the mixed invariants ${\cal AFSW}_{{\cal X}\mapsto B}(\eta, C)$, $\eta\in {\cal A}_{\cdot}(B)$. The above theorem generalizes the theory of type $I$ exceptional curves developed in [Liu5] and [Liu6] to the cases when the moduli spaces of exceptional curves are not regular. At the end of the paper, we outline the procedure to extend the scheme to an inductive scheme on hierarchies of finite collections of type $II$ classes. Then we apply the inductive scheme to the universal families of $K3$ surfaces and argue the vanishing results on $K3$ universal families.
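Since the $K3$ application is invoked above, it may help to recall the standard numerology on a $K3$ fiber (these are well-known facts, recorded here only as an aside): the canonical class vanishes, so adjunction and Riemann-Roch take a particularly simple form.

```latex
% On a K3 surface S: K_S = 0, q = 0, p_g = 1, \chi({\cal O}_S) = 2, so
\chi({\cal O}_S(e)) \;=\; \frac{e^2}{2} + 2,
\qquad
C^2 \;=\; 2g(C) - 2 \;\ge\; -2
\ \ \text{for any irreducible curve } C \subset S.
% Hence an exceptional class with e^2 < -2 on a K3 fiber can only be
% represented by reducible curves.
```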
\section{The Review of Algebraic Family Kuranishi Models of Type II Exceptional Classes}\label{section; type2} Recall that a type $I$ exceptional class $e_i=E_i-\sum_{j_i}E_{j_i}$ of the fiber bundle of universal spaces $M_{n+1}\mapsto M_n$ has the following two crucial properties: \noindent 1. The family moduli space of $e_i$ is smooth of codimension $-{e_i^2-c_1({\bf K}_{M_{n+1}/M_n})\cdot e_i\over 2}$ $=-e_i^2-1$ in $M_{n}$ and can be identified with the closure of an admissible stratum, $Y(\Gamma_{e_i})$, for a fan-like admissible graph $\Gamma_{e_i}$. \label{fanlike} \noindent 2. Over each $b\in Y(\Gamma_{e_i})$, the class $e_i$ is represented by a unique holomorphic curve, called a type $I$ exceptional curve, in the fiber $M_{n+1}|_b=M_{n+1}\times_{M_n}\{b\}$. Over a Zariski open subset $Y_{\Gamma_{e_i}}$ of $Y(\Gamma_{e_i})$, the curves representing $e_i$ are irreducible. Over a finite union of codimension one loci in $Y(\Gamma_{e_i})-Y_{\Gamma_{e_i}}$, the curves are disjoint unions of irreducible components, each of which is an irreducible type $I$ exceptional curve. The fact that the fibrations of type $I$ curves are smooth and ``universal'' has played a crucial role in understanding the universal nature of the ``universality'' theorem [Liu1], [Liu6], as a natural extension of the surface Riemann-Roch theorem in enumerative geometry. The above properties of type $I$ exceptional curves are consequences of the existence of a ``canonical'' algebraic family Kuranishi model of $e_i$. These two properties have been used extensively in the proofs of the universality theorem [Liu1], [Liu6]. In this paper, we develop the necessary algebraic technique to deal with type $II$ exceptional curves. Recall the following definition of exceptional classes, which has already appeared in [Liu4], [Liu7]. \begin{defin}\label{defin; exception} A class $e$ is said to be exceptional over $\pi:{\cal X}\mapsto B$ if it satisfies the following conditions: \noindent (i).
The fiberwise self-intersection number $\int_B\pi_{\ast}(e^2)<0$. \noindent (ii). The relative degree $deg_{{\cal X}/B}e>0$. \end{defin} \begin{defin}\label{defin; type2} Consider the universal family ${\cal X}=M_{n+1}\mapsto B=M_n$. An exceptional class $e$ is said to be a type $II$ exceptional class if $e$ does not lie in the kernel of ${\cal A}_{\cdot}(M_{n+1})\mapsto {\cal A}_{\cdot}(M\times M_n)$. \end{defin} For a general (non-universal) fiber bundle ${\cal X}\mapsto B$, we also use the term ``type $II$ exceptional class'' in referring to exceptional classes of the fibration ${\cal X}\mapsto B$. In this paper, we often use $e_{II}$ with a subscript $II$ to denote a type $II$ exceptional class. To simplify our discussion and get to the key point, we impose an additional condition on $e_{II}$. \begin{assum}(Assumption on $e_{II}$)\label{assum; 2} $deg_{{\cal X}/B}(c_1({\bf K}_{{\cal X}/B})-e_{II})<0$. \end{assum} This implies that ${\cal R}^2\pi_{\ast}\bigl({\cal E}_{e_{II}}\bigr)=0$ by relative Serre duality. For a type $II$ exceptional class $e_{II}$ of the fiber bundle ${\cal X}\mapsto B$, there are usually no canonical choices of the algebraic family Kuranishi models. We know that when a curve representing $e_{II}$ is irreducible, it must be unique in the same fiber. However, the reducible representatives may even contain some irreducible component with a non-negative self-intersection. Thus, the following general symptoms have to be kept in mind. \label{bizzare} \noindent 1'. The family moduli space of $e_{II}$, ${\cal M}_{e_{II}}\mapsto B$ (discussed in more detail below), may not be of the expected algebraic family dimension $dim_{\bf C}B+p_g+{e_{II}^2-c_1({\bf K})\cdot e_{II}\over 2}$. The sub-locus $\subset {\cal M}_{e_{II}}$ over which the universal curve representing $e_{II}$ remains irreducible may not be open and dense in ${\cal M}_{e_{II}}$ and can even be empty sometimes. \noindent 2'.
The projection map ${\cal M}_{e_{II}}\mapsto B$ is usually not a closed immersion, and for each $b\in B$, the fiber of ${\cal M}_{e_{II}}\mapsto B$ above $b$, ${\cal M}_{e_{II}}|_b$, parametrizing all the curves dual to $e_{II}$ in the fiber ${\cal X}_b$, can be of positive dimension. This means that there can be (uncountably) many representatives of $e_{II}$ above this given $b\in B$. By the exceptionality condition on $e_{II}$, this can occur only when the representative contains more than one irreducible component. Our task is to develop a version of the residual intersection formula of the algebraic family Seiberg-Witten invariant based on the general symptoms 1' and 2'. We will demonstrate that there is a well-defined theory of type $II$ exceptional classes parallel to the theory of type $I$ exceptional classes. The basic tool we will use is the construction of the algebraic family Kuranishi models of $e_{II}$. \subsection{\bf A Short Review About the Kuranishi-Model of $e_{II}$} \label{subsection; review} In the following, we give a short review of the construction of the algebraic family Kuranishi models of $e_{II}$ and discuss their basic properties. Because our main application is about the universal families, we assume that ${\cal R}^2\pi_{\ast}{\cal O}_{\cal X}$ is isomorphic to ${\cal O}_B^{p_g}$. In this case\footnote{For the definition of the formal excess base dimension $febd$, please consult definition 4.5. of [Liu3] for more details.}, $febd(e_{II}, {\cal X}/B)=p_g$. We will assume implicitly in most of the current paper that $febd(e_{II}, {\cal X}/B)=p_g$ to simplify our discussion. Because in each fiber algebraic surface of the fiber bundle ${\cal X}\mapsto B$, the curve (divisor) representing $e_{II}$ may not be unique, we consider the base space, ${\cal T}_B({\cal X})$, of relative $Pic^0$ tori parametrizing all the holomorphic structures of ${\cal E}_{e_{II}}$ over $B$.
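A small dimension remark (standard, stated here only for bookkeeping): the fiber of ${\cal T}_B({\cal X})\mapsto B$ at a point $b$ is the torus $Pic^0({\cal X}_b)$, whose complex dimension is the irregularity of the fiber, so

```latex
dim_{\bf C}\,{\cal T}_B({\cal X}) \;=\; dim_{\bf C} B \;+\; q,
\qquad q \;=\; h^1({\cal X}_b, {\cal O}_{{\cal X}_b}).
```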
Let $({\bf V}_{II}, {\bf W}_{II}, \Phi_{{\bf V}_{II}{\bf W}_{II}})$ be an algebraic family Kuranishi model of $e_{II}$, defined over ${\cal T}_B({\cal X})$ and let $\Phi_{{\cal V}_{II}{\cal W}_{II}}:{\cal V}_{II} \mapsto {\cal W}_{II}$ be the corresponding morphism of locally free sheaves. Then by its defining property and the simplifying assumption \ref{assum; 2} on $e_{II}$ (on page \pageref{assum; 2}), we have $Ker(\Phi_{{\cal V}_{II}{\cal W}_{II}})\cong {\cal R}^0\pi_{\ast}{\cal E}_{e_{II}}$ and $Coker(\Phi_{{\cal V}_{II}{\cal W}_{II}})\cong {\cal R}^1\pi_{\ast}{\cal E}_{e_{II}}$, where ${\cal R}^i\pi_{\ast}\bigl({\cal E}_{e_{II}}\bigr)$ over ${\cal T}_B({\cal X})$ is the $i$-th derived image sheaf of ${\cal E}_{e_{II}}$ over ${\cal X}\times_B {\cal T}_B({\cal X})$ along $\pi:{\cal X}\times_B {\cal T}_B({\cal X})\mapsto {\cal T}_B({\cal X})$. The kernel $Ker(\Phi_{{\bf V}_{II}{\bf W}_{II}})$ defines an algebraic sub-cone ${\bf C}_{e_{II}}$ over ${\cal T}_B({\cal X})$, and its projectification ${\bf P}({\bf C}_{e_{II}})$ is nothing but the algebraic family moduli space ${\cal M}_{e_{II}}$ of $e_{II}$ over $B$. The bundle map ${\bf V}_{II}\mapsto {\bf W}_{II}$ induces a global section $s_{II}$ of ${\bf H}\otimes \pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II}$ over ${\bf P}({\bf V}_{II})$ such that ${\cal M}_{e_{II}}$ can be identified with the zero locus $Z(s_{II})$ of the section $s_{II}\in \Gamma({\bf P}({\bf V}_{II}), {\bf H}\otimes \pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II})$. As $Z(s_{II})$ may not be regular or be of the expected dimension, we have to rely on intersection theory [F] to construct the virtual fundamental class of ${\cal M}_{e_{II}}$. 
By applying the concept of localized top Chern class on page 244, section 14.1 of [F], $$[{\cal M}_{e_{II}}]_{vir}\doteq {\bf Z}(s_{II})\in {\cal A}_{dim_{\bf C}{\cal T}_B({\cal X})+rank_{\bf C}{\bf V}_{II}-1-rank_{\bf C} {\bf W}_{II}}({\cal M}_{e_{II}})$$ defines a unique cycle class representing the ``virtual fundamental class'' of the family moduli space ${\cal M}_{e_{II}}$ graded by the expected algebraic family Seiberg-Witten dimension \footnote{This works for $e_{II}$ with $febd(e_{II}, {\cal X}/B)=p_g$.}, $$\hskip -.2in ed=dim_{\bf C}{\cal T}_B({\cal X})+rank_{\bf C}{\bf V}_{II}-1-rank_{\bf C} {\bf W}_{II}=dim_{\bf C}B+q+(p_g-q+{e_{II}^2-e_{II}\cdot c_1({\bf K}_{{\cal X}/B}) \over 2})$$ $$=dim_{\bf C}B+p_g+{e_{II}^2-e_{II}\cdot c_1({\bf K}_{{\cal X}/B}) \over 2}.$$ This virtual fundamental class, i.e. the localized top Chern class ${\bf Z}(s_{II})$, can be pushed forward and mapped into a global cycle class in \noindent ${\cal A}_{dim_{\bf C}{\cal T}_B({\cal X})+rank_{\bf C}{\bf V}_{II}-1-rank_{\bf C} {\bf W}_{II}}({\bf P}({\bf V}_{II}))$ induced by the inclusion ${\cal M}_{e_{II}}\mapsto {\bf P}({\bf V}_{II})$. As expected, the virtual fundamental class ${\bf Z}(s_{II})$ localized in ${\cal A}_{\cdot}({\cal M}_{e_{II}})$ is independent of the choices of the algebraic family Kuranishi models of $e_{II}$. \begin{prop}\label{prop; independent} The cycle class ${\bf Z}(s_{II})\in {\cal A}_{\cdot}({\cal M}_{e_{II}})$ is independent of the choices of the algebraic family Kuranishi model $({\bf V}_{II}, {\bf W}_{II}, \Phi_{{\bf V}_{II}{\bf W}_{II}})$ of $e_{II}$.
\end{prop} \noindent Proof: Consider the fiber square \[ \begin{array}{ccc} Z(s_{II}) & \longrightarrow & {\bf P}({\bf V}_{II}) \\ \Big\downarrow & & \Big\downarrow \vcenter{ \rlap{$\scriptstyle{\mathrm{s_{II}}}\,$}}\\ {\bf P}({\bf V}_{II}) & \stackrel{s_{\pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II}\otimes {\bf H}}}{\longrightarrow} & \pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II}\otimes {\bf H} \end{array} \] where $s_{\pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II}\otimes {\bf H}}$ is the zero cross section of $\pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II}\otimes {\bf H}$. By the fact that ${\bf Z}(s_{II})= s_{\pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II}\otimes {\bf H}}^{!} [{\bf P}({\bf V}_{II})]$, ${\bf Z}(s_{II})$ is nothing but the following localized contribution of top Chern class along $Z(s_{II})$, $$\hskip -.2in \{c_{total}(\pi_{{\bf P}({\bf V}_{II})}^{\ast}{\bf W}_{II}\otimes {\bf H}|_{Z(s_{II})})\cap s_{total}(Z(s_{II}), {\bf P}({\bf V}_{II})) \}_{dim_{\bf C}{\cal T}_B({\cal X})+rank_{\bf C}{\bf V}_{II}-1-rank_{\bf C} {\bf W}_{II}}.$$ It suffices to show that the above localized contribution of top Chern class is independent of the choices of ${\bf V}_{II}$ and ${\bf W}_{II}$. \begin{lemm}\label{lemm; stable} The localized contribution of top Chern class of $s:\pi_{{\bf P}({\bf E})}^{\ast}{\bf F}\otimes {\bf H}$ induced by $\sigma:{\bf E}\mapsto {\bf F}$ is invariant under the stabilization $\sigma\sim \sigma\oplus id_{\bf G}:{\bf E}\oplus {\bf G}\mapsto {\bf F}\oplus {\bf G}$. \end{lemm} The lemma is similar to lemma 5.3 of [Liu3].
\noindent Proof of the lemma: Under the smooth embedding ${\bf P}({\bf E})\hookrightarrow {\bf P}({\bf E}\oplus {\bf G})$, the normal bundle of ${\bf P}({\bf E})$ in ${\bf P}({\bf E}\oplus {\bf G})$ is isomorphic to the bundle $\pi_{{\bf P}({\bf E})}^{\ast}{\bf G}\otimes {\bf H}$, as ${\bf P}({\bf E})$ can be viewed as the zero locus of a regular section of $\pi_{{\bf P}({\bf E}\oplus {\bf G})}^{\ast}{\bf G}\otimes {\bf H}$ induced by the bundle projection ${\bf E}\oplus {\bf G}\mapsto {\bf G}$. So the total Segre class $$s_{total}({\bf P}({\bf E}), {\bf P}({\bf E}\oplus {\bf G}))= s_{total}(\pi_{{\bf P}({\bf E})}^{\ast}{\bf G}\otimes {\bf H}),$$ and $$s_{total}(Z(s_{II}), {\bf P}({\bf E}\oplus {\bf G})) =s_{total}({\bf P}({\bf E}), {\bf P}({\bf E}\oplus {\bf G}))\cap s_{total}(Z(s_{II}), {\bf P}({\bf E}))$$ $$=c_{total}(-\pi_{{\bf P}({\bf E})}^{\ast}{\bf G}\otimes {\bf H})\cap s_{total}(Z(s_{II}), {\bf P}({\bf E})).$$ Thus $$ c_{total}(\pi_{{\bf P}({\bf E}\oplus {\bf G})}^{\ast}({\bf E}\oplus {\bf G})\otimes {\bf H}|_{Z(s_{II})}) \cap s_{total}(Z(s_{II}), {\bf P}({\bf E}\oplus {\bf G}))$$ $$=c_{total}(\bigl(\pi_{{\bf P}({\bf E})}^{\ast}({\bf E}\oplus {\bf G})\otimes {\bf H}- \pi_{{\bf P}({\bf E})}^{\ast}{\bf G}\otimes {\bf H}\bigr)|_{Z(s_{II})})\cap s_{total}(Z(s_{II}), {\bf P}({\bf E}))$$ $$=c_{total}(\pi_{{\bf P}({\bf E})}^{\ast}{\bf E}\otimes {\bf H}|_{Z(s_{II})})\cap s_{total}(Z(s_{II}), {\bf P}({\bf E})).$$ So the localized contribution of top Chern class is invariant under the stabilization. $\Box$ Once the lemma is proved, we may show that the localized top Chern classes defined by any two algebraic family Kuranishi models $(\Phi_{{\bf V}_{II}{\bf W}_{II}}, {\bf V}_{II}, {\bf W}_{II})$ and $(\Phi_{{\bf V}_{II}'{\bf W}_{II}'}, {\bf V}_{II}', {\bf W}_{II}')$ are equal.
In fact one may stabilize $(\Phi_{{\bf V}_{II}{\bf W}_{II}}, {\bf V}_{II}, {\bf W}_{II})$ into $(\Phi_{{\bf V}_{II}{\bf W}_{II}}\oplus id_{{\bf V}_{II}'}, {\bf V}_{II}\oplus {\bf V}_{II}', {\bf W}_{II}\oplus {\bf V}_{II}')$ and $(\Phi_{{\bf V}_{II}'{\bf W}_{II}'}, {\bf V}_{II}', {\bf W}_{II}')$ into $(id_{{\bf V}_{II}}\oplus\Phi_{{\bf V}_{II}'{\bf W}_{II}'}, {\bf V}_{II}\oplus {\bf V}_{II}', {\bf V}_{II}\oplus {\bf W}_{II}')$, respectively, by applying lemma \ref{lemm; stable}. We find that the localized top Chern classes are stabilized into $$\{c_{total}(\pi_{{\bf P}({\bf V}_{II}\oplus {\bf V}_{II}')}^{\ast} ({\bf W}_{II}\oplus {\bf V}_{II}')\otimes {\bf H})\cap s_{total}(i_1({\cal M}_{e_{II}}), {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}'))\}_{ed}$$ and $$\{c_{total}(\pi_{{\bf P}({\bf V}_{II}\oplus {\bf V}_{II}')}^{\ast} ({\bf W}_{II}'\oplus {\bf V}_{II})\otimes {\bf H})\cap s_{total}(i_2({\cal M}_{e_{II}}), {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}'))\}_{ed},$$ respectively. Here $i_1, i_2:{\cal M}_{e_{II}}\hookrightarrow {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}')$ denote two different imbeddings ${\cal M}_{e_{II}}\subset {\bf P}({\bf V}_{II})\subset {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}')$ and ${\cal M}_{e_{II}}\subset {\bf P}({\bf V}_{II}')\subset {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}')$, respectively. Firstly, because both ${\cal V}_{II}-{\cal W}_{II}$ and ${\cal V}_{II}'-{\cal W}_{II}'$ are equal to ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(e_{II})\bigr)- {\cal R}^1\pi_{\ast}\bigl({\cal O}_{\cal X}(e_{II})\bigr)$ in $K_0({\cal T}_B({\cal X}))$, we have ${\bf W}_{II}\oplus {\bf V}_{II}'\equiv {\bf W}_{II}'\oplus {\bf V}_{II}$ and the corresponding total Chern classes are equal.
Secondly, to show that $\rho_1=s_{total}(i_1({\cal M}_{e_{II}}), {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}'))$ and $\rho_2=s_{total}(i_2({\cal M}_{e_{II}}), {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}'))$ are equal, we notice that $i_1, i_2$ are within a ${\bf P}^1$ pencil of imbeddings of ${\cal M}_{e_{II}}$ induced by $j_{a, b}:{\bf C}_{e_{II}}\mapsto {\bf V}_{II}\oplus {\bf V}_{II}'$; $j_{a, b}(v)=aj_1(v)\oplus bj_2(v)$ for $v\in {\bf C}_{e_{II}}$, $(a, b)\in {\bf C}^2-(0, 0)$. Here $j_1:{\bf C}_{e_{II}}\mapsto {\bf V}_{II}$ and $j_2:{\bf C}_{e_{II}}\mapsto {\bf V}_{II}'$ are the imbeddings of abelian cones projectified into ${\cal M}_{e_{II}}\subset {\bf P}({\bf V}_{II}), {\bf P}({\bf V}_{II}')$, respectively. So we may consider the embedding ${\bf P}^1\times {\cal M}_{e_{II}}\mapsto {\bf P}^1\times {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}')$ and the total Segre class $s_{total}({\bf P}^1\times {\cal M}_{e_{II}}, {\bf P}^1\times {\bf P}({\bf V}_{II}\oplus {\bf V}_{II}'))$ of its normal cone. It is clear that if we restrict the total Segre class to different $\{t\}\times {\cal M}_{e_{II}}$, $t\in {\bf P}^1$, the resulting class $\in {\cal A}_{\cdot}({\cal M}_{e_{II}})$ is independent of $t$. When $t=0$ and $t=\infty$, we recover $\rho_1$ and $\rho_2$, respectively. Thus $\rho_1=\rho_2$. Because the Chern classes and Segre classes are identified, so are the corresponding localized top Chern classes of ${\cal M}_{e_{II}}$. $\Box$ Because the localized top Chern class is canonically defined, we will denote it by $[{\cal M}_{e_{II}}]_{vir}$. If we push forward the cycle class ${\bf Z}(s_{II})=[{\cal M}_{e_{II}}]_{vir}$ into ${\bf P}({\bf V}_{II})$, then by proposition 14.1(a) on page 244 of [F], the image cycle class is equal to the global $c_{top}(\pi_{{\bf P}({\bf V}_{II})}^{\ast} {\bf W}_{II}\otimes {\bf H})\cap [{\bf P}({\bf V}_{II})]$.
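As a consistency check (our remark, not part of the original argument): in the unobstructed situation the localized top Chern class reduces to the ordinary fundamental class.

```latex
% If \Phi_{V_{II} W_{II}} is a surjective bundle map of constant rank, then
% Ker(\Phi_{V_{II} W_{II}}) is a sub-bundle and s_{II} is a regular section
% vanishing in codimension rank(W_{II}), so
{\cal M}_{e_{II}} \;=\; {\bf P}\bigl(Ker\,\Phi_{{\bf V}_{II}{\bf W}_{II}}\bigr)
% is smooth of the expected dimension ed, and
[{\cal M}_{e_{II}}]_{vir} \;=\; {\bf Z}(s_{II}) \;=\; [{\cal M}_{e_{II}}].
```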
\section{The Construction of Algebraic Family Kuranishi Models}\label{section; kura} In the previous section, we have discussed the independence of $[{\cal M}_{e_{II}}]_{vir}$ from the choices of the algebraic family Kuranishi models of $e_{II}$. In the following, we will first review the construction of the family Kuranishi models. When we perform the blowup/residual intersection theory construction of algebraic family Seiberg-Witten invariants in subsection \ref{subsection; stable}, these explicitly constructed algebraic family Kuranishi models will play a crucial role. As was mentioned earlier, we focus mostly on the $febd(e_{II}, {\cal X}/B)=p_g$ case in the following discussion. \subsection{The Construction of family Kuranishi Model of the Type $II$ Exceptional Curves}\label{subsection; type2K} In this subsection, we review the explicit construction of the algebraic family Kuranishi model of a type $II$ exceptional class $e_{II}$. Let $\pi:{\cal X}\mapsto B$ be a fiber bundle of algebraic surfaces and let ${\cal T}_B({\cal X})$ be the fiber bundle of the relative $Pic^0$ tori. As usual, let ${\cal E}_{e_{II}}\mapsto {\cal T}_B({\cal X})$ be the invertible sheaf corresponding to $e_{II}$. Let $D\subset {\cal X}$ be an ample effective divisor on ${\cal X}$ and let $n$ be a sufficiently large integer. \begin{lemm}\label{lemm; ample} If $|D|$ is chosen to be sufficiently very ample, then the divisor $D$ in $|D|$ can be chosen such that the composition map $D\subset {\cal X}\mapsto B$ is of relative dimension one. \end{lemm} \noindent Proof of lemma \ref{lemm; ample}: For all the closed points $b\in B$, consider the ${\cal O}_{{\cal X}}(D)$-twisted short exact sequence, $$0\mapsto {\cal I}_{{\cal X}_b}(D)\mapsto {\cal O}_{\cal X}(D)\mapsto {\cal O}_{{\cal X}_b}(D)\mapsto 0.$$ By theorem 1.5.
of [Ko], we may replace $D$ by a suitably large multiple such that ${\cal R}^i\pi_{\ast}{\cal I}_{{\cal X}_b}(D)=0$, ${\cal R}^i\pi_{\ast}{\cal O}_{{\cal X}_b}(D)=0$, for $i>0$. So the derived exact sequence from the above short exact sequence generates a short exact sequence \footnote{Here $k(b)$ is the residue field of $b$.} for each $b\in B$, $$0\mapsto H^0({\cal X}, {\cal I}_{{\cal X}_b}(D))\otimes k(b) \mapsto H^0({\cal X}, {\cal O}_{\cal X}(D))\mapsto H^0({\cal X}_b, {\cal O}_{{\cal X}_b}(D))\mapsto 0.$$ The space $H^0({\cal X}, {\cal I}_{{\cal X}_b}(D))\otimes k(b)$ is the subspace of the global sections $H^0({\cal X}, {\cal O}_{\cal X}(D))$ which restrict to the trivial section on ${\cal X}_b$. As $b$ moves, these vector spaces form a vector bundle, denoted by ${\bf U}$. Its rank can be calculated to be $$\chi({\cal X}, {\cal O}_{\cal X}(D))-\chi({\cal X}_b, {\cal O}_{{\cal X}_b}(D)) \ll dim_{\bf C}|D|-dim_{\bf C}B,$$ if $D$ is chosen such that $\chi({\cal X}_b, {\cal O}_{{\cal X}_b}(D))\gg dim_{\bf C}B$. Once such an inequality is achieved, $dim_{\bf C}|D|$ is much larger than the dimension of the projective space bundle ${\bf P}({\bf U})$ over $B$. Choosing an element of $|D|-Im({\bf P}({\bf U}))$ gives rise to a cross section which restricts to a non-trivial section on each fiber ${\cal X}_b$. So after replacing the original $D$ by the defining divisor $D$ of the chosen section, the newly chosen $D$ intersects each ${\cal X}_b$ properly and cuts it down to a curve. The lemma is proved. $\Box$ Once we have such a carefully chosen $D$, we are ready to construct Kuranishi models of $e_{II}$.
The following short exact sequence $$\hskip -.3in 0\mapsto {\cal O}_{\cal X}\otimes {\cal E}_{e_{II}}\mapsto {\cal O}_{\cal X}(nD)\otimes {\cal E}_{e_{II}}\mapsto {\cal O}_{nD}(nD)\otimes {\cal E}_{e_{II}}\mapsto 0$$ implies $$ 0\mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}\otimes {\cal E}_{e_{II}}\bigr) \mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{e_{II}} \bigr)\mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\otimes {\cal E}_{e_{II}}\bigr)$$ $$\hskip -.4in \mapsto {\cal R}^1\pi_{\ast}\bigl({\cal O}_{\cal X}\otimes {\cal E}_{e_{II}} \bigr)\mapsto 0,$$ and ${\cal R}^1\pi_{\ast}\bigl({\cal O}_{nD}(nD)\otimes {\cal E}_{e_{II}}\bigr) \cong {\cal R}^2\pi_{\ast}\bigl({\cal E}_{e_{II}}\bigr)$, because by the relative Serre vanishing theorem ${\cal R}^i\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{e_{II}}\bigr)=0$ for $i>0$ and large enough $n$. By our simplifying assumption \ref{assum; 2} on $e_{II}$ (on page \pageref{assum; 2}), ${\cal R}^2\pi_{\ast}\bigl({\cal E}_{e_{II}}\bigr)=0$. So ${\cal R}^1\pi_{\ast}\bigl({\cal O}_{nD}(nD)\otimes {\cal E}_{e_{II}}\bigr)=0$ and ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\otimes {\cal E}_{e_{II}}\bigr)$ is locally free. Then we may take ${\cal V}_{e_{II}}$ and ${\cal W}_{e_{II}}$ to be the locally free sheaves ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{e_{II}} \bigr)$ and ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\otimes {\cal E}_{e_{II}}\bigr)$, respectively. Set ${\bf V}_{e_{II}}$, ${\bf W}_{e_{II}}$ to be the vector bundles associated with the locally free sheaves ${\cal V}_{e_{II}}={\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{e_{II}}\bigr)$ and ${\cal W}_{e_{II}}= {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD) \otimes {\cal E}_{e_{II}}\bigr)$, respectively.
These bundles depend on the choices of $D$ and $n$ and are not canonical\footnote{We drop the notational dependence of ${\bf V}_{e_{II}}$ and ${\bf W}_{e_{II}}$ on $D$ and $n$ to simplify our symbols.}. Then $\Phi_{{\bf V}_{e_{II}}{\bf W}_{e_{II}}}:{\bf V}_{e_{II}}\mapsto {\bf W}_{e_{II}}$ defines an algebraic family Kuranishi model of $e_{II}$, as was described in section \ref{subsection; type2K}. Under the assumption that $e_{II}-c_1({\bf K}_{{\cal X}/B})$ is nef, by the Riemann-Roch theorem $rank_{\bf C}({\bf V}_{e_{II}}-{\bf W}_{e_{II}})=1-q+p_g+{e_{II}^2- c_1({\bf K}_{{\cal X}/B})\cdot e_{II}\over 2}$. \begin{rem}\label{rem; geometric} It is useful to comprehend the geometric meaning of the above Kuranishi model. By adjoining a very ample $nD$, the family moduli space of curves ${\cal M}_{e_{II}}$ is naturally embedded into the family moduli space of a better behaved class $e_{II}+nD$, which \footnote{Thanks to the sufficient very ampleness of $D$ and the large number $n$.} has the nice structure of a projective space bundle over ${\cal T}_B{\cal X}$. Then the sub-locus ${\cal M}_{e_{II}}$ is characterized by the cross section of the obstruction bundle induced by $\Phi_{{\bf V}_{e_{II}}{\bf W}_{e_{II}}}$, which requires the ray of non-zero sections in $H^0({\cal X}_b, {\cal E}_{e_{II}}\otimes {\cal O}_{\cal X}(nD)|_b)$ to vanish along the effective divisor $nD$ to recover a curve representing $e_{II}$. \end{rem} In the above argument, we have not made use of the exceptionality property, i.e. definition \ref{defin; exception}, on $e_{II}$. So we may replace $e_{II}$ by any $\underline{C}$ or $\underline{C}-e_{II}$ which satisfies the nef condition on $\underline{C}-e_{II}-c_1({\bf K}_{{\cal X}/B})$. As before, we still assume $febd(\underline{C}, {\cal X}/B)=febd(\underline{C}-e_{II}, {\cal X}/B)=p_g$ to simplify our discussion. The above argument still goes through without modification.
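The rank count quoted above can be unpacked fiberwise (a routine verification, under the stated vanishing of the higher derived images): over a point $b$ of ${\cal T}_B({\cal X})$,

```latex
% With the higher derived images vanishing, ranks are fiberwise
% Euler characteristics:
rank_{\bf C}\,{\bf V}_{e_{II}} - rank_{\bf C}\,{\bf W}_{e_{II}}
  \;=\; \chi\bigl({\cal O}_{{\cal X}_b}(nD)\otimes {\cal E}_{e_{II}}\bigr)
      - \chi\bigl({\cal O}_{nD\cap {\cal X}_b}(nD)\otimes {\cal E}_{e_{II}}\bigr)
  \;=\; \chi\bigl({\cal E}_{e_{II}}|_{{\cal X}_b}\bigr),
% and the surface Riemann-Roch formula on the fiber gives
\chi\bigl({\cal E}_{e_{II}}|_{{\cal X}_b}\bigr)
  \;=\; \chi({\cal O}_{{\cal X}_b})
      + {e_{II}^2 - c_1({\bf K}_{{\cal X}/B})\cdot e_{II}\over 2}
  \;=\; 1 - q + p_g
      + {e_{II}^2 - c_1({\bf K}_{{\cal X}/B})\cdot e_{II}\over 2}.
```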
We can choose the same effective ample $D$ and a uniformly large $n$ such that the sheaf morphisms ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{\underline{C}}\bigr)\mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD) \otimes {\cal E}_{\underline{C}}\bigr)$ and ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{\underline{C}-e_{II}}\bigr)\mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD) \otimes {\cal E}_{\underline{C}-e_{II}}\bigr)$ define the algebraic family Kuranishi models of $\underline{C}$ and $\underline{C}-e_{II}$, respectively. We denote the corresponding Kuranishi-model vector bundles by ${\bf V}_{\underline{C}}$, ${\bf W}_{\underline{C}}$ and ${\bf V}_{\underline{C}-e_{II}}$, ${\bf W}_{\underline{C}-e_{II}}$, respectively. In the following, we will fix a pair of $D$ and $n$ and discuss the switching of the family Kuranishi models between $\underline{C}$ and $\underline{C}-e_{II}$. \begin{rem}\label{rem; similar} We notice that if we formally replace $nD+\underline{C}$ by $C$, $nD$ by ${\bf M}(E)E$ and $\underline{C}$ by $C-{\bf M}(E)E$, the above algebraic family Kuranishi model of $\underline{C}$ corresponds formally to the canonical algebraic family Kuranishi model of $C-{\bf M}(E)E$, introduced in [Liu3] and used heavily in [Liu6], $${\cal R}^0\pi_{\ast}\bigl({\cal O}_{M_{n+1}}\otimes {\cal E}_C\bigr)\mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf M}(E)E}\otimes {\cal E}_C\bigr).$$ \end{rem} This analogue provides us with an easy way to memorize and link their family Kuranishi models. \subsection{The Switching of Family Kuranishi Models Involving type $II$ Exceptional Classes}\label{subsection; type II} In the following, we compare the Kuranishi data of $\underline{C}$ and $\underline{C}-e_{II}$ (using only one $e_{II}$). Consider the pull-back of the Kuranishi model data of $\underline{C}$ and $\underline{C}-e_{II}$ from $B$ to ${\cal M}_{e_{II}}$ by the natural projection ${\cal M}_{e_{II}}\mapsto B$.
Let ${\bf e}_{II}$, with the following commutative diagram, \[ \begin{array}{ccc} {\bf e}_{II} & \hookrightarrow & {\cal X}\times_B{\cal M}_{e_{II}}\\ \Big\downarrow & & \Big\downarrow \\ {\cal M}_{e_{II}} & = & {\cal M}_{e_{II}} \end{array} \] denote the universal type $II$ curve over ${\cal M}_{e_{II}}$. \begin{prop}\label{prop; compare} Consider the pull-backs of the family Kuranishi models of $\underline{C}$ and $\underline{C}-e_{II}$ to ${\cal M}_{e_{II}}$. Between the pull-backs to ${\cal M}_{e_{II}}$ of the algebraic family Kuranishi models of $\underline{C}$ and $\underline{C}-e_{II}$ constructed following the recipe in subsection \ref{subsection; type2K}, there is a commutative diagram of ``columns of short exact sequences'' of locally free sheaves, \[ \begin{array}{ccc} 0 & & 0\\ \Big\downarrow & & \Big\downarrow\\ {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{\underline{C}-e_{II}}\bigr)&\stackrel{\Phi_{{\cal V}_{\underline{C} -e_{II}}{\cal W}_{\underline{C}-e_{II}}}}{\longrightarrow} & {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\otimes {\cal E}_{\underline{C}-e_{II}}\bigr)\\ \Big\downarrow & & \Big\downarrow\\ {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}_{\underline{C}}\bigr)\otimes {\cal H}_{II} & \stackrel{\Phi_{{\cal V}_{\underline{C}}{\cal W}_{\underline{C}}}\otimes id_{{\cal H}_{II}}}{\longrightarrow}& {\cal R}^0\pi_{\ast}\bigl( {\cal O}_{nD}(nD)\otimes {\cal E}_{\underline{C}}\bigr) \otimes {\cal H}_{II}\\ \Big\downarrow & & \Big\downarrow\\ {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}(nD) \otimes {\cal E}_{\underline{C}}\bigr) \otimes {\cal H}_{II} & \stackrel{}{\longrightarrow}& {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}(nD)\otimes {\cal E}_{\underline{C}}\bigr)\otimes {\cal H}_{II} \\ \Big\downarrow & & \Big\downarrow\\ 0 & & 0\\ \end{array} \] \end{prop} Here the hyperplane bundle ${\cal H}_{II}\mapsto {\cal M}_{e_{II}}$ in the above diagram is induced from the embedding ${\cal
M}_{e_{II}}\hookrightarrow {\bf P}({\bf V}_{e_{II}})$. \noindent Sketch of the Proof: Both columns are the derived long exact sequences of twisted versions of the following short exact sequence, pulled back to ${\cal X}\times_B{\cal M}_{e_{II}}\mapsto {\cal M}_{e_{II}}$, $ 0\mapsto {\cal O}_{\cal X}(-{\bf e}_{II})\mapsto {\cal O}_{\cal X}\mapsto {\cal O}_{{\bf e}_{II}}\mapsto 0$ and its restriction to the non-reduced $nD$. These derived sequences are short exact because $n$ has been chosen large enough to guarantee the vanishing of ${\cal R}^i\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\otimes {\cal E}\bigr)$, $i>0$, for either ${\cal E}={\cal E}_{\underline{C}}$ or ${\cal E}={\cal E}_{\underline{C}-e_{II}}$. The commutativity of the sequences follows from the commutativity of the tensoring operations (tensoring with the defining sections of ${\bf e}_{II}\subset {\cal X}\times_B{\cal M}_{e_{II}}$) and the restriction to $nD$. The map in the last row is induced by the derived exact sequence of $0\mapsto {\cal O}_{{\bf e}_{II}}\otimes {\cal E}_{\underline{C}}\mapsto {\cal O}_{{\bf e}_{II}}(nD)\otimes {\cal E}_{\underline{C}}\mapsto {\cal O}_{{\bf e}_{II}\cap nD}(nD)\otimes {\cal E}_{\underline{C}}\mapsto 0$. If ${\bf e}_{II}\cap nD\mapsto {\cal M}_{e_{II}}$ were a finite morphism, the sheaf ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}(nD)\otimes {\cal E}_{\underline{C}}\bigr)$ would automatically be locally free, with rank equal to the relative length of ${\bf e}_{II}\cap nD\mapsto {\cal M}_{e_{II}}$. Without the finiteness assumption on the morphism ${\bf e}_{II}\cap nD\mapsto {\cal M}_{e_{II}}$, we check the local freeness of ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}(nD)\otimes {\cal E}_{\underline{C}}\bigr)$ by proving that the second column in the above commutative diagram remains left exact after tensoring with $k(y)$, for all closed points $y\in {\cal M}_{e_{II}}$. 
$\Box$ The exact sequences in proposition \ref{prop; compare} imply that ${\bf V}_{\underline{C}-e_{II}}$ is a sub-bundle of ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}$. Denote the quotient bundles associated with ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}(nD)\otimes {\cal E}_{\underline{C}} \bigr)\otimes {\cal H}_{II}$ and ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}(nD)\otimes {\cal E}_{\underline{C}}\bigr)\otimes {\cal H}_{II}$ by ${\bf V}'$ and ${\bf W}'$, respectively. Then ${\bf P}_B({\bf V}_{\underline{C}-e_{II}})$ can be viewed as the smooth sub-scheme of ${\bf P}({\bf V}_{\underline{C}}\otimes {\bf H}_{II})\cong {\bf P}({\bf V}_{\underline{C}})$ defined by the zero locus of the section of ${\bf H}\otimes \pi_{{\bf P}_B({\bf V}_{\underline{C}})}^{\ast}{\bf V}'$ induced by ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf V}'\mapsto 0$. This implies that we may replace the original ambient space ${\bf P}_B({\bf V}_{\underline{C}-e_{II}})$ of the family moduli space ${\cal M}_{\underline{C}-e_{II}}$ of $\underline{C}-e_{II}$ by ${\bf P}_B({\bf V}_{\underline{C}})$, and replace the original obstruction bundle ${\bf H}\otimes \pi_{{\bf P}_B({\bf V}_{\underline{C}}\otimes {\bf H}_{II})}^{\ast}{\bf W}_{\underline{C}-e_{II}}$ by an extended obstruction bundle equivalent to ${\bf H}\otimes \pi_{{\bf P}_B({\bf V}_{\underline{C}}\otimes {\bf H}_{II})}^{\ast}({\bf W}_{\underline{C}- e_{II}}\oplus {\bf V}')$ in the appropriate $K$-group. In the following we discuss how the desired ``extended obstruction bundle'' can be constructed from the standard bundle extension construction. Consider a bundle extension of ${\bf V}'$ by ${\bf W}_{\underline{C}-e_{II}}$. All such bundle extensions are classified by the group $Ext^1({\bf V}', {\bf W}_{\underline{C}-e_{II}})$. 
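Before singling out a particular extension, it may be worth recording a standard observation (an elementary consequence of the Whitney sum formula, not spelled out in the source): any bundle extension $0\mapsto {\bf W}_{\underline{C}-e_{II}}\mapsto {\bf W}\mapsto {\bf V}'\mapsto 0$ represents the same class $[{\bf W}_{\underline{C}-e_{II}}]+[{\bf V}']$ in the $K$-group, and
$$c_{total}({\bf W})=c_{total}({\bf W}_{\underline{C}-e_{II}})\cdot c_{total}({\bf V}')=c_{total}({\bf W}_{\underline{C}-e_{II}}\oplus {\bf V}').$$
So the Chern class data of the extended obstruction bundle are independent of the extension class chosen in $Ext^1({\bf V}', {\bf W}_{\underline{C}-e_{II}})$; the particular choice matters only for the compatibility of the bundle maps.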
By applying the left exact functor $HOM({\bf V}', \bullet)$ to the short exact sequence $0\mapsto {\bf W}_{\underline{C}-e_{II}}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}' \mapsto 0$, we get the following portion of the derived long exact sequence, $$ HOM({\bf V}', {\bf W}_{\underline{C}}\otimes {\bf H}_{II})\mapsto HOM({\bf V}', {\bf W}')\stackrel{\delta}{\longrightarrow} Ext^1({\bf V}', {\bf W}_{\underline{C}-e_{II}})\cdots.$$ The bundle map ${\bf V}'\mapsto {\bf W}'$ induced from the sheaf commutative diagram in proposition \ref{prop; compare} gives an element in $HOM({\bf V}', {\bf W}')$. Its image in $Ext^1({\bf V}', {\bf W}_{\underline{C}-e_{II}})$ under the connecting homomorphism determines a bundle extension and therefore defines the new extended obstruction bundle ${\bf W}_{new}$. And we have the following defining short exact sequence $$0\mapsto {\bf W}_{\underline{C}-e_{II}}\mapsto {\bf W}_{new}\mapsto {\bf V}'\mapsto 0.$$ To show that there is a canonically induced bundle map ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$, we apply $HOM({\bf W}_{new}, \bullet)$ to the short exact sequence $0\mapsto {\bf W}_{\underline{C}-e_{II}}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}'\mapsto 0$ and get the following portion of the derived long exact sequence, $$ \cdots\mapsto HOM({\bf W}_{new}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II}) \mapsto HOM({\bf W}_{new}, {\bf W}')\mapsto Ext^1({\bf W}_{new}, {\bf W}_{\underline{C}-e_{II}})\mapsto \cdots.$$ The composition ${\bf W}_{new}\mapsto {\bf V}'\mapsto {\bf W}'$ induces an element in $HOM({\bf W}_{new}, {\bf W}')$. To show that it is the image of an element in $HOM({\bf W}_{new}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II})$, it suffices to show that its image in $Ext^1({\bf W}_{new}, {\bf W}_{\underline{C}-e_{II}})$ vanishes. 
On the other hand, the derived long exact sequence of the contravariant functor $HOM(\bullet, {\bf W}_{\underline{C}-e_{II}})$ applied to the defining short exact sequence of ${\bf W}_{new}$, $0\mapsto {\bf W}_{\underline{C}-e_{II}}\mapsto {\bf W}_{new}\mapsto {\bf V}'\mapsto 0$, yields the exact sequence $$HOM({\bf W}_{\underline{C}-e_{II}}, {\bf W}_{\underline{C}-e_{II}}) \mapsto Ext^1({\bf V}', {\bf W}_{\underline{C}-e_{II}}) \mapsto Ext^1({\bf W}_{new}, {\bf W}_{\underline{C}-e_{II}})\cdots.$$ The extension class $\in Ext^1({\bf V}', {\bf W}_{\underline{C}-e_{II}})$ defining ${\bf W}_{new}$ is the image of $id_{{\bf W}_{\underline{C}-e_{II}}} \in HOM({\bf W}_{\underline{C}-e_{II}}, {\bf W}_{\underline{C}-e_{II}})$. Therefore by exactness of the above derived sequence its image in $Ext^1({\bf W}_{new}, {\bf W}_{\underline{C}-e_{II}})$ must vanish. The bundle morphism $\in HOM({\bf W}_{new}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II})$ restricts to the bundle injection ${\bf W}_{\underline{C}-e_{II}}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ on the sub-bundle ${\bf W}_{\underline{C}-e_{II}}\subset {\bf W}_{new}$. To show that ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf V}'$ can be lifted \footnote{Notice that the lifting may not be unique!} to some ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}_{new}$, it suffices to show that its image $\iota$ in $Ext^1({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}-e_{II}})$ vanishes. 
This is ensured by the following derived long exact sequence, $$\cdots HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II}) \mapsto HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}')\mapsto Ext^1({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}-e_{II}})\cdots,$$ the fact that $\iota$ is the composite image of the element ${\bf V}_{\underline{C}}\otimes {\bf H}_{II} \mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ in $HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II})$, and the exactness of the above long sequence. The following lemma fixes the unique lifting of ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf V}'$. \begin{lemm}\label{lemm; compatible} Among the possible liftings of ${\bf V}_{\underline{C}}\otimes {\bf H}_{II} \mapsto {\bf V}'$, there is a unique lifting ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}_{new}$ which makes the following diagram commutative, \[ \begin{array}{ccc} {\bf V}_{\underline{C}}\otimes {\bf H}_{II} & \longrightarrow & {\bf W}_{new}\\ \Big\downarrow & & \Big\downarrow \\ {\bf V}_{\underline{C}}\otimes {\bf H}_{II} & \longrightarrow & {\bf W}_{\underline{C}}\otimes {\bf H}_{II} \end{array} \] \end{lemm} The above commutative diagram ensures the compatibility between the (extended) algebraic family Kuranishi models of $\underline{C}-e_{II}$ and of $\underline{C}$ above ${\cal M}_{e_{II}}$. \noindent Proof of Lemma \ref{lemm; compatible}: Start with an arbitrary lifting $\kappa:{\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}_{new}$. 
We have the following commutative diagram among $HOM$ groups, \[ \hskip -.4in \begin{array}{ccccc} HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}-e_{II}}) & \mapsto & HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{new} )& \mapsto & HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf V}')\\ \Big\downarrow \vcenter{ \rlap{$\scriptstyle{\mathrm{=}}\,$}} & & \Big\downarrow & & \Big\downarrow \\ HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}-e_{II}})& \mapsto & HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II})& \mapsto & HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}') \end{array} \] We consider the difference between ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ and the image of $\kappa$ in $HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II})$ and denote it by $\zeta$. The image of $\zeta$ in $HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}')$ vanishes because both the image of $\kappa$ and ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ have the same image ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}'$ in $HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}')$. By exactness of the second row above, there exists an element $\rho\in HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}-e_{II}})$ which maps onto $\zeta$. By the commutativity of the diagram, denote the image of the element $\rho$ in $HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{new})$ by $\rho'$. 
Then we replace $\kappa$ by $\kappa+\rho'$, and it is easy to see that the image of $\kappa+\rho'$ in $HOM({\bf V}_{\underline{C}}\otimes {\bf H}_{II}, {\bf W}_{\underline{C}}\otimes {\bf H}_{II})$ is ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$. The commutativity of the original diagram follows. $\Box$ It is vital to understand the bundle map ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$. If the bundle map is injective over ${\cal M}_{e_{II}}$, then the restriction of the family moduli space of $\underline{C}-e_{II}$ is isomorphic to the restriction of the family moduli space of $\underline{C}$ above ${\cal M}_{e_{II}}$. Namely, we have the isomorphism ${\cal M}_{\underline{C}-e_{II}}\times_B{\cal M}_{e_{II}}\cong {\cal M}_{\underline{C}}\times_B{\cal M}_{e_{II}}$. This will be formulated as a {\bf special condition} on page \pageref{special}. On the other hand, the possible failure of the injectivity of ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ over ${\cal M}_{e_{II}}$ may result in a discrepancy in identifying the fiber product ${\cal M}_{\underline{C}-e_{II}}\times_B{\cal M}_{e_{II}}$ with ${\cal M}_{\underline{C}}\times_B{\cal M}_{e_{II}}$. In the following, we identify the kernel cone of ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$. \begin{prop}\label{prop; thesame} Given the unique bundle map lifting ${\bf V}_{\underline{C}}\otimes {\bf H}_{II} \mapsto {\bf W}_{new}$ of ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}\mapsto {\bf V}'$ fixed by lemma \ref{lemm; compatible}, the following commutative diagram of vector bundles \[ \begin{array}{ccc} {\bf W}_{new} & \longrightarrow & {\bf W}_{\underline{C}}\otimes {\bf H}_{II}\\ \Big\downarrow & & \Big\downarrow\\ {\bf V}' & \longrightarrow & {\bf W}' \end{array} \] induces an isomorphism between the kernel cones of the horizontal bundle morphisms. 
\end{prop} \noindent Proof of proposition \ref{prop; thesame}: The vertical arrows are known to be bundle surjections by our construction. The kernels of both the vertical bundle maps of the columns are isomorphic to ${\bf W}_{\underline{C}-e_{II}}$. Let ${\bf C}_{new}$ and $ {\bf C}'$ be the kernel sub-cones of ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ and of ${\bf V}'\mapsto {\bf W}'$, respectively. We prove that ${\bf C}_{new}\mapsto {\bf C}'$ induced from ${\bf W}_{new}\mapsto {\bf V}'$ is an isomorphism of abelian cones; i.e., it suffices to show that for all closed $b\in B$ the fibers of the cones are isomorphic under ${\bf W}_{new}|_b\mapsto {\bf V}'|_b$. Because the restriction of ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ to the sub-bundle ${\bf W}_{\underline{C}-e_{II}}$ is injective, ${\bf C}_{new} \cap {\bf W}_{\underline{C}-e_{II}}$ is embedded as the zero section sub-cone of ${\bf W}_{new}$. By the above commutative diagram it is clear that ${\bf C}_{new}$ is mapped into ${\bf C}'$. On the other hand, for a vector $t\in {\bf C}'|_b$, an arbitrary lifting of $t$, $\tilde{t}\in {\bf W}_{new}|_b$, may or may not lie in the kernel space above $b$, ${\bf C}_{new}|_b$. But the image of $t\in {\bf C}'|_b$ in ${\bf W}'|_b$ is zero. So the image of $\tilde{t}$ in ${\bf W}_{\underline{C}}\otimes {\bf H}_{II}|_b$ has to map trivially into ${\bf W}'|_b$. So one may find a unique element $w(t)$ in ${\bf W}_{\underline{C}-e_{II}}|_b$ which maps onto the image of $\tilde{t}$ in ${\bf W}_{\underline{C}}\otimes {\bf H}_{II}|_b$. Because ${\bf W}_{\underline{C}-e_{II}}\subset {\bf W}_{new}$, $w(t)$ can be viewed as an element in ${\bf W}_{new}|_b$ as well, and then $\tilde{t}-w(t)$ is mapped trivially into ${\bf W}_{\underline{C}}\otimes {\bf H}_{II}|_b$. So $\tilde{t}-w(t)\in {\bf C}_{new}|_b$. 
We have shown that every element $t\in {\bf C}'|_b$ can be lifted uniquely to an element in ${\bf C}_{new}|_b$, and this establishes the bijection for all closed points $b\in B$. $\Box$ \begin{rem}\label{rem; many} In the above discussion we have hardly used any special property of $e_{II}$. Suppose that there are $p$ distinct type $II$ exceptional classes $e_{II; 1}$, $e_{II; 2}$, $\cdots$, $e_{II; p}$ over ${\cal X}\mapsto B$, with $e_{II; i}\cdot e_{II; j}\geq 0$ for $i\not=j$. If we replace ${\cal M}_{e_{II}}$ by the co-existence locus \footnote{ It will be defined and discussed in more detail in subsection \ref{subsection; stable}.} of the type $II$ classes, $\times_B^{i\leq p} {\cal M}_{e_{II; i}}$, and replace the single universal curve ${\bf e}_{II}$ by the total sum $\sum_{i\leq p} {\bf e}_{II; i}$ of all the universal curves, the above discussion generalizes easily to the cases involving more than one type $II$ class. \end{rem} \begin{rem}\label{rem; analogue} The above discussion and remark \ref{rem; many} imply that ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ is the analogue of ${\bf W}_{canon}^{\circ}\mapsto {\bf W}_{canon}$ in comparing the family Kuranishi models of $C-{\bf M}(E)E-\sum e_{k_i}$ and $C-{\bf M}(E)E$ involving type $I$ exceptional classes. The main difference is that in the type $II$ case they are not canonical. \end{rem} Recall that in [Liu5], [Liu6], we have discussed the relationship of the canonical algebraic family Kuranishi models of $C-{\bf M}(E)E$ and $C-{\bf M}(E)E-\sum_{i\leq p} e_{k_i}$ over $Y(\Gamma)\times T(M)$, where $e_{k_i}\cdot (C-{\bf M}(E)E)<0$ for $1\leq i\leq p$. Let $\Phi_{{\bf V}_{canon}^{\circ}{\bf W}_{canon}^{\circ}}: {\bf V}_{canon}^{\circ}\mapsto {\bf W}_{canon}^{\circ}$ and $\Phi_{{\bf V}_{canon}{\bf W}_{canon}}: {\bf V}_{canon}\mapsto {\bf W}_{canon}$ be the canonical algebraic family Kuranishi models of $C-{\bf M}(E)E$ and $C-{\bf M}(E)E-\sum e_{k_i}$, respectively. 
Please consult lemma 6 of [Liu5] for their definitions. Then we have ${\bf V}_{canon}^{\circ}={\bf V}_{canon}$, and over $Y(\Gamma)\times T(M)$ there is a bundle map ${\bf W}_{canon}^{\circ}|_{Y(\Gamma)\times T(M)} \mapsto {\bf W}_{canon}|_{Y(\Gamma)\times T(M)}$ parallel to the sheaf sequence below, $$\hskip -1.2in {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\sum \Xi_{k_i}}\otimes {\cal E}_{C-{\bf M}(E)E}\bigr)\mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf M}(E)E+\sum \Xi_{k_i}}\otimes {\cal E}_C \bigr)\mapsto {\cal R}^0\pi_{\ast}\bigl( {\cal O}_{{\bf M}(E)E}\otimes {\cal E}_C\bigr)\mapsto {\cal R}^1\pi_{\ast}\bigl({\cal O}_{\sum \Xi_{k_i}}\otimes {\cal E}_{C-{\bf M}(E)E}\bigr),$$ for the sum of the type $I$ universal curves $\Xi_{k_i}\mapsto Y(\Gamma)$. It is not hard to establish the following correspondence, \noindent $\bullet$ ${\bf V}_{canon}^{\circ}={\bf V}_{canon}$ corresponds to ${\bf V}_{\underline{C}}\otimes {\bf H}_{II}= {\bf V}_{\underline{C}}\otimes {\bf H}_{II}$. \noindent $\bullet$ ${\bf W}_{canon}^{\circ}\mapsto {\bf W}_{canon}$ corresponds to ${\bf W}_{new}\mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$. \begin{rem}\label{rem; independent} For the purpose of defining the virtual fundamental classes of ${\cal M}_{\underline{C}}$, it is easy to see that the twisting operation from ${\bf V}_{\underline{C}}\mapsto {\bf W}_{\underline{C}}$ to ${\bf V}_{\underline{C}}\otimes {\bf H}_{II} \mapsto {\bf W}_{\underline{C}}\otimes {\bf H}_{II}$ does not affect the virtual fundamental class $[{\cal M}_{\underline{C}}]_{vir}$ \footnote{The line bundle ${\bf H}_{II}$ does not show up in the theory of the type $I$ exceptional classes, since for type $I$ classes we use the canonical algebraic family Kuranishi models of $e_{k_i}$ and ${\bf H}_{II}$ reduces to the trivial line bundle over ${\cal M}_{e_{k_i}}\cong Y(\Gamma_{e_{k_i}})$.} of ${\cal M}_{\underline{C}}$. 
\end{rem} \subsection{The Vector Bundle ${\bf W}'$ and the Type $II$ Class $e_{II}$ }\label{subsection; bundle} In the previous subsection, subsection \ref{subsection; type II}, we defined ${\bf W}'$ to be the vector bundle associated with ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}(nD)\otimes {\cal E}_{\underline{C}}\bigr)\otimes {\cal H}_{II}$; it is a quotient bundle of ${\bf W}_{\underline{C}}\otimes {\bf H}_{II}$. In this subsection, we relate ${\bf W}'$ to the virtual fundamental class of the type $II$ class $e_{II}$. The discussion extends to more than one $e_{II; i}$ in a parallel manner, once we substitute $e_{II}$ and ${\bf e}_{II}$ by $e_{II; i}$ and $\sum {\bf e}_{II; i}$, respectively. The general version will be addressed in subsection \ref{subsection; virtual} later. First we prove a simple lemma. \begin{lemm}\label{lemm; factor} Let ${\bf W}_{II}'\mapsto {\cal M}_{e_{II}}$ denote the vector bundle associated with the locally free sheaf ${\cal R}^0\pi_{\ast}\bigl( {\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+nD)\bigr)\otimes {\cal H}_{II}$. Then we have the equality of ranks $rank_{\bf C}{\bf W}_{II}'=rank_{\bf C}{\bf W}'$, and the total Chern class of ${\bf W}'$, $c_{total}({\bf W}')$, can be identified with $c_{total}({\bf W}_{II}')+\eta$, where $\eta$ is a polynomial in the ``${\bf e}_{II}\mapsto {\cal M}_{e_{II}}$'' push-forwards of monomials in the variables $c_1({\cal E}_{\underline{C}-e_{II}})$, $e_{II}$ and $nD$. \end{lemm} \noindent Proof: To determine the ranks of ${\bf W}_{II}'$ and ${\bf W}'$, it suffices to calculate them at a closed point $b\in {\cal M}_{e_{II}}$. Because $nD$ is very ample in ${\cal X}$, we can replace $nD$ by a linearly equivalent very ample divisor which intersects the curve ${\bf e}_{II}|_b$ at a finite number of points. 
It is easy to see by the base-change theorem that the ranks of both ${\bf W}_{II}'$ and ${\bf W}'$ are equal to $\int_{\cal X}{\bf e}_{II}|_b\cap nD\in {\bf N}$. Thus they must be equal. Because the higher derived images vanish, both ${\cal R}^0\pi_{\ast}({\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+nD))=\pi_{\ast} ({\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+nD))$ and ${\cal R}^0\pi_{\ast}({\cal O}_{{\bf e}_{II}\cap nD}(\underline{C}+nD))=\pi_{\ast} ({\cal O}_{{\bf e}_{II}\cap nD}(\underline{C}+nD))$, and their Chern characters can be computed by the Grothendieck-Riemann-Roch formula (see chapters 15 and 18 of [F]). Since ${\cal O}_{{\bf e}_{II}\cap nD}(\underline{C}+nD)$ can be constructed from ${\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+nD)$ by twisting by ${\cal E}_{\underline{C}-e_{II}}$, the conclusion follows from the Grothendieck-Riemann-Roch formula and the fact that the total Chern class can be expressed algebraically in terms of the Chern character. $\Box$ Next we consider the following short exact commutative diagram\footnote{We pull back ${\cal X}\mapsto B$ by the mapping ${\cal M}_{e_{II}}\mapsto B$ and abbreviate ${\cal M}_{e_{II}}\times_B{\cal X}$ by the same notation ${\cal X}$.}, \[ \hskip -.1in \begin{array}{ccccc} {\cal O}_{\cal X} & \mapsto & {\cal O}_{\cal X}(nD) & \mapsto& {\cal O}_{nD}(nD)\\ \Big\downarrow & &\Big\downarrow & & \Big\downarrow\\ {\cal O}_{\cal X}({\bf e}_{II})\otimes {\cal H}_{II} &\mapsto & {\cal O}_{\cal X}({\bf e}_{II}+nD) \otimes {\cal H}_{II} & \mapsto & {\cal O}_{nD}({\bf e}_{II}+nD)\otimes {\cal H}_{II}\\ \Big\downarrow& & \Big\downarrow& & \Big\downarrow\\ {\cal O}_{{\bf e}_{II}}({\bf e}_{II})\otimes {\cal H}_{II} & \mapsto& {\cal O}_{{\bf e}_{II}}({\bf e}_{II}+nD) \otimes {\cal H}_{II}& \mapsto & {\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+ nD)\otimes {\cal H}_{II} \end{array} \] This diagram is constructed from the twisted versions of the defining short exact sequences of the form $0\mapsto {\cal O}\mapsto {\cal O}({\bf 
D})\mapsto {\cal O}_{\bf D}({\bf D})\mapsto 0$ with ${\bf D}={\bf e}_{II}$, ${\bf e}_{II}$ and ${\bf e}_{II}|_{nD}$ for the columns and ${\bf D}=nD$, $nD$ and $nD|_{{\bf e}_{II}}$ for the rows, respectively. By pushing these exact sequences forward along (the suitable restriction of) $\pi:{\cal M}_{e_{II}}\times_B {\cal X}\mapsto {\cal M}_{e_{II}}$, we get the following commutative diagram of short exact sequences, \[ \hskip -.3in \begin{array}{ccccc} {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr) & \mapsto & {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}(nD)\bigr)&\mapsto & {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\bigr)\\ \Big\downarrow & & \Big\downarrow& & \Big\downarrow\\ {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}({\bf e}_{II})\bigr)\otimes {\cal H}_{II} & \mapsto & {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}({\bf e}_{II}+nD)\bigr)\otimes {\cal H}_{II} & \mapsto & {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}({\bf e}_{II}+nD)\bigr)\otimes {\cal H}_{II}\\ \Big\downarrow & & \Big\downarrow& & \Big\downarrow\\ {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}({\bf e}_{II})\bigr)\otimes {\cal H}_{II} & \mapsto& {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}({\bf e}_{II}+nD)\bigr) \otimes {\cal H}_{II} & \mapsto & {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+nD)\bigr) \otimes {\cal H}_{II} \end{array} \] As usual when we assume $e_{II}-c_1({\bf K}_{{\cal X}/B})$ is nef, the second derived image sheaf ${\cal R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}({\bf e}_{II})\bigr)$ vanishes and we have the following sheaf surjection, $$\hskip -.2in {\cal R}^1\pi_{\ast}\bigl({\cal O}_{\cal X}({\bf e}_{II}) \bigr)\otimes {\cal H}_{II} \mapsto {\cal R}^1\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}({\bf e}_{II}) \bigr)\otimes {\cal H}_{II} \mapsto {\cal R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr)\mapsto 0.$$ And this implies that ${\cal R}^1\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}({\bf e}_{II}))\otimes {\cal H}_{II}$ is mapped onto the locally free quotient sheaf 
${\cal R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr)$ of rank $p_g$. On the other hand, we have the isomorphism ${\cal R}^1\pi_{\ast}\bigl({\cal O}_{nD}(nD)\bigr)\cong {\cal R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr)$, due to the vanishing of ${\cal R}^i\pi_{\ast} \bigl({\cal O}_{\cal X}(nD)\bigr)$ for $i>0$ and $n\gg 0$. Thus we have the following commutative diagram of sheaves\footnote{These two diagrams have overlapping blocks.}, \[ \hskip -.9in \begin{array}{ccccccc} {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\cal X}({\bf e}_{II}+nD)\bigr) \otimes {\cal H}_{II} & \mapsto & {\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}({\bf e}_{II}+nD)\bigr) \otimes {\cal H}_{II} &\mapsto & {\cal R}^1\pi_{\ast}\bigl({\cal O}_{\cal X}({\bf e}_{II})\bigr) \otimes {\cal H}_{II}& \mapsto & 0\\ \Big\downarrow & &\Big\downarrow & & \Big\downarrow & & \\ {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}({\bf e}_{II}+nD)\bigr) \otimes {\cal H}_{II} & \mapsto& {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+nD)\bigr) \otimes {\cal H}_{II}& \mapsto& {\cal R}^1\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}({\bf e}_{II})\bigr) \otimes {\cal H}_{II}& \mapsto & 0\\ \Big\downarrow & & \Big\downarrow& & \Big\downarrow & & \\ 0 &\mapsto & {\cal R}^1\pi_{\ast}\bigl({\cal O}_{nD}(nD)\bigr) & \stackrel{\cong}{\mapsto} & {\cal R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr) & \mapsto & 0\\ & & \Big\downarrow & & \Big\downarrow & & \\ & & 0 & & 0 & & \end{array} \] Notice that ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}({\bf e}_{II}+nD)\bigr)$ in the first row is nothing but the ${\bf W}_{e_{II}}$ in the datum of algebraic family Kuranishi model of $e_{II}$. The second row is a part of a four-term exact sequence regarding the fiberwise infinitesimal deformations and obstructions of ${\bf e}_{II}$. 
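For rank bookkeeping, we record an elementary fact (a routine consequence of exactness, not stated explicitly in the source): in any exact sequence of vector bundles $0\mapsto A\mapsto B\mapsto C\mapsto D\mapsto 0$, the alternating sum of the ranks vanishes. Applied to the sequence spliced from the rows of the diagram above, with $rank_{\bf C}\,{\cal R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr)=p_g$, this gives
$$rank_{\bf C}\,{\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\bigr)+rank_{\bf C}\,{\bf W}_{II}'= rank_{\bf C}\,\bigl({\bf W}_{e_{II}}\otimes {\bf H}_{II}\bigr)+p_g,$$
which accounts for the degree shift from $ed$ to $ed-p_g$ appearing in proposition \ref{prop; local} below.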
The above observation indicates that there exists a $4$-term exact sequence of obstruction vector bundles $$\hskip -.7in 0\mapsto {\bf R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\bigr)\mapsto {\bf R}^0\pi_{\ast}\bigl({\cal O}_{nD}({\bf e}_{II}+nD)\bigr) \otimes {\bf H}_{II} \mapsto {\bf R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II} +nD)\bigr)\otimes {\bf H}_{II} \mapsto {\bf R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr)\mapsto 0,$$ relating the algebraic family obstruction bundle ${\bf W}_{e_{II}}$ of $e_{II}$ and the vector bundle ${\bf R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD}({\bf e}_{II}+nD)\bigr) \otimes {\bf H}_{II}$, which maps onto the infinitesimal obstructions ${\bf R}^1\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}}({\bf e}_{II})\bigr)$. Their ranks differ by the geometric genus $p_g$ of the fiber algebraic surfaces of ${\cal X}\mapsto B$. As before let $ed$ denote the expected algebraic family Seiberg-Witten dimension of the type $II$ class $e_{II}$, $ed= dim_{\bf C}B+p_g+{e_{II}^2-e_{II}\cdot c_1({\bf K}_{{\cal X}/B}) \over 2}$. The following proposition shows that the virtual fundamental class of ${\cal M}_{e_{II}}$, $[{\cal M}_{e_{II}}]_{vir}$, appears naturally within the localized contribution of the top Chern class of the bundle ${\bf R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD) \bigr)\otimes {\bf H}_{II}\oplus {\bf W}'$ along ${\cal M}_{e_{II}}$. It will play an essential role in our residual intersection theory approach in subsection \ref{subsection; stable}. \begin{prop}\label{prop; local} Let ${\bf W}_{II}'$ and ${\bf W}'$ be the vector bundles associated with the locally free sheaves ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD} ({\bf e}_{II}+nD)\bigr)\otimes {\cal H}_{II}$ and ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II}\cap nD} (\underline{C}+nD)\bigr)\otimes {\cal H}_{II}$ over ${\cal M}_{e_{II}}$, respectively. 
Then the localized contribution of the top Chern class $$\hskip -.4in \{c_{total}({\bf R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD) \bigr)\otimes {\cal H}_{II}\oplus {\bf W}')\cap s({\cal M}_{e_{II}}, {\bf P}_B({\bf V}_{e_{II}}))\}_{ed-p_g}= [{\cal M}_{e_{II}}]_{vir}\cap c_{p_g}({\cal R}^2\pi_{\ast}\bigl({\cal O}_{\cal X}\bigr))+\tilde{\eta}.$$ Over here $\tilde{\eta}$ is a cycle class which is a polynomial in the push-forwards of monomials in $\underline{C}-e_{II}$, ${\bf e}_{II}$ and $nD$ along ${\bf e}_{II}\mapsto {\cal M}_{e_{II}}$. \end{prop} \noindent Proof of proposition \ref{prop; local}: We first recall that $[{\cal M}_{e_{II}}]_{vir}= \{c_{total}({\bf W}_{e_{II}}\otimes {\bf H}_{II}|_{{\cal M}_{e_{II}}})\cap s_{total}({\cal M}_{e_{II}}, {\bf P}({\bf V}_{e_{II}}))\}_{ed}\in {\cal A}_{\cdot}({\cal M}_{e_{II}})$ is the localized top Chern class of $\pi_{{\bf P}({\bf V}_{e_{II}})}^{\ast} {\bf W}_{e_{II}}\otimes {\bf H}_{II}$. On the other hand, we have observed from the above discussion that $c_{total}({\bf W}_{II}')$ is the product of $c_{total}({\bf W}_{e_{II}}\otimes {\bf H}_{II})$ and $c_{total}({\bf R}^2\pi_{\ast}{\cal O}_{\cal X})$. 
So by capping with $s_{total}({\cal M}_{e_{II}}, {\bf P}({\bf V}_{e_{II}}))$ and by taking the degree $ed-p_g$ term, we find $$ \{c_{total}(\bigl({\bf R}^0\pi_{\ast}{\cal O}_{nD}(nD)\otimes {\bf H}_{II}\oplus {\bf W}_{II}'\bigr)|_{{\cal M}_{e_{II}}})\cap s_{total}({\cal M}_{e_{II}}, {\bf P}({\bf V}_{e_{II}}))\}_{ed-p_g}$$ $$=\{c_{total}({\bf W}_{e_{II}}\otimes {\bf H}_{II}|_{{\cal M}_{e_{II}}}) \cap c_{total}({\bf R}^2\pi_{\ast}{\cal O}_{\cal X}) \cap s_{total}({\cal M}_{e_{II}}, {\bf P}({\bf V}_{e_{II}}))\}_{ed-p_g}$$ $$=[{\cal M}_{e_{II}}]_{vir}\cap c_{p_g}({\bf R}^2\pi_{\ast}{\cal O}_{\cal X}),$$ by using the crucial property that $\{c_{total}({\bf W}_{e_{II}}\otimes {\bf H}_{II} |_{{\cal M}_{e_{II}}})\cap s_{total}({\cal M}_{e_{II}}, {\bf P}({\bf V}_{e_{II}}))\}_{ed-k}=0$ and $c_{p_g+k}({\bf R}^2\pi_{\ast}{\cal O}_{\cal X})=0$ for all $k>0$. Then the equality of our proposition follows from applying lemma \ref{lemm; factor}. $\Box$ \begin{rem} If the formal excess base dimension $febd(e_{II}, {\cal X}/B)=0$, then the expected dimension is $ed=dim_{\bf C}B+{e_{II}^2-e_{II}\cdot c_1({\bf K}_{{\cal X}/B})\over 2}$. Then the identity in the above proposition should be replaced by $$\hskip -.4in \{c_{total}({\bf R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD) \bigr)\otimes {\cal H}_{II}\oplus {\bf W}')\cap s({\cal M}_{e_{II}}, {\bf P}_B({\bf V}_{e_{II}}))\}_{ed}= [{\cal M}_{e_{II}}]_{vir}+\tilde{\eta}.$$ \end{rem} \section{The Blowup Construction of Algebraic Family Seiberg-Witten Invariants}\label{section; blowup} In this section we discuss the blowup construction of the family Seiberg-Witten invariant ${\cal AFSW}_{{\cal X}\mapsto B}(1, \underline{C})$ with respect to a finite collection of type $II$ exceptional classes $e_{II; 1}$, $e_{II; 2}$, $\cdots$, $e_{II; p}$, generalizing the blowup and residual intersection theory construction for type $I$ exceptional classes in [Liu6]. 
One major difference between the theories of type $I$ and type $II$ exceptional classes is that for a type $II$ exceptional class $e_{II}$, the family moduli space of $e_{II}$, ${\cal M}_{e_{II}}=Z(s_{II})$ may not be regular and the cycle class $[{\cal M}_{e_{II}}]_{vir}={\bf Z}(s_{II})$ is typically not equal to $[Z(s_{II})]$. Actually for the canonical algebraic family Kuranishi model of a type $I$ exceptional class $e_i$ we \footnote{see section 6.2. of [Liu5].} have ${\bf V}_{e_i}\cong {\bf C}$, the constant line bundle over $M_n$ and ${\bf P}({\bf C})\cong M_n$. Moreover, the family moduli space of $e_i$, the existence locus of $e_i$ over $M_n$, can be identified with the closure of the admissible stratum $Y(\Gamma_{e_i})$ of the fan-like admissible graph $\Gamma_{e_i}$ (see the graph on page \pageref{fanlike}). As $Y(\Gamma_{e_i})$ is smooth of the expected dimension $dim_{\bf C}M_n+(e_i^2+1)$, $[Y(\Gamma_{e_i})]$ represents the fundamental class of the family moduli space ${\cal M}_{e_i}$. Let ${\cal M}_{e_{II; i}}$ be the family moduli space of $e_{II; i}$ and let $\pi_i:{\cal M}_{e_{II; i}}\mapsto B$ be the canonical projection into $B$. Over the locus $\pi_i({\cal M}_{e_{II; i}})\subset B$ the class $e_{II; i}$ becomes effective and over their intersection $\cap_{1\leq i\leq p}\pi_i({\cal M}_{e_{II; i}})\subset B$ all the type $II$ exceptional classes $e_{II; i}$ become effective simultaneously. \begin{defin}\label{defin; intersect} Define $\cap_{1\leq i\leq p}\pi_i({\cal M}_{e_{II; i}})\subset B$ to be the locus of co-existence of $e_{II; 1}$, $e_{II; 2}$, $\cdots$, $e_{II; p}$. Define ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}= \times_B^{1\leq i\leq p}{\cal M}_{e_{II; i}}$ to be the moduli space of co-existence of the classes $e_{II; 1}$, $\cdots, e_{II; p}$. 
\end{defin} Ideally we may expect ${\cal M}_{e_{II; i}}$ to be smooth of the expected\footnote{Assuming $febd(e_{II; i}, {\cal X}/B)=p_g$.} dimension $dim_{\bf C}B+p_g+{e_{II; i}^2-c_1({\bf K}_{{\cal X}/B})\cdot e_{II; i}\over 2}$ and that there exists a Zariski open and dense subset of ${\cal M}_{e_{II; i}}$, called the ``interior'' of ${\cal M}_{e_{II; i}}$, parametrizing the irreducible curves representing $e_{II; i}$. Then ${\cal M}_{e_{II; i}}$ can be viewed as the natural compactification of its open and dense ``interior''. Under this idealistic assumption, we ``expect'' that there exists a Zariski-dense open subset of $\times_B^{1\leq i\leq p}{\cal M}_{e_{II; i}}$ which parametrizes tuples of irreducible universal type $II$ curves ${\bf e}_{II; i}$, $1\leq i\leq p$. In the real world the individual ${\cal M}_{e_{II; i}}$ may not be smooth, and the intersection $\cap_{1\leq i\leq p}\pi_i({\cal M}_{e_{II; i}})$ or the fiber product ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}= \times_B^{1\leq i\leq p}{\cal M}_{e_{II; i}}$ is seldom regular. The basic philosophy of family Gromov-Taubes theory is to replace the objects ${\cal M}_{e_{II; i}}$ or ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ by the appropriate virtual fundamental classes and interpret the enumeration of the invariants in terms of intersection theory [F]. Thanks to the fact that all the ${\cal M}_{e_{II; i}}$ are compact, no complicated gluing construction is ever needed. Because of the numerical condition $e_{II; i}\cdot \underline{C}<0$ we impose on $\underline{C}$ and $e_{II; i}$, any effective representative of $\underline{C}$ over the ``interior'' of $\cap_{1\leq i\leq p}\pi_i({\cal M}_{e_{II; i}})$ has to break off certain multiples of curves representing $e_{II; i}$, for each $1\leq i\leq p$. So we may write $\underline{C}=(\underline{C}-\sum e_{II; i})+\sum e_{II; i}$ formally.
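The breaking-off mechanism is the standard intersection-number argument, which we recall for the reader's convenience in the simplified situation that $e_{II; i}$ is represented by an irreducible curve ${\bf E}_i$ in a fiber: if an effective divisor $D'$ representing $\underline{C}$ in the same fiber does not contain ${\bf E}_i$ as a component, then
$$D'\cdot {\bf E}_i\geq 0,$$
contradicting $D'\cdot {\bf E}_i=\underline{C}\cdot e_{II; i}<0$. Hence ${\bf E}_i$ is a component of $D'$, and iterating the argument on $D'-{\bf E}_i$ one breaks off multiples of ${\bf E}_i$ until the residual class pairs non-negatively with $e_{II; i}$.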
Thus we should be able to attach a family invariant of $\underline{C}-\sum_{1\leq i\leq p}e_{II; i}$ to (the virtual fundamental class of) $\times_B^{i\leq p}{\cal M}_{e_{II; i}}$, using the geometric information of the $e_{II; i}$, $1\leq i\leq p$, and express the localized contribution unambiguously. We now consider the following general question. \noindent {\bf Question}: Let $\underline{C}$ be an effective curve class over ${\cal X}\mapsto B$ and let $e_{II; 1}, e_{II; 2}, e_{II; 3}, \cdots, e_{II; p}$ be $p$ distinct type $II$ exceptional classes over ${\cal X}\mapsto B$ such that $e_{II; i}\cdot \underline{C}<0$ for all $1\leq i\leq p$, while $e_{II; i}\cdot e_{II; j}\geq 0$ for $i\not=j$. What is the algebraic family Seiberg-Witten invariant attached to the moduli space of co-existence of $e_{II; i}$, $1\leq i\leq p$, $\times_B^{i\leq p}{\cal M}_{e_{II; i}}$? And what is the residual contribution of the algebraic family invariant of $\underline{C}$ away from this moduli space of co-existence of $e_{II; i}$? The resolution of the type $I$ analogue of the above question has been the backbone of the proof of the ``universality theorem'' [Liu6]. Conceptually the residual contribution of the family invariant represents the contributions to the family invariants from curves in $\underline{C}$ within the family ${\cal X}\mapsto B$ which are {\bf NOT} decomposed into a union of curves representing $\underline{C}-\sum_{1\leq i\leq p}e_{II; i}$ and $\sum_{1\leq i\leq p}e_{II; i}$, respectively. There are a few important guidelines that we impose, based on the type $I$ theory developed algebraically in [Liu6].
\noindent {\bf Guideline 1}: We require the localized (excess) contribution of the family invariant of $\underline{C}$ along $\times_B^{1\leq i\leq p}{\cal M}_{e_{II; i}}$ to be proportional to the virtual fundamental class of ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}= \times_B^{1\leq i\leq p}{\cal M}_{e_{II; i}}$ and to the virtual fundamental class of ${\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}$. In particular, when either $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$ or $[{\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}]_{vir}$ vanishes, we require the desired localized contribution to vanish as well. \label{guide} \noindent {\bf Guideline 2}: Because type $II$ exceptional classes can behave badly in comparison with their type $I$ siblings, the process of identifying the family invariant attached to $\underline{C}-\sum_{1\leq i\leq p}e_{II; i}$ may be more delicate than in the theory of type $I$ exceptional classes. But we expect that the resulting family invariant (see theorem \ref{theo; degenerate} for details) can be reduced to the modified algebraic family Seiberg-Witten \footnote{Consult definitions 13 and 14 of [Liu6] for details.} invariant ${\cal AFSW}_{M_{n+1}\times T(M)\mapsto M_n\times T(M)}(c_{total}(\tau_{\Gamma}), C-{\bf M}(E)E- \sum_{e_i\cdot (C-{\bf M}(E)E)<0}e_i)$ when $\underline{C}=C-{\bf M}(E)E$ and the $e_{II; i}$ are reduced\footnote{The type $II$ curves satisfying the condition $febd(e_{II; i}, {\cal X}/B)=p_g$ have dimension formulae different from the type $I$ curves'. This $p_g$ dimension shift introduces an additional $c_{p_g}({\cal R}^2\pi_{\ast} {\cal O}_{\cal X})$ insertion for each type $II$ class.} to some collections of the type $I$ classes of the universal family $M_{n+1}\mapsto M_n$.
\noindent {\bf Guideline 3}: The localized (excess) contribution of the family invariant of $\underline{C}$ along $\times_B^{i\leq p} {\cal M}_{e_{II; i}}$ has to be independent of the algebraic family Kuranishi models chosen for $\underline{C}$, $e_{II; i}$, $1\leq i\leq p$, etc. In particular, it is independent of $n\gg 0$, the very ample divisor $D\subset {\cal X}$, etc., chosen to define the Kuranishi models. \label{guide4} \noindent {\bf Guideline 4}: The construction we will provide should enable us to generalize to an inductive scheme involving more than one single collection of type $II$ exceptional classes. Because exceptional curves can break up and degenerate within a given family, instead of considering only a single collection of exceptional curves our scheme should work for a whole hierarchy of them. These few guidelines determine the localized (excess) contribution of the family invariant uniquely, as will be shown in theorem \ref{theo; degenerate}. In the following subsection, some basic facts from intersection theory [F] are recalled before we move on to the main theorem of the paper. \subsubsection{The Normal Cones and the Fiber Products}\label{subsubsection; cone} Let $X_i\mapsto B$, $1\leq i\leq p$, be $p$ purely $dim_{\bf C}X_i$ dimensional schemes over a smooth variety $B$ and let $Y_i\subset X_i$, $1\leq i\leq p$, be the closed sub-schemes of $X_i$ defined by the zero loci of sections $s_i:X_i\mapsto E_i$ of vector bundles $E_i\mapsto X_i$. Consider the fiber products $\times_{B; i\leq p} Y_i\subset \times_{B; i\leq p}X_i$. In this subsection we review the refined intersection product on $\times_{B; i\leq p} Y_i$, based on Fulton's general construction [F]. Recall that on page 132, definition 8.1.1. of [F], a refined product $x\cdot_f y$ is defined. Let $f:X\mapsto Y$ be a morphism with $Y$ non-singular and let $p_X:X'\mapsto X$ and $p_Y:Y'\mapsto Y$ be morphisms of schemes. Let $x\in {\cal A}_{\cdot}(X')$, $y\in {\cal A}_{\cdot}(Y')$.
Then we can define $\gamma_f:X\mapsto X\times Y$ by $\gamma_{f}(t)=(t, f(t))$ and we have the following commutative diagram, \[ \begin{array}{ccc} X'\times_YY' & \mapsto & X'\times Y'\\ \Big\downarrow & & \Big\downarrow \\ X& \stackrel{\gamma_{f}}{\mapsto} & X\times Y \end{array} \] \begin{defin} \label{defin; fiber} Define $x\cdot_f y=\gamma_f^{!}(x\times y)$. \end{defin} Please consult page 132, proposition 8.1.1. of [F] for all the basic properties of $\cdot_f$, including its associativity and commutativity, etc. As usual, we take $[Y_i]_{vir}={\bf Z}(s_i)$. Our goal is to determine the virtual fundamental class of $\times_B^{i\leq p}Y_i$. \begin{defin}\label{defin; vir} Define $[\times_B^{1\leq i\leq p}Y_i]_{vir}$ to be the Gysin pull-back $\Delta^{!}({\bf Z}(\oplus s_i))$, by the diagonal morphism $\Delta:B\mapsto B^p$, of the localized top Chern class of $\oplus E_i\mapsto \times_{i\leq p} X_i$. \end{defin} Based on mathematical induction, we may reduce to the $p=2$ case and prove the following proposition. \begin{prop}\label{prop; product} Let $Y_1\subset X_1\mapsto B$ and $Y_2\subset X_2\mapsto B$ be closed sub-schemes over $B$ defined as the zero loci of sections of the vector bundles $E_i\mapsto X_i$. Then the virtual fundamental class of co-existence $[Y_1\times_B Y_2]_{vir}$ defined in definition \ref{defin; vir} is equal to $[Y_1]_{vir}\cdot_{id_B} [Y_2]_{vir}$. \end{prop} \noindent Proof of proposition \ref{prop; product}: First consider the Cartesian product $Y_1\times Y_2\subset X_1\times X_2$. It is clear that $C_{Y_1\times Y_2}(X_1\times X_2)=C_{Y_1}X_1\times C_{Y_2}X_2$. The Cartesian products project naturally into $B\times B$ and the fiber product $Y_1\times_B Y_2$ or $X_1\times_B X_2$ can be viewed as the pull-back through $\Delta:B\mapsto B\times B$ of $Y_1\times Y_2\mapsto B\times B$ or $X_1\times X_2\mapsto B\times B$.
The virtual fundamental class of $Y_1\times Y_2$, $[Y_1\times Y_2]_{vir}$, is $$\{c_{total}(E_1\oplus E_2|_{Y_1\times Y_2})\cap s_{total} (C_{Y_1\times Y_2}(X_1\times X_2))\}_{\sum dim_{\bf C}X_i-\sum rank_{\bf C}E_i}.$$ We know that $C_{Y_1\times Y_2}(X_1\times X_2)=C_{Y_1}X_1\times C_{Y_2}X_2$. From the following lemma, we can compute its total Segre class. \begin{lemm}\label{lemm; pro} Let $p_1:Y_1\times Y_2\mapsto Y_1$ and $p_2:Y_1\times Y_2\mapsto Y_2$ be the natural projections. Then the projections induce cones $p_1^{\ast}C_{Y_2}X_2$, $p_2^{\ast}C_{Y_1}X_1$ over $Y_1\times Y_2$, the normal cones of $Y_1\times Y_2\subset Y_1\times X_2$ and of $Y_1\times Y_2\subset X_1\times Y_2$. We have the following identities on the total Segre classes, $$s_{total}(C_{Y_1}X_1\times C_{Y_2}X_2)=s_{total}(p_2^{\ast}C_{Y_1}X_1)\cap s_{total}(p_1^{\ast}C_{Y_2}X_2)=s_{total}(C_{Y_1}X_1)\times s_{total}(C_{Y_2}X_2).$$ \end{lemm} The lemma is a generalization of the Whitney sum formula of vector bundles to normal cones; indeed, when the $Y_i\subset X_i$ are regularly embedded the cones are the normal bundles and the identity reduces to the Whitney formula for their Segre classes. For completeness, we offer a simple proof here. \noindent Proof of lemma \ref{lemm; pro}: If either $Y_1=X_1$ or $Y_2=X_2$, the above formula is a trivial identity. Let us assume $Y_i\not= X_i$ for $1\leq i\leq 2$. We blow up $X_1, X_2$ along $Y_1, Y_2$, respectively, and denote the resulting schemes by $\tilde{X}_1$, $\tilde{X}_2$. Let $D_1, D_2$ denote the resulting exceptional divisors. Then we may blow up $\tilde{X}_1\times \tilde{X}_2$ along the codimension-two sub-scheme $D_1\times D_2$ and denote the resulting scheme by $\tilde{X}_3$, with exceptional divisor $D_3$. On the other hand, we may blow up $X_1\times X_2$ along $Y_1\times Y_2$ directly and denote the resulting scheme by $\widetilde{X_1\times X_2}$ and the exceptional divisor by $D$. Consider the induced morphism $\tilde{X}_3\mapsto X_1\times X_2$, which maps $D_3$ onto $Y_1\times Y_2$.
By the universal property of scheme-theoretic blowing up (proposition II.7.14 of [Ha]), the above map factors through $\tilde{X}_3\mapsto \widetilde{X_1\times X_2}$. We have the following commutative diagram, \[ \begin{array}{ccc} \tilde{X}_3 & \mapsto & \tilde{X}_1\times \tilde{X}_2\\ \Big\downarrow & & \Big\downarrow \\ \widetilde{X_1\times X_2} & \mapsto & X_1\times X_2 \\ \end{array} \] By using this commutative diagram, we may push forward the total Segre class of $C_{D_3}\tilde{X}_3$, which is equal to $\sum_{i\geq 0} c_1({\cal O}(-D_3))^i\cap [D_3]$, to $X_1\times X_2$ along the two paths, and the results must match. By using the birational invariance of the total Segre classes under proper birational push-forward (page 74, prop. 4.2 of [F]), we conclude that $$s_{total}(C_{Y_1\times Y_2}(X_1\times X_2))=s_{total}(p_2^{\ast}C_{Y_1}X_1) \cap s_{total}(p_1^{\ast}C_{Y_2}X_2).$$ $\Box$ By inserting the above identity into the defining equality of $[Y_1\times Y_2]_{vir}$, we find that $[Y_1\times Y_2]_{vir}=[Y_1]_{vir}\times [Y_2]_{vir}$. Moreover, we may take $X=Y=B$ and $f=id_B:B\mapsto B$, $X'=Y_1$ and $Y'=Y_2$, $x=[Y_1]_{vir}$ and $y=[Y_2]_{vir}$ in definition \ref{defin; fiber}. In this case $\gamma_f=\Delta:B\mapsto B\times B$ and the virtual fundamental class of the co-existence locus $[Y_1\times_B Y_2]_{vir}$ is $$\Delta^{!}([Y_1\times Y_2]_{vir})= \Delta^{!}([Y_1]_{vir}\times [Y_2]_{vir})=[Y_1]_{vir}\cdot_{id_B} [Y_2]_{vir}.$$ $\Box$ Because the refined intersection product $\cdot_f$ is associative, by mathematical induction it is not hard to see that $[\times_B^{i\leq p} Y_i]_{vir}=\cdot_{id_B}^{1\leq i\leq p}[Y_i]_{vir}$. We have the following simple lemma regarding their push-forwards into the global objects $\in{\cal A}_{\cdot}(B)$. \begin{lemm}\label{lemm; cap} Let $p_{Y_i}:Y_i\mapsto B$ denote the proper projection map from $Y_i$ to the smooth base space $B$.
The push-forward image of $[\times_B^{i\leq p}Y_i]_{vir}$ into ${\cal A}_{\cdot}(B)$ (with $\cdot$ being the intersection product on $B$) is the intersection product $p_{Y_1\ast}[Y_1]_{vir}\cdot p_{Y_2\ast} [Y_2]_{vir}\cdot \cdots \cdot p_{Y_p\ast}[Y_p]_{vir} \in {\cal A}_{\cdot}(B)$. \end{lemm} \noindent Proof of lemma \ref{lemm; cap}: Because $B$ is non-singular, the intersection product $\cdot$ makes ${\cal A}^{\cdot}(B)={\cal A}_{dim_{\bf C}B-\cdot}(B)$ a commutative, graded ring with unit $[B]$. On the other hand, there is a commutative diagram, \[ \begin{array}{ccc} \times_B^{i\leq p}Y_i & \longrightarrow & \times_{i\leq p}Y_i\\ \Big\downarrow\vcenter{ \rlap{$\scriptstyle{\mathrm{\times_{i\leq p}p_{Y_i}|_{\times_B^{i\leq p}Y_i}}}$}}& & \Big\downarrow\vcenter{ \rlap{$\scriptstyle{\mathrm{\times_{i\leq p}p_{Y_i}}}$}} \\ B& \stackrel{\Delta}{\longrightarrow} & B^p \end{array} \] By theorem 6.2. (a) on page 98 of [F], $$(\times_{i\leq p}p_{Y_i}|_{\times_B^{i\leq p}Y_i})_{\ast}\Delta^{!}=\Delta^{!} (\times_{i\leq p}p_{Y_i})_{\ast}.$$ Because $(\times_{i\leq p}p_{Y_i})_{\ast}(\times_{i\leq p}[Y_i]_{vir})= \times_{i\leq p}p_{Y_i\ast}[Y_i]_{vir}$, the lemma follows from example 8.1.9. of [F]. $\Box$ \subsection{The Virtual Fundamental Class of ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ and the Extension of Prop. \ref{prop; local}}\label{subsection; virtual} We apply the discussion in subsection \ref{subsubsection; cone} to the concrete situation of algebraic family Kuranishi models of $e_{II; i}$. Let $e_{II; i}, 1\leq i\leq p$ be $p$ distinct type $II$ exceptional classes over ${\cal X}\mapsto B$. As in sub-section \ref{subsection; type2K}, we can take ${\cal V}_{II; i}$ and ${\cal W}_{II; i}$ to be ${\cal R}^0\pi_{\ast}({\cal O}(nD)\otimes {\cal E}_{e_{II; i}})$ and ${\cal R}^0\pi_{\ast}({\cal O}_{nD}(nD)\otimes {\cal E}_{e_{II; i}})$, respectively.
Then $\Phi_{{\bf V}_{II; i}{\bf W}_{II; i}}: {\bf V}_{II; i}\mapsto {\bf W}_{II; i}$ defines the algebraic family Kuranishi model for $e_{II; i}$. We take $X_i={\bf P}_B({\bf V}_{II; i})$ and $Y_i={\cal M}_{e_{II; i}}$. The class $[Y_i]_{vir}$ is then defined to be the localized top Chern class of $\pi_{{\bf P}({\bf V}_{II; i})}^{\ast}{\bf W}_{II; i}\otimes {\bf H}_{II; i}$, constructed in sub-section \ref{subsection; type2K}. By the general discussion in the preceding subsection \ref{subsubsection; cone}, prop. \ref{prop; product}, we make the following definition. \begin{defin}\label{defin; coexist} Define the virtual fundamental class $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$ of ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}= \times_B^{1\leq i\leq p}{\cal M}_{e_{II; i}}$ to be $\cdot_{id_B}^{1\leq i\leq p}[{\cal M}_{e_{II; i}}]_{vir}$. \end{defin} The following proposition is the natural extension of proposition \ref{prop; local} to the $p>1$ case. \begin{prop}\label{prop; moreThanOne} Let ${\cal H}_{II; i}$ be (the restriction of) the hyperplane invertible sheaf of ${\bf P}({\bf V}_{II; i})$ to ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$. Let ${\bf e}_{II; i}\mapsto {\cal M}_{e_{II; i}}$ be the universal curve associated to each $e_{II; i}$ with $febd(e_{II; i}, {\cal X}/B)=p_g$. Let ${\bf W}'$ be the vector bundle associated to the locally free sheaf ${\cal R}^0\pi_{\ast}\bigl({\cal O}_{nD\cap \sum_{i\leq p}{\bf e}_{II; i}}(nD+ \underline{C})\bigr)\otimes_{i\leq p} {\cal H}_{II; i}$ over ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$.
Then the degree $dim_{\bf C}B+ \sum_{1\leq i\leq p}{e_{II; i}^2-c_1({\bf K}_{{\cal X}/B})\cdot e_{II; i}\over 2}$ term of $$\hskip -.5in \{c_{total}(\oplus_{i\leq p} {\bf R}^0\pi_{\ast}\bigl({\cal O}_{nD}(nD)\bigr)\otimes {\cal H}_{II; i}\oplus {\bf W}'|_{{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}})\cap s_{total}({\cal M}_{e_{II; 1}, \cdots, e_{II; p}}, \times_B^{1\leq i\leq p} {\bf P}({\bf V}_{II; i}))\}$$ can be naturally expanded as $$[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}\cap c_{p_g}^p({\cal R}^2\pi_{\ast}{\cal O}_{\cal X})+\tilde{\eta}.$$ The class $\tilde{\eta}$ is a polynomial (in terms of $\cdot_{id_B}$) of the push-forwards of algebraic expressions in terms of $\underline{C}-\sum_{i\in J}e_{II; i}$, $nD$ and the various $e_{II; i}$, where the index subset $J$ runs through the subsets of $\{1, 2, \cdots, p\}$. \end{prop} The proof of this proposition is based on mathematical induction and proposition \ref{prop; local}. \noindent Sketch of the Proof: First we notice that each ${\cal M}_{e_{II; i}}$ is of expected algebraic family dimension $dim_{\bf C}B+p_g+{e_{II; i}^2-c_1({\bf K}_{{\cal X}/B})\cdot e_{II; i}\over 2}$. So ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}=\times_B^{1\leq i\leq p} {\cal M}_{e_{II; i}}$ is of expected dimension $dim_{\bf C}B+p_g\cdot p+\sum_{i\leq p} {e_{II; i}^2-c_1({\bf K}_{{\cal X}/B})\cdot e_{II; i}\over 2}$. So the degrees of the formula in the statement of the proposition match. When $p=1$, the above statement is reduced to proposition \ref{prop; local}. For general $p$, we consider the Cartesian product $\times^{1\leq i\leq p}{\bf P}({\bf V}_{II; i})$ and view the fiber product $\times_B^{1\leq i\leq p}{\bf P}({\bf V}_{II; i})$ as its pull-back by the diagonal morphism $\Delta:B\mapsto B^p$.
Consider the following sheaf short exact sequence, $$\hskip -.5in {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\sum_{i\leq p-1}{\bf e}_{II; i}\cap nD}( \underline{C}-e_{II; p}+nD)\bigr)\otimes^{i\leq p-1} {\cal H}_{II; i} \mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{\sum_{i\leq p}{\bf e}_{II; i}\cap nD}( \underline{C}+nD)\bigr)\otimes^{i\leq p} {\cal H}_{II; i}$$ $$\mapsto {\cal R}^0\pi_{\ast}\bigl({\cal O}_{{\bf e}_{II; p}\cap nD}(\underline{C}+nD)\bigr) \otimes^{i\leq p} {\cal H}_{II; i}.$$ By our induction hypothesis for $p-1$, applied to the class $\underline{C}'= \underline{C}-e_{II; p}$ and the $p-1$ distinct exceptional classes $e_{II; 1}$, $e_{II; 2}$, $\cdots$, $e_{II; p-1}$, and by the $p=1$ case (proposition \ref{prop; local}), applied to $\underline{C}'=\underline{C}$ and $e_{II; p}$, using the computation of lemma \ref{lemm; pro} we can write the localized top Chern class as $$\bigl([{\cal M}_{e_{II; 1}, \cdots, e_{II; p-1}}]_{vir}\cap c_{p_g}^{p-1}({\cal R}^2\pi_{\ast}{\cal O}_{\cal X}) +\tilde{\eta}_1 \bigr)\cdot_{id_B}\bigl([{\cal M}_{e_{II; p}}]_{vir}\cap c_{p_g}({\cal R}^2\pi_{\ast}{\cal O}_{\cal X})+\tilde{\eta}_2\bigr),$$ where $\tilde{\eta}_1$ and $\tilde{\eta}_2$ are $\cdot_{id_B}$-polynomial expressions of the push-forwards of $\underline{C}-e_{II; p}-\sum_{i\in I'}e_{II; i}$, $I'\subset \{1, \cdots, p-1\}$, and of $\underline{C}-e_{II; p}$, $nD$, $e_{II; i}$, etc. By a simple calculation, the conclusion follows. $\Box$ \begin{rem}\label{rem; pg} When $febd(e_{II; i}, {\cal X}/B)=0$, the term $c_{p_g}^p({\cal R}^2\pi_{\ast}{\cal O}_{\cal X})$ drops out of the formula in proposition \ref{prop; moreThanOne}.
\end{rem} \subsection{The Stabilization of The Kuranishi Model of $\underline{C}$} \label{subsection; stable} Let $(\Phi_{{\bf V}_{\underline{C}}{\bf W}_{\underline{C}}}, {\bf V}_{\underline{C}}, {\bf W}_{\underline{C}})$ be the algebraic family Kuranishi model of $\underline{C}$ defined by adopting some $nD$ with $n\gg 0$ and let $(\Phi_{{\bf V}_{II; i}{\bf W}_{II; i}}, {\bf V}_{II; i}, {\bf W}_{II; i})$ be the algebraic family Kuranishi models of $e_{II; i}$, $1\leq i\leq p$, constructed following the recipe of subsection \ref{subsection; type2K}. Because the family moduli spaces ${\cal M}_{e_{II; i}}$ of the type $II$ classes are embedded in ${\bf P}({\bf V}_{II; i})$ and not in $B$, we have to pull back the algebraic family Kuranishi model of $\underline{C}$ from $B$ first. \begin{lemm}\label{lemm; sub} Let ${\bf E}$ be a vector bundle over ${\cal T}_B({\cal X})$. Then ${\cal T}_B({\cal X})$ can be identified with the subset of ${\bf P}({\bf E}\oplus {\bf C})$ through the embedding induced by the bundle injection ${\bf C}\mapsto {\bf E}\oplus {\bf C}$, and it is the zero locus of the canonical section $s$ of $\pi_{{\bf P}({\bf E}\oplus {\bf C})}^{\ast}{\bf E}\otimes {\bf H}$ induced by the projection ${\bf E}\oplus {\bf C}\mapsto {\bf E}$. \end{lemm} \noindent Proof of lemma \ref{lemm; sub}: The image of the cross-section $\sigma: {\cal T}_B({\cal X})\mapsto {\bf P}({\bf E}\oplus {\bf C})$ induced by projectifying ${\bf C}\mapsto {\bf E}\oplus {\bf C}$ is clearly isomorphic to ${\cal T}_B({\cal X})$. On the other hand, the kernel of the bundle projection ${\bf E}\oplus {\bf C}\mapsto {\bf E}$ is exactly the trivial sub-bundle ${\bf C}$. This implies that ${\bf H}^{\ast}\mapsto \pi_{{\bf P}({\bf E}\oplus {\bf C})}^{\ast}{\bf E}$ induced by ${\bf E}\oplus {\bf C}\mapsto {\bf E}$ is injective exactly off $\sigma({\cal T}_B({\cal X}))\subset {\bf P}({\bf E}\oplus {\bf C})$.
So the canonical section $s$ of $\pi_{{\bf P}({\bf E}\oplus {\bf C})}^{\ast}{\bf E}\otimes {\bf H}$ vanishes exactly on $\sigma({\cal T}_B({\cal X}))$, and it is easy to see that $s$ is a regular section of $\pi_{{\bf P}({\bf E}\oplus {\bf C})}^{\ast}{\bf E}\otimes {\bf H}$. $\Box$ \begin{rem}\label{rem; cutdown} The cycle class of the zero locus $\sigma({\cal T}_B({\cal X}))$, $[\sigma({\cal T}_B({\cal X}))]\in {\cal A}_{\cdot}({\bf P}({\bf E}\oplus {\bf C}))$, is equal to $c_{top}(\pi_{{\bf P}({\bf E}\oplus {\bf C})}^{\ast}{\bf E}\otimes {\bf H})\cap [{\bf P}({\bf E}\oplus {\bf C})]$. \end{rem} \begin{defin}\label{defin; Bp} Denote the fundamental cycle class of the zero cross-section $B\mapsto \times_B^p{\cal T}_B({\cal X})$ by $[B]_p$. \end{defin} The normal bundle of this zero cross-section in $\times_B^p{\cal T}_B({\cal X})$ is isomorphic to $({\bf R}^1\pi_{\ast}{\cal O}_{\cal X})^{\oplus p}$. We replace $B$ by the auxiliary space $B'=\times_B^{1\leq i\leq p}{\bf P}({\bf V}_{II; i}\oplus {\bf C})$ and view the original $\times_B^p{\cal T}_B({\cal X}) \subset B'=\times_B^{1\leq i\leq p} {\bf P}({\bf V}_{II; i}\oplus {\bf C})$ as the regular zero locus of the canonical section of the auxiliary obstruction bundle $\oplus_{1\leq i\leq p} \pi_{{\bf P}({\bf V}_{II; i}\oplus {\bf C})}^{\ast}{\bf V}_{II; i}\otimes {\bf H}_{II; i}$. Then by remark \ref{rem; cutdown} and definition \ref{defin; Bp} we have to compensate by inserting both the top Chern class $c_{top}(\oplus_{1\leq i\leq p}\pi_{{\bf P}({\bf V}_{II; i}\oplus {\bf C})}^{\ast} {\bf V}_{II; i}\otimes {\bf H}_{II; i})$ and the class $[B]_p$ into the intersection pairing of the family invariant of $\underline{C}$. Correspondingly, for these $p$ distinct type $II$ classes $e_{II; i}$, we stabilize their algebraic family Kuranishi models by the trivial line bundle ${\bf C}$ and get $(\Phi_{{\bf V}_{II; i}{\bf W}_{II; i}}\oplus id_{\bf C}, {\bf V}_{II; i}\oplus {\bf C}, {\bf W}_{II; i}\oplus {\bf C})$.
By lemma \ref{lemm; stable}, these models are invariant under stabilizations. Then the push-forward image of the virtual fundamental class of ${\cal M}_{e_{II; i}}$ into ${\cal A}_{\cdot}({\bf P}({\bf V}_{II; i}\oplus {\bf C}))$ is equal to $$ c_{top}(\pi_{{\bf P}({\bf V}_{II; i}\oplus {\bf C})}^{\ast}({\bf W}_{II; i} \oplus {\bf C})\otimes {\bf H}_{II; i})\cap [{\bf P}({\bf V}_{II; i}\oplus {\bf C})]$$ $$= c_{top}(\pi_{{\bf P}({\bf V}_{II; i}\oplus {\bf C})}^{\ast}{\bf W}_{II; i}\otimes {\bf H}_{II; i})\cap c_1({\bf H}_{II; i})\cap[{\bf P}({\bf V}_{II; i}\oplus {\bf C})]$$ $$=c_{top}(\pi_{{\bf P}({\bf V}_{II; i}\oplus {\bf C})}^{\ast}{\bf W}_{II; i} \otimes {\bf H}_{II; i})\cap [{\bf P}({\bf V}_{II; i})],$$ \footnote{${\bf P}({\bf V}_{II; i})$ can be viewed as the compactifying divisor at the infinity of ${\bf P}({\bf V}_{II; i}\oplus {\bf C})$.} because $[{\bf P}({\bf V}_{II; i})]=c_1({\bf H}_{II; i})\cap[{\bf P}({\bf V}_{II; i} \oplus {\bf C})]$. We introduce the following short-hand notation which will be used frequently later. \begin{defin}\label{defin; obstruction} Define ${\bf U}_{e_{II; 1}, \cdots, e_{II; p}}$ to be $$\oplus_{1\leq i\leq p}\pi_{\times_B^{1\leq i\leq p} {\bf P}({\bf V}_{II; i}\oplus {\bf C})}^{\ast}({\bf V}_{II; i})\otimes {\bf H}_{II; i}.$$ \end{defin} The family moduli space ${\cal M}_{\underline{C}}$ is embedded in ${\bf P}({\bf V}_{\underline{C}})$ as a projectified abelian cone and it is the zero locus $Z(s_{\underline{C}})$ of a section $s_{\underline{C}}$ of $\pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H}$. The algebraic family Seiberg-Witten invariant of $\underline{C}$ is defined to be the integral of the top intersection pairing of $c_{top}(\pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H})$ capping with a suitable power of $c_1({\bf H})$. 
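Schematically, and suppressing the insertions of cohomology classes from $B$ (see [Liu5] for the precise definition), the invariant ${\cal AFSW}_{{\cal X}\mapsto B}(1, \underline{C})$ is the degree of
$$\hskip -.2in c_{top}(\pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H})\cap c_1({\bf H})^{dim_{\bf C}B+p_g-q+{\underline{C}^2- c_1({\bf K}_{{\cal X}/B})\cdot \underline{C}\over 2}}\cap [{\bf P}({\bf V}_{\underline{C}})],$$
the power of $c_1({\bf H})$ being the expected family dimension of the moduli space of $\underline{C}$.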
We pull back the algebraic family Kuranishi model $(\Phi_{{\bf V}_{\underline{C}}{\bf W}_{\underline{C}}}, {\bf V}_{\underline{C}}, {\bf W}_{\underline{C}})$ from $B$ to $\times_B^{1\leq i\leq p} {\bf P}({\bf V}_{II; i}\oplus {\bf C})$. To simplify our notation, we suppress the pull-back notation and denote the data of the Kuranishi model by the same symbols. By remark \ref{rem; cutdown}, we have to extend the obstruction bundle from $\pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H}$ to $\pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H}\oplus {\bf U}_{e_{II; 1}, \cdots, e_{II; p}}$, or equivalently to $\pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H}\otimes_{i\leq p}{\bf H}_{II; i} \oplus {\bf U}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}$, and then to insert $[B]_p$ into the intersection pairing. \begin{rem}\label{rem; twist} The above twisting of $\pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H}$ by $\otimes_{i\leq p}{\bf H}_{II; i}$ does not affect the family invariant because the embedding $\sigma:{\cal T}_B({\cal X}) \mapsto {\bf P}({\bf V}_{II; i}\oplus {\bf C})$ defined in remark \ref{rem; cutdown} is totally disjoint from the smooth divisor at infinity $\cong {\bf P}({\bf V}_{II; i})$ and the line bundle ${\bf H}_{II; i}$ is trivialized over $\sigma({\cal T}_B({\cal X}))$. \end{rem} On the other hand, the moduli space of co-existence of $e_{II; 1}, \cdots, e_{II; p}$, ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$, is a closed sub-scheme of the auxiliary base space $B'=\times_B^{1\leq i\leq p}{\bf P}({\bf V}_{II; i}\oplus {\bf C})$. So ${\cal M}_{\underline{C}} \times_{B'} {\cal M}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}$ is also a closed sub-scheme of the projective bundle $X={\bf P}({\bf V}_{\underline{C}})$.
By section II.7, page 160-161 of [Ha], we may blow up $X={\bf P}_{B'}({\bf V}_{\underline{C}})$ along $Z(s_{\underline{C}}) \times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}={\cal M}_{\underline{C}} \times_{B'}{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ and make it into a divisor ${\bf D}$ of the blown-up scheme $\tilde{X}$. A direct application of the residual intersection formula implies that we may rewrite $$ c_{top}(\pi_{\tilde{X}}^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H} \otimes_{j\leq p} {\bf H}_{II; j})= c_{top}(\pi_{\tilde{X}}^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H} \otimes_{j\leq p} {\bf H}_{II; j}\otimes {\cal O}(-{\bf D}))$$ $$+\sum_{1\leq i\leq rank_{\bf C}{\bf W}_{\underline{C}}}(-1)^{i-1} c_{rank_{\bf C} {\bf W}_{\underline{C}}-i}( \pi_{\tilde{X}}^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H}\otimes_{j\leq p}{\bf H}_{II; j})\cap {\bf D}^{i-1}[{\bf D}].$$ The push-forward of $$\hskip -.2in \sum_{1\leq i\leq rank_{\bf C}{\bf W}_{\underline{C}}}(-1)^{i-1} c_{rank_{\bf C} {\bf W}_{\underline{C}}-i}(\pi_{\tilde{X}}^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H}\otimes_{j\leq p}{\bf H}_{II; j}\oplus {\bf U}_{e_{II; 1}, \cdots, e_{II; p}})$$ $$\cap {\bf D}^{i-1}[{\bf D}]\cap [B]_p\cap c_1({\bf H})^{dim_{\bf C}B+p_g-q+{\underline{C}^2- c_1({\bf K}_{{\cal X}/B})\cdot \underline{C}\over 2}}$$ is the localized contribution of the algebraic family Seiberg-Witten invariant along ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$. It is vital to understand: \noindent {\bf Question}: Is the localized contribution of the top Chern class the correct ``invariant'' associated to the collection of type $II$ classes $e_{II; 1}$, $\cdots$, $e_{II; p}$? In other words, is the localized contribution of top Chern classes constructed above invariant under deformations of the family ${\cal X}\mapsto B$ or of the Kuranishi models?
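As a consistency check of the residual intersection expansion above, one may verify it directly in low rank. Writing ${\bf F}=\pi_{\tilde{X}}^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H}\otimes_{j\leq p}{\bf H}_{II; j}$, for $rank_{\bf C}{\bf W}_{\underline{C}}=2$ the expansion reads
$$c_2({\bf F})=c_2({\bf F}\otimes {\cal O}(-{\bf D}))+c_1({\bf F})\cap [{\bf D}]-{\bf D}\cap [{\bf D}],$$
which follows from the standard twisting formula $c_2({\bf F}\otimes {\cal O}(-{\bf D}))=c_2({\bf F})-c_1({\bf F})\cap {\bf D}+{\bf D}^2$; the general rank case is checked in the same way.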
When the exceptional classes are of type $I$, it has been shown in [Liu5], [Liu6] that the localized contribution of the top Chern class can be identified with a certain mixed family invariant of $C-{\bf M}(E)E-\sum_{e_i\cdot (C-{\bf M}(E)E)<0}e_i$ and is consequently known to be topological. On the other hand, the theory for type $II$ curves is more delicate than its type $I$ counterpart, as a naively chosen localized contribution may not always be an invariant. To understand what may go wrong, one may consider deformations of the datum $\Phi_{{\bf V}_{II}{\bf W}_{II}}: {\bf V}_{II}\mapsto {\bf W}_{II}$. In order for it to be the correct choice, the localized contribution of the top Chern class has to be independent of these deformations. Consider the idealistic situation that under a one-parameter family of deformations, the family moduli space ${\cal M}_{e_{II}}$ of a single $e_{II}$ (i.e. $p=1$) is deformed into the whole space ${\bf P}({\bf V}_{e_{II}}\oplus {\bf C})$. After such a degeneration\footnote{Geometrically this corresponds to thickening the family moduli space of $e_{II}$ to the whole space, which can be achieved by multiplying the map $\Phi_{{\bf V}_{e_{II}}{\bf W}_{e_{II}}}$ by a constant $t$ and shrinking $t$ to zero.}, the family moduli space of $e_{II}$ is apparently not of the expected dimension. It is easy to see that the localized contribution of the top Chern class along such a degenerated family moduli space is nothing but the whole localized top Chern class of ${\bf H}\otimes \pi_X^{\ast}{\bf W}_{\underline{C}}$ along ${\cal M}_{\underline{C}}$. On the other hand, for the original well-behaved ${\cal M}_{e_{II}}$ the localized contribution of the top Chern class is usually not equal to the whole localized top Chern class of ${\bf H}\otimes \pi_X^{\ast}{\bf W}_{\underline{C}}$ along ${\cal M}_{\underline{C}}$.
Therefore we observe in this hypothetical example that the localized contribution of top Chern classes may {\bf not} be invariant under degenerations. We also realize from this example that the non-invariance of the localized contribution of top Chern classes is due to the non-invariance of the family moduli space ${\cal M}_{e_{II}}$! \label{special} \noindent {\bf Special Assumption}: In the following, we assume that the morphism ${\cal M}_{\underline{C}-\sum e_{II; i}}\times_B{\cal M}_{e_{II; 1}, \cdots , e_{II; p}}\hookrightarrow {\cal M}_{\underline{C}} \times_B{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$, induced by adjoining the union of type $II$ curves in $\sum_{i\leq p} e_{II; i}$ to a curve in $\underline{C}-\sum_{i\leq p} e_{II; i}$, is an isomorphism. This assumption is the analogue of the simplifying assumption of theorem 4 in [Liu5]. The following is the main theorem of this paper. \begin{theo}\label{theo; degenerate} Given the localized contribution of the top Chern class $$\hskip -1in \sum_{1\leq i\leq rank_{\bf C}{\bf W}_{\underline{C}}}(-1)^{i-1} c_{rank_{\bf C} {\bf W}_{\underline{C}}-i}(\pi_{\tilde{X}}^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H}\otimes_{j\leq p}{\bf H}_{II; j}\oplus {\bf U}_{e_{II; 1}, \cdots, e_{II; p}})$$ $$\cap {\bf D}^{i-1}[{\bf D}]\cap [B]_p\cap c_1({\bf H})^{dim_{\bf C}B+p_g-q+{\underline{C}^2- c_1({\bf K}_{{\cal X}/B})\cdot \underline{C}\over 2}},$$ then under the above {\bf special assumption}, it can be expanded into an algebraic expression of cycle classes.
Among the various terms of the expansion, there is a dominating term proportional to the virtual fundamental class $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$, which is identified as $${\cal AFSW}_{{\cal X}\mapsto B}((\times_B^{i\leq p} \pi_i)_{\ast}\bigl([{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir} \cap [B]_p\bigr)\cap c_{p_g}^p({\cal R}^2\pi_{\ast}{\cal O}_{\cal X}) \cap \tau, \underline{C}-\sum_{1\leq i\leq p}e_{II; i}).$$ It satisfies the following crucial invariance properties: (i). The class $\tau\in {\cal A}_{\cdot}({\cal M}_{e_{II; 1}, \cdots, e_{II; p}})$ is independent of $nD$ and depends only on $\underline{C}$ and $e_{II; 1}$, $\cdots$, $e_{II; p}$. (ii). When $febd(e_{II; i}, {\cal X}/B)=p_g$, $1\leq i\leq p$, and ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ is smooth of its expected dimension $dim_{\bf C}B+p\cdot p_g+\sum {e_{II; i}^2- c_1({\bf K}_{\cal X}/B)\cdot e_{II; i}\over 2}$, the smooth cycle $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]$ coincides with the virtual fundamental class $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$ and the above mixed invariant can be re-expressed as $${\cal AFSW}_{{\cal X}\times_B{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}\mapsto {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}}([B]_p\cap \tau, \underline{C}-\sum_{1\leq i\leq p}e_{II; i}).$$ (iii). The above expression satisfies guidelines 1-3 listed beginning on page \pageref{guide}.
\end{theo} \noindent Proof of theorem \ref{theo; degenerate}: After we push forward along ${\bf D}\mapsto {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$, the localized contribution of top Chern classes of $\pi_X^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H}\otimes_{i\leq p} {\bf H}_{II; i}$ along ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ is equal to $\{c_{total}(\pi_X^{\ast}{\bf W}_{\underline{C}}\otimes {\bf H} \otimes_{i\leq p} {\bf H}_{II; i}\oplus {\bf U}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}) \cap s_{total}({\cal M}_{\underline{C}}\times_B {\cal M}_{e_{II; 1}, \cdots , e_{II; p}}, {\bf P}({\bf V}_{\underline{C}})\times_B B')\cap [B]_p\}_{dim_{\bf C}B+p_g-q+{\underline{C}^2-\underline{C}\cdot c_1({\bf K}_{{\cal X}/B})\over 2}}$. Assuming that ${\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}\times_B {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}={\cal M}_{\underline{C}}\times_B {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$, the Segre class of the normal cone of ${\cal M}_{\underline{C}}\times_B{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ is the same as the Segre class of ${\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}} \times_B{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}\subset {\bf P}({\bf V}_{\underline{C}})$. \noindent Step I: Recall the following short exact sequence\footnote{Check the statement of proposition \ref{prop; moreThanOne} for the definition of ${\bf W}'$.} in subsection \ref{subsection; virtual}, $$0\mapsto {\bf W}_{\underline{C}-\sum_{i\leq p}e_{II; i}}\mapsto {\bf W}_{\underline{C}}\otimes_{i\leq p}{\bf H}_{II; i}\mapsto {\bf W}'\mapsto 0.$$ By applying lemma \ref{lemm; pro} and the discussion in subsection \ref{subsubsection; cone}, we can view the above localized contribution of the top Chern class as the Gysin pull-back $\Delta^{!}$ (with $\Delta:B\mapsto B\times B$) from ${\cal A}_{\cdot}( {\cal M}_{\underline{C}-\sum e_{II; i}} \times {\cal M}_{e_{II; 1}, \cdots, e_{II; p}})$.
We may rewrite the original localized contribution of the top Chern class as the $\Delta^{!}$ pull-back of $$\{c_{total}(\pi_X^{\ast}{\bf W}_{\underline{C}-\sum e_{II; i}}\otimes {\bf H}) \cap s_{total}({\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}, {\bf P}({\bf V}_{\underline{C}-\sum e_{II; i}}))$$ $$\times c_{total}( {\bf W}'\otimes {\bf H})\cap s_{total}({\cal M}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}, B')\cap c_{total}({\bf U}_{e_{II; 1}, \cdots, e_{II; p}})$$ $$\cap s_{total}({\bf V}'\otimes {\bf H})\cap [B]_p \}_{2dim_{\bf C}B+p_g-q+{\underline{C}^2- c_1({\bf K}_{{\cal X}/B})\cdot \underline{C}\over 2}}.$$ We have used the short exact sequence\footnote{Check the commutative diagram in proposition \ref{prop; compare} for the $p=1$ version.} $0\mapsto {\bf V}_{\underline{C}-\sum e_{II; i}}\mapsto {\bf V}\otimes_{i\leq p} {\bf H}_{II; i}\mapsto {\bf V}'\mapsto 0$ over ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ and the following identity $$ s_{total}({\cal M}_{\underline{C}-\sum e_{II; i}}\times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}, {\bf P}({\bf V}_{\underline{C}}))$$ $$=s_{total}({\cal M}_{\underline{C}-\sum e_{II; i}}\times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}, {\bf P}({\bf V}_{\underline{C} -\sum e_{II; i}}))\cap s_{total}({\bf V}'\otimes {\bf H}).$$ Based on the following identity on the family dimensions, $$\hskip -.3in dim_{\bf C}B-q+p_g+{(\underline{C}-\sum_{i\leq p} e_{II; i})^2-c_1({\bf K}_{{\cal X}/B}) \cdot (\underline{C}-\sum_{i\leq p} e_{II; i})\over 2}+\sum_{i\leq p} { e_{II; i}^2-c_1({\bf K}_{{\cal X}/B}) \cdot e_{II; i}\over 2}$$ $$=dim_{\bf C}B-q+p_g+{\underline{C}^2- c_1({\bf K}_{{\cal X}/B}) \cdot \underline{C}\over 2}+\{\sum_{i\leq p} (-\underline{C}\cdot e_{II; i}+e_{II; i}^2)+ \sum_{1\leq i<j\leq p}e_{II; i}\cdot e_{II; j} \},$$ we may set: \noindent $a_1=dim_{\bf C}B-q+p_g+{(\underline{C}-\sum e_{II; i})^2- c_1({\bf K}_{{\cal X}/B}) \cdot (\underline{C}-\sum e_{II; i})\over 2}$, $a_2=dim_{\bf C}B+\sum {e_{II; i}^2-c_1({\bf
K}_{{\cal X}/B}) \cdot e_{II; i}\over 2}$ and $a_3=\sum(-\underline{C}\cdot e_{II; i}+e_{II; i}^2)+ \sum_{1\leq i<j\leq p}e_{II; i}\cdot e_{II; j}$ and focus on the term \footnote{Assume that $a_3$ satisfies $a_3-p\cdot q(M)\geq 0$. Also notice that we have the identity $a_1+a_2-a_3=2dim_{\bf C}B-q(M)+p_g +{\underline{C}^2-c_1({\bf K}_{{\cal X}/B})\cdot \underline{C}\over 2}$.} $$g_{a_1, a_2, a_3}= \{c_{total}(\pi_X^{\ast}{\bf W}_{\underline{C}-\sum e_{II; i}}\otimes {\bf H}) \cap s_{total}({\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}, {\bf P}({\bf V}_{\underline{C}-\sum e_{II; i}}))\}_{a_1}$$ $$\times \{c_{total}(\oplus_{i\leq p}{\bf R}^0\pi_{\ast}\bigl({ \cal O}_{nD}(nD)\bigr)\otimes {\bf H}_{II; i}\oplus {\bf W}'\otimes {\bf H})\cap s_{total}({\cal M}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}, B')\cap_{i\leq p}c_1({\bf H}_{II; i}) \}_{a_2}$$ $$\cap \{c_{a_3-p\cdot q(M)}({\bf U}_{e_{II; 1}, \cdots, e_{II; p}}-\oplus_{i\leq p}({\bf C}\oplus {\bf R}^0\pi_{\ast}\bigl({ \cal O}_{nD}(nD)\bigr))\otimes {\bf H}_{II; i}-{\bf V}'\otimes {\bf H}) \cap [B]_p\}.$$ The reason to pick this particular combination will be clear momentarily. \noindent (i). By using $\cap_{i\leq p} c_1({\bf H}_{II; i})\cap [B']=[\times_B^{i\leq p}{\bf P}({\bf V}_{II; i})]$, the expression $\{c_{total}(\pi_X^{\ast}{\bf W}_{\underline{C}-\sum e_{II; i}}\otimes {\bf H}) \cap s_{total}({\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}, {\bf P}({\bf V}_{\underline{C}-\sum e_{II; i}}))\}_{a_1}$ is nothing but the virtual fundamental class of ${\cal M}_{\underline{C}-\sum e_{II; i}}$, expressed as the localized top Chern class of $\pi_X^{\ast}{\bf W}_{\underline{C}-\sum e_{II; i}}\otimes {\bf H}$. \noindent (ii). By applying proposition \ref{prop; moreThanOne} to \footnote{Notice that we have added $\oplus_{i\leq p} {\bf C}\otimes {\bf H}_{II; i}$ to our bundle here. 
It is to match the additional ${\bf H}_{II; i}$ factor in the stabilized obstruction bundle of $e_{II; i}$, $(\pi_{{\bf P}({\bf V}_{II; i}\oplus {\bf C})}^{\ast} {\bf W}_{II; i}\oplus {\bf C})\otimes {\bf H}_{II; i}$.} $\{c_{total}((\oplus_{i\leq p}{\bf R}^0\pi_{\ast}\bigl({ \cal O}_{nD}(nD)\bigr)\oplus {\bf C})\otimes {\bf H}_{II; i}\oplus {\bf W}'\otimes {\bf H})\cap s_{total}({\cal M}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}, B')\}_{a_2}$, it can be expanded into an algebraic expression of cycle classes whose leading term is $[{\cal M}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}]_{vir}\cap c_{p_g}({\cal R}^2\pi_{\ast}{\cal O}_{\cal X})^p$. \noindent Step II: Because both $[{\cal M}_{\underline{C}-\sum e_{II; i}}]_{vir}$ and $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$ are well defined and independent of $nD$, for the whole expression to be $nD$ independent we evaluate the $nD$ independent term of $$\{c_{a_3-p\cdot q(M)}({\bf U}_{e_{II; 1}, \cdots, e_{II; p}}-\oplus_{i\leq p}({\bf R}^0\pi_{\ast}\bigl({ \cal O}_{nD}(nD)\bigr)\oplus {\bf C}) \otimes {\bf H}_{II; i}-{\bf V}'\otimes {\bf H}) \cap [B]_p\}.$$ Consider the virtual bundle $\omega={\bf U}_{e_{II; 1}, e_{II; 2}, \cdots, e_{II; p}}-{\bf V}'\otimes {\bf H}- \oplus_{i\leq p}({\bf C}\oplus{\bf R}^0\pi_{\ast}\bigl({ \cal O}_{nD}(nD)\bigr))\otimes {\bf H}_{II; i}$. The virtual bundle $\omega$ is of virtual rank $\sum_{i\leq p}\chi({\cal O}_{{\cal X}_b}(e_{II; i}+nD))- \chi({\cal O}_{\sum {\bf e}_{II; i}|_b}(\underline{C}+nD))- p(\chi({\cal O}_{{\cal X}_b}(nD))+q)$ (${\cal X}_b$ is the fiber over a closed point $b\in B$) and by the surface Riemann-Roch formula it is equal to $$rank(\omega)=-q(M)\cdot p+\sum_{i\leq p}{(e_{II; i})^2}-\underline{C} \cdot (\sum_{i\leq p}e_{II; i})+\sum_{i<j\leq p}e_{II; i}\cdot e_{II; j}=a_3-p\cdot q(M).$$ In particular, the expression $rank(\omega)=a_3-(dim_{\bf C} \times_B^p {\cal T}_B{\cal X}-dim_{\bf C}B)$ is $nD$ independent!
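As a consistency check (a routine verification, included only for the reader's convenience), the $nD$-independence of $rank(\omega)$ can be seen directly from the surface Riemann-Roch formula. Write $E=\sum_{i\leq p}e_{II; i}$ and $K=c_1({\bf K}_{{\cal X}/B})$. Then $$\sum_{i\leq p}\chi({\cal O}_{{\cal X}_b}(e_{II; i}+nD))=p\cdot \chi({\cal O}_{{\cal X}_b})+\sum_{i\leq p}{(e_{II; i}+nD)^2-(e_{II; i}+nD)\cdot K\over 2},$$ while $$\chi({\cal O}_{E|_b}(\underline{C}+nD))=\chi({\cal O}_{{\cal X}_b}(\underline{C}+nD))-\chi({\cal O}_{{\cal X}_b}(\underline{C}+nD-E))=(\underline{C}+nD)\cdot E-{E^2+E\cdot K\over 2}.$$ After subtracting $p(\chi({\cal O}_{{\cal X}_b}(nD))+q)$, the quadratic terms $n^2D^2$ and the cross terms $nD\cdot K$ and $nD\cdot E$ all cancel, leaving $$rank(\omega)=-p\cdot q(M)+\sum_{i\leq p}e_{II; i}^2-\underline{C}\cdot E+\sum_{i<j\leq p}e_{II; i}\cdot e_{II; j},$$ in agreement with the formula above.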
\begin{defin}\label{defin; noD} Consider the $nD$ independent term of $c_{rank(\omega)}(\omega)$ and expand it as a polynomial in $c_1({\bf H})$, $\sum_{r}\tau_r c_1({\bf H})^r$. Define the $\tau$ class to be the sum of the $\tau_r$, $\tau=\sum_{r\leq rank(\omega)} \tau_r$. \end{defin} If we view $c_1({\bf H})$ as a formal variable $z$, then $\tau$ can be viewed as $c_{rank(\omega)}(\omega)|_{z=1}^{n=0}$. Recall (e.g. chapter 15, page 281 of [F]) that for a proper morphism $f$ and a coherent sheaf ${\cal F}$, $f_{\ast}{\cal F}=\sum_{i\geq 0}(-1)^i{\cal R}^if_{\ast}{\cal F}$ in K-theory. The following lemma identifies the $nD$ independent term of $c_{rank(\omega)}(\omega)$. \begin{lemm}\label{lemm; findout} The $nD$ independent term of $c_{rank(\omega)}(\omega)$ is equal to $c_{rank(\omega)}(\oplus_{i\leq p}\pi_{\ast}{\cal O}({\bf e}_{II; i})- \pi_{\ast}{\cal O}_{\sum_{i\leq p}{\bf e}_{II; i}}(\underline{C})\otimes {\bf H})$. \end{lemm} \noindent Proof of lemma \ref{lemm; findout}: When $D$ is very ample and $n\gg 0$, Serre vanishing implies ${\cal R}^0\pi_{\ast}{\cal O}_{\sum {\bf e}_{II; i}}(\underline{C}+nD) =\pi_{\ast}{\cal O}_{\sum {\bf e}_{II; i}}(\underline{C}+nD)$, etc. This enables us to re-express $\omega$ as a difference of direct images. Finally we set $n=0$ in the alternative expression of $\omega$.
$\Box$ Now we may express the $nD$ independent leading term of $g_{a_1, a_2, a_3}$ as $$[{\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}]_{vir}\times \{[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir} \cap c_{p_g}^p({\cal R}^2\pi_{\ast}{\cal O}_{{\cal X}})\}\cap (\sum_r\tau_r c_1({\bf H})^r).$$ So the dominating term of the original localized top Chern class becomes $$\sum_{r\leq rank(\omega)}\Delta^{!} \{[{\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}]_{vir}\cap c_1({\bf H})^r\times [{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}\cap c_{p_g}^p({\bf R}^2\pi_{\ast} {\cal O}_{\cal X})\cap [B]_p\cap \tau_r\},$$ which is nothing but $$\sum_r\{[{\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}]_{vir}\cap c_1({\bf H})^r\}\cdot_{id_B} \{[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}\cap c_{p_g}^p ({\bf R}^2\pi_{\ast} {\cal O}_{\cal X})\cap [B]_p\cap \tau_r\}.$$ After we cap with $c_1({\bf H})^{dim_{\bf C}B-q+p_g+{\underline{C}^2- c_1({\bf K}_{{\cal X}/B})\cdot \underline{C}\over 2}}$ and push forward to a point $pt$, the top intersection pairing reduces to ${\cal AFSW}_{{\cal X}\mapsto B}((\times_B^{i\leq p}\pi_i)_{\ast}\{ [{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}\cap [B]_p\cap \tau\} \cap c_{p_g}^p({\bf R}^2\pi_{\ast} {\cal O}_{\cal X}), \underline{C}-\sum_{i\leq p}e_{II; i})$. By our computation, it clearly obeys guideline 1 on page \pageref{guide}. To show that it is compatible with the type $I$ theory in [Liu5], [Liu6], we notice that $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$ is reduced to $\cap_{i\leq p}[Y(\Gamma_{e_{k_i}})]=[Y(\Gamma)]$ when the type $II$ classes are replaced by $e_{k_i}$, $1\leq i\leq p$. On the other hand, viewing $c_1({\bf H})$ as a formal variable $z$ as before, we have formally $\tau=c_{rank(\omega)}(\omega)|_{z=1}^{n=0}$.
When the type $II$ classes are reduced to the type $I$ classes $e_{k_1}, e_{k_2}, \cdots, e_{k_p}$, the argument of theorem 4 of [Liu5] or proposition 18 of [Liu6] implies that the effect of $c_{rank(\omega)}(\omega)\cap [B]_p|_{n=0}$ upon the fundamental class $[Y(\Gamma)]$ is equal to $c_{rank(\omega)+p\cdot q(M)}({\cal R}^1 \pi_{\ast}{\cal O}_{\sum {\bf e}_{k_i}}( \underline{C})\otimes {\bf H}-\oplus_i {\cal R}^1\pi_{\ast} {\cal O}_{{\bf e}_{k_i}}({\bf e}_{k_i}))$, which is equivalent to the top Chern class of an explicit vector bundle representative \footnote{By lemma 17 of [Liu6].} of $\tau_{\Gamma}\otimes {\bf H}$. So $\tau= c_{top}(\tau_{\Gamma}\otimes {\bf H})|_{z=1}^{n=0}=c_{total}(\tau_{\Gamma})$. The only defect of the degenerated version of the type $II$ theory, compared with the original type $I$ theory, is the factor $\cap c_{p_g}^p({\cal R}^2\pi_{\ast}{\cal O}_{\cal X})$. This defect is rooted in the discrepancy between their dimension formulae and should be discarded when the classes are of type $I$. With this defect removed, the type $II$ contribution for $\underline{C}-\sum e_{II; i}$ is reduced to ${\cal AFSW}_{M_{n+1}\times Y(\Gamma)\mapsto M_n\times Y(\Gamma)}(1, C-{\bf M}(E)E-\sum e_{k_i})$ when we take $\underline{C}=C-{\bf M}(E)E$. The object we have identified is independent of $nD$ because of the $nD$-independence of $[{\cal M}_{\underline{C}-\sum e_{II; i}}]_{vir}$, $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$ and $\tau$. Thus our construction obeys guidelines 1-3 starting from page \pageref{guide}, and the theorem is proved. $\Box$ \begin{rem}\label{rem; =0} When $febd(e_{II; i}, {\cal X}/B)=0$ for all $e_{II; i}$, the factor $c_{p_g}^p({\cal R}^2\pi_{\ast}{\cal O}_{\cal X})$ disappears from the mixed invariant.
The mixed invariant identified in the theorem should then be replaced by $${\cal AFSW}_{{\cal X}\mapsto B}((\times_B^{i\leq p} \pi_i)_{\ast}\bigl([{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir} \cap [B]_p\bigr)\cap \tau, \underline{C}-\sum_{1\leq i\leq p}e_{II; i}).$$ \end{rem} \begin{rem}\label{rem; tau} From the discussion at the end of the proof of the main theorem, the $\tau$ class defined above is the type $II$ analogue of the class $c_{total}(\tau_{\Gamma})$ defined in the type $I$ theory. Its role is to balance the rank difference between the family obstruction bundles of $\underline{C}$ and of $\underline{C}-\sum_{i\leq p}e_{II; i}$. The main difference from the type $I$ theory is that $\tau_{\Gamma}$ can be represented by an actual vector bundle, so that $c_{top+l}(\tau_{\Gamma})=0$ for all $l>0$, while $\omega$ is only a virtual vector bundle of virtual rank $-p\cdot q(M) -\sum_{i\leq p} \underline{C}\cdot e_{II; i}+\sum_{i\leq p} e_{II; i}^2+\sum_{i<j}e_{II; i}\cdot e_{II; j}$. As the type $II$ universal curves ${\bf e}_{II; i}$ may behave badly (in the sense described on page \pageref{bizzare}), we do not expect to find an explicit bundle representative of $\omega$ in general. \end{rem} \subsection{A Remark on the Inductive Scheme of Applying the Residual Intersection Formula}\label{subsection; inductive} At the end of the whole paper, we sketch the extension of the theory to an inductive scheme over a whole hierarchy of collections of type $II$ curves and explain how it fits with guideline 4 on page \pageref{guide4}. As the discussion is parallel to the type $I$ theory in [Liu6], we do not go into full details here; the reader interested in the detailed arguments can consult [Liu6] and, by combining them with the main theorem of the current paper, translate the argument to cover the type $II$ case. There are a few reasons why the theory of type $II$ curves has to be extended beyond a single collection of type $II$ classes. \noindent (1).
Exactly parallel to the type $I$ classes, type $II$ curves can break, or degenerate into a union of irreducible curves, some of which are again type $II$ curves. The degenerated configurations give additional excess contributions. \noindent (2). The main theorem of this paper has been proved under the {\bf special assumption} that $${\cal M}_{\underline{C}-\sum e_{II; i}}\times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}\hookrightarrow {\cal M}_{\underline{C}}\times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$$ is an isomorphism. In section \ref{subsection; stable} this has been interpreted equivalently as the injectivity of the bundle map $\pi_{ {\bf P}({\bf V}_{\underline{C}})}^{\ast}{\bf W}_{new}\otimes {\bf H}\mapsto \pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H}$ over ${\cal M}_{\underline{C}-\sum e_{II; i}}\times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$. In general, the breaking up of the type $II$ classes into irreducible components may cause the inclusion to fail to be an isomorphism. In other words, a curve in $\underline{C}$ above the locus of co-existence $\cap_{i\leq p}\pi_i{\cal M}_{e_{II; i}}$ may not factorize into a curve in $\underline{C}-\sum e_{II; i}$ and curves in the classes $e_{II; i}$, $i\leq p$, when some curve in $e_{II; i}$ fails to be irreducible. Both the type $I$ and the type $II$ theory make use of the residual intersection theory of top Chern classes. The major difference between the type $II$ theory and its type $I$ counterpart is that the localized contributions of top Chern classes of type $I$ classes are identifiable as mixed family invariants, while the localized contributions of top Chern classes of type $II$ exceptional classes are typically non-topological.
One major issue we have pointed out is that ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ may not always be regular, and the excess invariant contribution within ${\cal M}_{\underline{C}}\times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ may depend on the explicit locus ${\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ rather than on the virtual fundamental class $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$. Our main theorem on page \pageref{theo; degenerate} demonstrates that despite the non-topological nature of the localized contribution of the top Chern class, it can still be expanded algebraically and the dominating term is of the desired form: $${\cal AFSW}_{{\cal X}\mapsto B}((\times_B^{i\leq p}\pi_i)_{\ast}\bigl( [{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}\bigr)\cap c_{p_g}^p({\cal R}^2\pi_{\ast}{\cal O}_{\cal X})\cap [B]_p\cap \tau, \underline{C}-\sum_{i\leq p} e_{II; i}).$$ This corresponds to the invariant contribution proportional to $[{\cal M}_{\underline{C}-\sum_{i\leq p}e_{II; i}}]_{vir}\cdot_{id_B} [{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$. In this way, the non-topological nature of the localized contribution of top Chern classes arises from the other, non-dominating terms from ${\cal M}_{\underline{C}}\times_{B'} {\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ away from an explicit cycle representative of $[{\cal M}_{\underline{C}}]_{vir}\cdot_{id_B} [{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$. Unless we impose additional assumptions, we do not expect any vanishing result on these correction terms. By re-grouping the correction terms with the residual contribution of the top Chern class $$\int_{\tilde{X}}c_{top}( \pi_{\tilde{X}}^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H}\otimes {\cal O}(-{\bf D})\oplus {\bf U}_{e_{II; 1}, \cdots, e_{II; p}})\cap [B]_p\cap c_1({\bf H})^{dim_{\bf C}B-q+p_g+{\underline{C}^2-c_1({\bf K}_{{\cal X}/B})\cdot \underline{C}\over 2}},$$ their total sum is still a topological invariant!
This interpretation allows us to formulate our scheme inductively. \noindent (i). List all the possible finite collections of type $I$ and type $II$ classes satisfying: $\underline{C}\cdot e_{k_i}<0$, $e_{k_i}\cdot e_{k_j}\geq 0$, $i\not=j$, $\underline{C}\cdot e_{II; i}<0$, $e_{II; i}\cdot e_{II; j}\geq 0$, $i\not=j$. These exceptional classes determine exceptional cones in the sense of [Liu4]. \noindent (ii). Define a partial ordering among all such collections based on the inclusions of exceptional cones. The partial ordering encodes whether one particular collection of exceptional classes degenerates into another. In the type $I$ theory, such a partial ordering was denoted $\succ$; see definition 8 of [Liu6] for details. \noindent (iii). Based on the partial ordering, define a linear ordering among the various collections of exceptional classes. This is the analogue of $\models$ defined in [Liu6]. \noindent (iv). Blow up ${\bf P}({\bf V}_{\underline{C}})$ along the various sub-loci ${\cal M}_{\underline{C}}\times_{B'} {\cal M}_{e_{k_1}, \cdots, e_{k_p}; e_{II; 1}, \cdots, e_{II; q}}$ in the reversed linear ordering constructed in (iii). Each time we have to stabilize the family Kuranishi model of $\underline{C}$ following the recipe on page \pageref{subsection; stable}. \noindent (v). For each localized contribution of the top Chern class along \noindent ${\cal M}_{\underline{C}}\times_{B'} {\cal M}_{e_{k_1}, \cdots, e_{k_p}; e_{II; 1}, \cdots, e_{II; q}}$, we need to go through the computation in theorem \ref{theo; degenerate} and identify its dominating term with the appropriate mixed family invariant. However, there are two subtle variations here. (v'). The first variation from our main theorem is that under an inductive blowing up procedure, the obstruction bundle $\pi_X^{\ast} {\bf W}_{\underline{C}}\otimes {\bf H}$ is modified repeatedly. As in the type $I$ case, we use proposition 9 of [Liu6] to deal with this issue.
The cited proposition demonstrates that the blow-ups performed ahead of the given one (under the reversed linear ordering in (iii)) change the obstruction bundle in a way which enables us to relate the modified bundle to the bundle $ \pi_{{\bf P}({\bf V}_{\underline{C}})}^{\ast} {\bf W}_{\underline{C}-\sum e_{II; i}}\otimes {\bf H}$, exactly allowing us to drop the {\bf special assumption} on page \pageref{special}. A special partial ordering parallel to $\sqsupset$ in definition 15 of [Liu6] has to be used to analyze the discrepancy between ${\cal M}_{\underline{C}-\sum e_{II; i}}\times_B{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$ and ${\cal M}_{\underline{C}}\times_B{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}$. (v''). To avoid over-counting, the inductive blowup process (compare with section 5.1 of [Liu6]) incorporates the inclusion-exclusion principle (see pages 19-20 of [Liu7] for an elementary explanation), and we identify the dominating terms of the localized contributions as ``modified'' family invariants. For type $I$ classes, the corresponding modified family invariants have been defined inductively based on a partial ordering $\gg$ (definition 11 of [Liu6]); consult definitions 13 and 14 of the same paper for the definitions of the type $I$ modified family invariants. For combinations of type $I$ and type $II$ classes, we can generalize $\gg$ and extend the definition of the modified family invariant accordingly. At the end of the inductive procedure, we end up with a modified family invariant ${\cal AFSW}_{{\cal X}\mapsto B}^{\ast}(1, \underline{C})$. It is obtained by subtracting from the original family invariant ${\cal AFSW}_{{\cal X}\mapsto B}(1, \underline{C})$ all the modified family invariants attached to the various collections of type $I/II$ exceptional curves. This modified family invariant plays the role of the virtual number count of smooth curves in $\underline{C}$ within the given family ${\cal X}\mapsto B$.
The above procedure is parallel to the theory of type $I$ exceptional curves. If we apply the above scheme to the type $I$ and type $II$ exceptional classes of the universal families, we can derive the following result. \begin{theo} Let $M$ be an algebraic surface and $L\mapsto M$ a line bundle. For $\delta\leq {c_1^2(L)+c_1({\bf K}_M)\cdot c_1(L)\over 2}+1$, the ``virtual number of $\delta$-node nodal curves'' in a generic $\delta$ dimensional sub-linear system of $|L|$ is well defined. \end{theo} Notice that $\delta!\times$ the above virtual number is interpreted as the equivalence of smooth curves within the universal family $M_{\delta+1}\mapsto M_{\delta}$, represented by some modified family invariant. Unlike the universality theorem, which requires $L$ to be $5\delta-1$ very ample, the above result does not guarantee the deformation invariance of these virtual numbers of nodal curves. \subsubsection{The Vanishing Result for $K3$ or $T^4$ and the Yau-Zaslow Formula}\label{subsubsection; vanishing} When we apply the residual intersection theory of type $II$ curves to the universal families of $K3$ or $T^4$, we get a surprising vanishing result, discussed briefly in [Liu7]. Recall that the universality theorem asserts that for a general algebraic surface $M$ and a $5\delta-1$ very ample $L$, the number of $\delta$-node nodal curves in a generic $\delta$ dimensional sub-linear system of $|L|$ can be expressed as a degree $\delta$ universal polynomial in $c_1^2(K_M)$, $c_1(K_M)\cdot c_1(L)$, $c_1(L)^2$, $c_2(M)$. When we consider $M=K3$ (or $T^4$), $K_M$ is trivial and the universal polynomial reduces to a polynomial in $c_1(L)^2$ and $c_2(M)$. On the other hand, for rational nodal curves the self-intersection number $c_1(L)^2$ is constrained by the adjunction formula to $c_1(L)^2=2\delta-2$. So the universal polynomial reduces to a degree $\delta$ polynomial in $c_2(M)$.
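For the reader's convenience, we recall the standard adjunction computation behind this constraint. On a surface with trivial canonical bundle, a curve $C\in |L|$ has arithmetic genus $$p_a(C)=1+{c_1(L)^2+c_1({\bf K}_M)\cdot c_1(L)\over 2}=1+{c_1(L)^2\over 2}.$$ A $\delta$-node nodal curve of geometric genus zero has arithmetic genus $\delta$, so $\delta=1+{c_1(L)^2\over 2}$, i.e. $c_1(L)^2=2\delta-2$.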
On the other hand, the well known Yau-Zaslow formula [YZ] asserts that when we put all the numbers $n_{\delta}$ of $\delta$-node ($\delta\in {\bf N}$) nodal curves into a generating function, it can be identified with the power series $$1+\sum_{\delta\in {\bf N}}n_{\delta}q^{\delta}=\{{1\over \prod_{i\geq 1} (1-q^i)}\}^{c_2(M)}.$$ For $M=K3$ we have $c_2(M)=24$ and the series begins $1+24q+324q^2+3200q^3+\cdots$. The following vanishing result implies that the type $II$ exceptional curves within the universal family contribute nothing to the family invariant \noindent ${\cal AFSW}_{M_{n+1}\times \{t_L\}\mapsto M_n\times \{t_L\}}( 1, c_1(L)-\sum 2E_i)$. \begin{theo}\label{theo; zero} The virtual number of $\delta$-node nodal curves in a linear sub-system of $|L|$ on an algebraic $K3$ surface is equal to ${1\over \delta!}{\cal AFSW}_{M_{n+1}\times \{t_L\}\mapsto M_n\times \{t_L\}}^{\ast}(1, c_1(L)-2\sum_{i\leq \delta}E_i)$, the normalized type $I$ modified algebraic family Seiberg-Witten invariant defined in section 5.2, remark 12 of [Liu6]. \end{theo} \noindent Sketch of the Proof of theorem \ref{theo; zero}: We apply the above scheme to both type $I$ and type $II$ exceptional classes on the universal families $M_{n+1}\mapsto M_n$. Thus $\delta!\times$ the virtual number of $\delta$-node nodal curves in a linear sub-system of $|L|$ is equal to ${\cal AFSW}_{M_{n+1}\times \{t_L\}\mapsto M_n\times \{t_L\}}( 1, c_1(L)-\sum 2E_i)$ minus the correction terms from both type $I$ and type $II$ exceptional classes. Each correction term is a modified invariant attached to $[{\cal M}_{e_{k_1}, e_{k_2}, \cdots, e_{k_p}; e_{II; 1}, \cdots, e_{II; p'}}]_{vir}$. On the other hand, we may divide the collections of exceptional classes into two subsets. The first subset consists of all the collections entirely of type $I$ exceptional classes; the second subset consists of all the collections not entirely of type $I$ exceptional classes, i.e. the collections consisting either of type $II$ exceptional classes only or of both type $I$ and type $II$ exceptional classes.
Independently of the details of the $\tau$ class and of $Y(\Gamma)$ or $[{\cal M}_{e_{II; 1}, \cdots, e_{II; p}}]_{vir}$, by theorem \ref{theo; degenerate} the important characteristic of a mixed invariant involving one or more type $II$ curves is the additional insertion\footnote{Recall that $p_g=1$ for $K3$ surfaces.} of $c_1({\cal R}^2\pi_{\ast}{\cal O}_{M_{ \delta+1}})$ into the family invariant. On the other hand, there is the following commutative diagram, \[ \begin{array}{ccc} M_{\delta+1} & \mapsto & M_{\delta}\\ \Big\downarrow & & \Big\downarrow \\ M\times M_{\delta} & \mapsto & M_{\delta} \end{array} \] and the push-forward of $M_{\delta+1}\mapsto M_{\delta}$ factors through $M\times M_{\delta}\mapsto M_{\delta}$. So ${\cal R}^2\pi_{\ast}{\cal O}_{M_{ \delta+1}}$ can be identified with ${\cal O}_{M_{\delta}}\otimes H^2(M, {\cal O}_M)$. In particular, this implies that $c_1({\cal R}^2\pi_{\ast}{\cal O}_{M_{ \delta+1}})=0$, and therefore all the mixed family invariants involving one or more type $II$ classes are identically zero. As all the modified invariants are defined inductively by differences of the mixed invariants, all the modified family invariants involving type $II$ exceptional curves vanish on the universal families of $K3$. Therefore all the correction terms come from the type $I$ exceptional curves, and $\delta!\times$ the virtual number of nodal curves collapses to the type $I$ modified family invariant $${\cal AFSW}_{M_{n+1}\times \{t_L\}\mapsto M_n\times \{t_L\}}^{\ast}(1, c_1(L)-2\sum_{i\leq \delta}E_i).$$ The theorem is proved. $\Box$ \end{document}
\begin{document} \begin{abstract} The chromatic threshold $\delta_\chi(H,p)$ of a graph $H$ with respect to the random graph $G(n,p)$ is the infimum over $d > 0$ such that the following holds with high probability: the family of $H$-free graphs $G \subset G(n,p)$ with minimum degree $\delta(G) \ge dpn$ has bounded chromatic number. The study of $\delta_\chi(H) := \delta_\chi(H,1)$ was initiated in 1973 by Erd\H{o}s and Simonovits. Recently $\delta_\chi(H)$ was determined for all graphs $H$. It is known that $\delta_\chi(H,p) = \delta_\chi(H)$ for all fixed $p \in (0,1)$, but that typically $\delta_\chi(H,p) \ne \delta_\chi(H)$ if $p = o(1)$. Here we study the problem for sparse random graphs. We determine $\delta_\chi(H,p)$ for most functions~$p = p(n)$ when $H\in\{K_3,C_5\}$, and also for all graphs $H$ with $\chi(H) \not\in \{3,4\}$. \end{abstract} \title{Chromatic thresholds in sparse random graphs} \section{Introduction} An important recent trend in combinatorics has been the formulation and proof of so-called `sparse random analogues' of many classical extremal and structural results. For example, Kohayakawa, \L uczak and R\"odl~\cite{KLRroth} conjectured almost twenty years ago that Szemer\'edi's theorem on $k$-term arithmetic progressions should hold in a $p$-random subset of $\{1,\ldots,n\}$ if $p \gg n^{-1/(k-1)}$, and that the Erd\H{o}s-Stone theorem should hold in the Erd\H{o}s-R\'enyi random graph $G(n,p)$ if $p \gg n^{-1/m_2(H)}$, where $m_2(H)$ is defined to be the maximum of $\frac{e(F) - 1}{v(F) - 2}$ over all subgraphs $F \subset H$ with $v(F) \ge 3$. The study of these conjectures recently culminated in the extraordinary breakthroughs of Conlon and Gowers~\cite{ConGow} and Schacht~\cite{SchTuran}, who resolved these conjectures (and many others), and in the development of the so-called `hypergraph container method', see~\cite{BMS,ST}. 
In this paper we study a sparse random analogue of the \emph{chromatic threshold} $\delta_\chi(H)$ of a graph $H$, which is defined to be the infimum over $d > 0$ such that every $H$-free graph on $n$ vertices with minimum degree at least $d n$ has bounded chromatic number. The study of chromatic thresholds was initiated in 1973 by Erd\H{o}s and Simonovits~\cite{ES73}, who were motivated by Erd\H{o}s' famous probabilistic proof~\cite{Erd59} that there exist graphs with arbitrarily high girth and chromatic number. Building on work of (amongst others) \L uczak and Thomass\'e~\cite{LT} and Lyle~\cite{Lyle10}, the chromatic threshold of every graph $H$ was finally determined in~\cite{ChromThresh}, where it was proved that \begin{equation}\label{eq:CT:thm} \delta_\chi(H) \, \in \, \bigg\{ \frac{r-3}{r-2}, \, \frac{2r-5}{2r-3}, \, \frac{r-2}{r-1} \bigg\} \end{equation} for every graph $H$ with chromatic number $\chi(H) = r \ge 2$. For example, $\delta_\chi(K_3) = 1/3$ and $\delta_\chi(C_{2k+1}) = 0$ for every $k \ge 2$, as was first proved by Thomassen~\cite{Thomassen02,Thomassen07}. We will study the following analogue of $\delta_\chi(H)$ in $G(n,p)$. \begin{defn}\label{def:deltachip} Given a graph $H$ and a function $p = p(n) \in [0,1]$, define \begin{align*} & \delta_\chi\big( H, p \big) \, := \, \inf \Big\{ d > 0 \,:\, \text{there exists $C > 0$ such that the following holds} \\ & \hspace{5cm} \text{with high probability: every $H$-free spanning subgraph } \\ & \hspace{6.5cm} G \subset G(n,p) \textup{ with } \delta(G) \ge d pn \textup{ satisfies } \chi(G) \le C \Big\}. \end{align*} We call $\delta_\chi\big(H,p\big)$ the \emph{chromatic threshold of $H$ with respect to $p$}. \end{defn} Note that $\delta_\chi(H) = \delta_\chi(H,1)$, so this definition generalises that of Erd\H{o}s and Simonovits. We emphasise that the constant $C$ is allowed to depend on the graph $H$, the function $p$ and the number $d$, but not on the integer $n$.
We also note that if, for some $d$, with high probability there is no spanning $H$-free subgraph of $G(n,p)$ whose average degree exceeds $dpn$, then vacuously we have $\delta_\chi(H,p)\leqslant d$. In a companion paper~\cite{dense} we showed that $\delta_\chi( H, p ) = \delta_\chi(H)$ for all fixed $p > 0$, and made some progress on determining $\delta_\chi( H, p )$ in the case $p = n^{-o(1)}$. Here we begin the investigation of $\delta_\chi( H, p )$ for sparser random graphs. In particular, we will prove the following theorem, which determines $\delta_\chi( H, p )$ precisely for a large class of graphs and essentially all functions $p = p(n)$. We will write $\pi(H)$ for the Tur\'an density $1 - \frac{1}{\chi(H) - 1}$ of a graph $H$. \begin{theorem}\label{thm:classhigh} Let $H$ be a graph with $\chi(H) \not\in \{3,4\}$. Then \begin{equation}\label{eq:thm:classhigh} \delta_{\chi}(H,p) = \left\{ \begin{array}{cll} \delta_\chi(H) & \text{if } & p > 0 \text{ is constant,} \\ \pi(H) & \text{if } & n^{-1/m_2(H)} \ll p \ll 1, \\ 1 & \text{if } & \frac{\log n}{n} \ll p \ll n^{-1/m_2(H)}. \end{array} \right. \end{equation} The same moreover holds for all graphs $H$ with $\chi(H) = 4$ and $m_2(H) \ge 2$. \end{theorem} We remark that~\eqref{eq:thm:classhigh} does \emph{not} hold for all $3$-chromatic graphs; for example, it does not hold when~$H= C_{2k+1}$ for any $k \ge 2$, see Theorems~\ref{thm:C5} and~\ref{thm:Clong} below. We suspect that it does not hold for every $4$-chromatic graph, though we do not have a counter-example, see Proposition~\ref{prop:inf42} and Conjecture~\ref{conj:chi4}. Note that if $p \ll \frac{\log n}{n}$ then $\delta_{\chi}(H,p) = 0$, since with high probability $G(n,p)$ has an isolated vertex, so the condition in the definition holds vacuously. We also remark that the proof of Theorem~\ref{thm:classhigh} uses a general probabilistic lemma (Lemma~\ref{lem:EF}), which appears to be new, and may be of independent interest.
For graphs $H$ with $\chi(H) = 3$, it was shown in~\cite{dense} that the situation is significantly more complicated, even in the case $p = n^{-o(1)}$. We will therefore restrict ourselves to studying one specific family of particular interest, namely the odd cycles. (For a brief discussion of more general 3-chromatic graphs, see Section~\ref{sec:open}.) For $K_3$ we will show that the pattern is the same as in Theorem~\ref{thm:classhigh}. \begin{theorem}\label{thm:K3} We have \begin{equation*} \delta_{\chi}(K_3,p) = \left\{ \begin{array}{cll} \tfrac{1}{3} & \text{if } & p > 0 \text{ is constant} \\ \tfrac{1}{2} & \text{if } & n^{-1/2} \ll p \ll 1 \\ 1 & \text{if } & \frac{\log n}{n} \ll p \ll n^{-1/2}. \end{array} \right. \end{equation*} \end{theorem} For $C_5$, on the other hand, the behaviour is more complex. \begin{theorem}\label{thm:C5} We have \begin{equation*} \delta_{\chi}(C_5,p) = \left\{ \begin{array}{cll} 0 & \text{if } & p > 0 \text{ is constant} \\ \tfrac{1}{3} & \text{if } & n^{-1/2} \ll p \ll 1 \\ \tfrac{1}{2} & \text{if } & n^{-3/4} \ll p \ll n^{-1/2} \\ 1 & \text{if } & \frac{\log n}{n} \ll p \ll n^{-3/4}. \end{array} \right. \end{equation*} \end{theorem} We believe that this is the first example of a random analogue of a natural extremal result which has been shown to exhibit several non-trivial phase transitions. Finally, for longer odd cycles we will obtain only the following partial description. \begin{theorem}\label{thm:Clong} For every $k \ge 3$, we have \begin{equation*} \delta_{\chi}(C_{2k+1},p) = \left\{ \begin{array}{cll} 0 & \text{if } & p \gg n^{-1/2} \\ \frac{1}{2} & \text{if } & n^{-(2k-1)/2k} \ll p \ll n^{-(2k-3)/(2k-2)} \\ 1 & \text{if } & \frac{\log n}{n} \ll p \ll n^{-(2k-1)/2k}. \end{array} \right. 
\end{equation*} \end{theorem} We suspect that $\delta_{\chi}(C_{2k+1},p) = 0$ for all $p \gg n^{-(k-2)/(k-1)}$, see Section~\ref{sec:open}, and that the techniques introduced in this paper could potentially be extended to prove this conjecture. However, we do not know what value to expect for $\delta_{\chi}(C_{2k+1},p)$ in the remaining range. Finally, we remark that Theorem~\ref{thm:Clong} proves a particular case of a much more general conjecture (see~\cite[Conjecture~1.6]{dense}) which states that, in the range $p = n^{-o(1)}$, the 3-chromatic graphs $H$ for which $\delta_\chi(H,p) = 0$ are exactly the `thundercloud-forest graphs', see Definition~\ref{def:cloudforest}. Together with the results of this paper and those in~\cite{dense}, this conjecture (if true) would completely characterise $\delta_\chi(H,p)$ in this range. \subsection*{Organisation of the paper} We begin, in Section~\ref{sec:rob}, by establishing that $G(n,p)$ contains a small subgraph with high girth, chromatic number and expansion (see Proposition~\ref{prop:rob}). This result will be used in the lower bound constructions for Theorems~\ref{thm:classhigh}, \ref{thm:K3}, and \ref{thm:C5}. In Section~\ref{sec:classhigh} we prove Theorem~\ref{thm:classhigh} and construct an infinite family of $4$-chromatic graphs~$H$ with $m_2(H)<2$ (see Proposition~\ref{prop:inf42}). In Section~\ref{sec:lower} we provide the lower bound constructions for Theorems~\ref{thm:K3} and~\ref{thm:C5}. We remark that some of these constructions are given in a more general framework and reused in~\cite{dense}. In Section~\ref{sec:upper} we prove the remaining upper bounds that imply Theorems~\ref{thm:K3}, \ref{thm:C5}, and~\ref{thm:Clong}. We conclude with some open problems in Section~\ref{sec:open}. We remark that we often omit floors and ceilings whenever this does not affect the argument. 
\section{Small subgraphs of $G(n,p)$ with large girth and good expansion}\label{sec:rob} In this section we will prove the following proposition, which will be a key tool in the proof of Theorem~\ref{thm:classhigh}, and in the lower bound constructions in Theorems~\ref{thm:K3} and~\ref{thm:C5}. \begin{prop}\label{prop:rob} For every $k \in \N$ and $\eps > 0$, and every function $n^{-1/2} \ll p = o(1)$, the following holds with high probability: there exists a subgraph $G \subset G(n,p)$ such that \[v(G) \leqslant \eps / p\,, \qquad \chi(G) \ge k \qquad \text{and} \qquad \textup{girth}(G) \ge k\,,\] and moreover, for every pair $A,B \subset V(G)$ of disjoint vertex sets of size at least $\eps |G|$, the graph $G[A,B]$ contains an edge. \end{prop} Observe that this result would be easy if $v(G)\leqslant\eps/p$ were replaced with $v(G)\leqslant K/p$ for some large constant $K$, because $G(K/p,p)$ with high probability has the desired properties. So the difficulty is to obtain a much smaller subgraph with these properties. Similarly, a random graph on $m:=\eps/p$ vertices with $k^2m$ edges has a subgraph with the desired properties (see Lemma~\ref{lemma:Erdos}). This is denser than our $G(n,p)$ but we will show that with high probability some set of $m$ vertices contains exactly $k^2 m$ edges (see Lemma~\ref{lem:var}). However, this is not enough to conclude that $G(n,p)$ contains a subgraph with the desired properties. Indeed, writing $E_A$ for the event that $A$ contains exactly $k^2 |A|$ edges, and $F$ for the event that the desired subgraph $G$ exists, we just argued that $$\Pr\bigg( \bigcup_{|A| = m} E_A \bigg) = 1 - o(1) \qquad \text{and} \qquad \Pr\big( F \,|\, E_A \big) = 1 - o(1) \quad \text{for every $|A| = m$.}$$ Now suppose that each $E_A$ were the disjoint union of $F$ and $E'_A$, where the $E'_A$ are disjoint events, with $\Pr(E'_A) \ll \Pr(F)$ for each $A$, but $\Pr(F) \ll \sum_{|A| = m} \Pr(E'_A)$.
In other words, each event $F|E_A$ is very likely for the same reason. If this were the case then $F$ would be a very unlikely event. In this scenario, however, the random variable $X$ counting the number of $E_A$ which occur is not concentrated near its expectation, that is, $\textup{Var}(X)$ is not small compared to $\Ex[X]^2$. We will show in Lemma~\ref{lem:var} that in reality that is not the case. This combined with the following lemma suffices to prove Proposition~\ref{prop:rob}. We formulate this lemma for a general setup and believe it is likely to have other applications. \begin{lemma}\label{lem:EF} Let $E_1,\ldots,E_t$ and $F$ be events with non-zero probability, and let $X = \sum_{j=1}^t \mathbbm{1}{[E_j]}$. Suppose that for some $\delta\ge0$ we have $\textup{Var}(X) \leqslant \delta \Ex[X]^2$ and $\Pr\big( F \,|\, E_j \big) \ge 1 - \delta$ for each $j\in[t]$. Then $\Pr(F) \ge 1-6\delta$. \end{lemma} \begin{proof} By Chebyshev's inequality, we have $\Pr\big(X\leqslant\tfrac12 \Ex[X]\big)\leqslant 4\delta$. Therefore, if we define $$Q \, := \, F^c \cap \big\{ X >\tfrac12 \Ex[X] \big\},$$ it will suffice to prove that $\Pr(Q) \leqslant 2\delta$. So suppose that to the contrary we have $\Pr(Q) > 2\delta$. Observe that \[\sum_{j=1}^t \Pr( E_j \cap Q ) \,=\, \sum_{\omega\in Q} \Pr(\omega) \sum_{j=1}^t \mathbbm{1}{[\omega \in E_j]} \, > \, \Pr(Q)\cdot\tfrac12 \Ex[X] \, > \, \delta \cdot \sum_{j=1}^t \Pr( E_j )\,,\] where the first inequality follows from the definition of $Q$. By the pigeonhole principle, it follows that $\Pr( E_j \cap Q ) > \delta \cdot \Pr(E_j)$ for some $j \in [t]$. But then \[\delta \,<\, \frac{\Pr( E_j \cap Q )}{ \Pr(E_j)} \,=\, \Pr\big( Q \,|\, E_j \big) \, \leqslant \, \Pr\big( F^c \,|\, E_j \big)\,,\] which contradicts our assumption $\Pr\big(F\,|\,E_j\big)\ge 1-\delta$.
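To spell out the accounting behind the constant $6\delta$, which is implicit in the reduction above: since $Q = F^c \cap \big\{ X > \tfrac12 \Ex[X] \big\}$,

```latex
\[
\Pr(F^c) \, \leqslant \, \Pr(Q) \, + \, \Pr\big( X \leqslant \tfrac12 \Ex[X] \big) \, \leqslant \, 2\delta + 4\delta \, = \, 6\delta.
\]
```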
\end{proof} In order to deduce Proposition~\ref{prop:rob} from Lemma~\ref{lem:EF}, we need to bound the variance of the number of $m$-sets $A$ with exactly $k^2 m$ edges of $G(n,p)$, and show that such a set is likely to contain a graph $G$ as in the statement of the proposition. We begin with the latter claim, which follows from a standard argument, originally due to Erd\H{o}s~\cite{Erd59}. \begin{lemma}\label{lemma:Erdos} For all sufficiently large $k$ the following holds. Let $H$ be chosen uniformly from the set of graphs with $m$ vertices and $k^2 m$ edges. Then, with high probability as $m\to\infty$, there exists a spanning subgraph $G \subset H$ with $$\chi(G) \ge k \qquad \text{and} \qquad \textup{girth}(G) \ge k,$$ and moreover, for every pair $A,B \subset V(H)$ of disjoint vertex sets with $|A|,|B| \ge m / 4k$, the graph $G[A,B]$ contains an edge. \end{lemma} \begin{proof} Given vertex-disjoint sets $A$ and $B$, each of size exactly $m/4k$, since $e\big( H[A,B] \big)$ is a hypergeometrically distributed random variable with expected value $$|A| \cdot |B| \cdot k^2 m \cdot \binom{m}{2}^{-1} \ge \, \frac{m}{16}\,,$$ we have (see for example~\cite[Theorem~2.10]{JLRbook}) \[\Pr\big(e(H[A,B])\leqslant\tfrac{m}{32}\big)\leqslant e^{-m/200}\,.\] By the union bound, the probability that there exist such sets $A$ and $B$ with $e\big(H[A,B]\big)\leqslant m/32$ is at most \[\binom{m}{m/4k}^2e^{-m/200}\leqslant e^{-m/300},\] where the inequality holds since $k$ is sufficiently large. Now let $Z$ denote the number of cycles in $H$ of length at most $k$.
Then \[\Ex[Z] \, \leqslant \, \sum_{\ell = 3}^k m^\ell \binom{\binom{m}{2} - \ell}{k^2 m - \ell} \binom{\binom{m}{2}}{k^2 m}^{-1} \leqslant \, \sum_{\ell = 3}^k m^\ell \bigg( \frac{2k^2}{m-1} \bigg)^\ell \leqslant \, (2k^2)^{k+1}\,.\] By Markov's inequality, it follows that with high probability we have $Z\leqslant\tfrac{m}{64}$. Suppose then that this bound holds, and also that $e\big(H[A,B]\big) > m/32$ for every pair $A,B$ as above. By removing one edge from each cycle of length at most $k$, we obtain a subgraph $G$ with $\textup{girth}(G) \ge k$; moreover, since at most $Z \leqslant \tfrac{m}{64}$ edges are removed, for every pair $A,B \subset V(H)$ of disjoint vertex sets with $|A|,|B| \ge m / 4k$, the graph $G[A,B]$ still contains an edge. Then $G$ has no independent set of size $m/k$, so $\chi(G)\ge k$. \end{proof} \begin{lemma}\label{lem:var} Let $0 < \eps < 1$ and $k \in \N$, and suppose that $p = p(n)$ satisfies $n^{-1/2}\ll p \ll 1$. Let $X$ be the random variable that counts the number of sets $A\subset [n]$ with $|A| = m = \eps / p$ such that $e\big(G(n,p)[A]\big)=k^2 m$. Then $$\textup{Var}( X ) \, \leqslant \, \frac{Cm^2}{n} \cdot \Ex[X]^2,$$ where $C = e^{O( k^4 / \eps)}$. \end{lemma} \begin{proof} Let $s=k^2 m$. Observe that the expectation of $X$ is \begin{equation}\label{eq:varX} \Ex[X]=\binom{n}{m} \binom{\binom{m}{2}}{s} p^s (1 - p)^{\binom{m}{2} - s}\,. \end{equation} We now calculate $\Ex[X^2]$. We need to bound the expected number of pairs $(A,B)$ of sets of size $m$, each with exactly $s = k^2 m$ edges. Let $\ell = |A \cap B|$ denote the size of the intersection of these sets, and let $t = e(A \cap B)$ denote the number of edges contained in both sets.
We have \[\Ex\big[ X^2 \big] \, = \, \sum_{\ell = 0}^m \binom{n}{\ell}\binom{n-\ell}{m-\ell}\binom{n-m}{m-\ell} \sum_{t = 0}^{\binom{\ell}{2}} \binom{\binom{\ell}{2}}{t} \binom{\binom{m}{2} - \binom{\ell}{2}}{s - t}^2 p^{2s - t} (1 - p)^{2\binom{m}{2} - \binom{\ell}{2} - 2s + t}\,.\] Note that $\binom{n}{m-\ell}\leqslant\big(\frac{m}{n-m}\big)^\ell\binom{n}{m}$, since $2m \leqslant n$, and that $\binom{\binom{m}{2}-\binom{\ell}{2}}{s-t} \leqslant \binom{\binom{m}{2}}{s-t}\leqslant\big(\frac{s}{\binom{m}{2}-s}\big)^t\binom{\binom{m}{2}}{s}$. Thus $$\binom{n-\ell}{m-\ell}\binom{n-m}{m-\ell}\leqslant \binom{n}{m-\ell}^2 \leqslant \left( \frac{2m}{n} \right)^{2\ell} \binom{n}{m}^2 \quad \text{and} \quad \binom{\binom{m}{2} - \binom{\ell}{2}}{s - t} \leqslant \left( \frac{4s}{m^2} \right)^t \binom{\binom{m}{2}}{s}\,,$$ and hence \begin{align*} \Ex\big[ X^2 \big] & \, \leqslant \, \sum_{\ell = 0}^m \binom{n}{\ell} \left( \frac{2m}{n} \right)^{2\ell}\binom{n}{m}^2 \sum_{t = 0}^{\binom{\ell}{2}} \binom{\binom{\ell}{2}}{t} \left( \frac{4s}{m^2} \right)^{2t}\binom{\binom{m}{2}}{s}^2 p^{2s - t} (1 - p)^{2\binom{m}{2} - \binom{\ell}{2} - 2s + t}\\ & \, \eqByRef{eq:varX} \, \Ex[X]^2\sum_{\ell = 0}^m \binom{n}{\ell} \left( \frac{2m}{n} \right)^{2\ell} \sum_{t = 0}^{\binom{\ell}{2}} \binom{\binom{\ell}{2}}{t} \left( \frac{4s}{m^2} \right)^{2t} p^{-t} (1 - p)^{-\binom{\ell}{2} + t}\,.
\end{align*} Substituting $\binom{n}{\ell} \leqslant \big( \frac{en}{\ell} \big)^\ell$ and $\binom{\binom{\ell}{2}}{t} \leqslant \big( \frac{e\ell^2}{2t} \big)^t$, and recalling that $p\ell \leqslant pm = \eps$, and thus $(1-p)^{-\ell}\leqslant e^{2p\ell}=O(1)$, we have \begin{align*} \Ex[X^2] & \, \leqslant \, \Ex[X]^2\sum_{\ell = 0}^m \bigg(\frac{en}{\ell} \cdot \frac{4m^2}{n^2} \bigg)^{\ell} \sum_{t = 0}^{\binom{\ell}{2}} \bigg( \frac{e\ell^2}{2t} \cdot \frac{16s^2}{m^4} \bigg)^{t} p^{-t} (1 - p)^{-\binom{\ell}{2} + t}\\ & \, \leqslant \, \Ex[X]^2 \sum_{\ell = 0}^m e^{O(\ell)} \bigg(\frac{m^2}{n\ell} \bigg)^{\ell} \sum_{t = 0}^{\binom{\ell}{2}} \bigg( \frac{8es^2\ell^2}{m^4tp} \bigg)^{t}\,. \end{align*} Since $(x / t)^t$ is maximised when $t = x / e$, and therefore has maximum $e^{x/e}$, it follows that \[ \Ex[X^2] \, \leqslant \, \Ex[X]^2\sum_{\ell = 0}^m e^{O(\ell)} \bigg(\frac{m^2}{n\ell} \bigg)^{\ell} \exp\bigg(\frac{8s^2\ell^2}{m^4 p} \bigg). \] Now, since $\ell \leqslant m$, $pm = \eps$ and $s = k^2 m$, we obtain $$\Ex[X^2] \, \leqslant \, \Ex[X]^2\sum_{\ell = 0}^m e^{O(\ell)} \bigg(\frac{m^2}{n\ell} \bigg)^{\ell} \exp\bigg(\frac{8k^4 \ell}{\eps} \bigg) \leqslant \, \Ex[X]^2\sum_{\ell = 0}^m \bigg( \frac{C' m^2}{n\ell} \bigg)^{\ell}$$ for some $C' = e^{O( k^4 / \eps)}$. This last sum is bounded above by a geometric series with ratio $C' m^2/n$, which, since $m = \eps/p$ and by our choice of $p$, is smaller than $\tfrac12$. The sum is therefore bounded above by $1 + 2C'm^2 / n$, as desired. \end{proof} We can now easily deduce Proposition~\ref{prop:rob}. \begin{proof}[Proof of Proposition~\ref{prop:rob}] Given $\eps>0$, we can assume $k\ge4/\eps$ is sufficiently large.
Set $m = \eps / p$, define $E_A$ to be the event that $G(n,p)[A]$ contains exactly $k^2|A|$ edges, and let $F$ denote the event that there exists a subgraph $G \subset G(n,p)$ such that $v(G) \leqslant \eps / p$, $\chi(G) \ge k$ and $\textup{girth}(G) \ge k$, and moreover, for every pair $A,B \subset V(G)$ of disjoint vertex sets of size at least $\eps |G|$, the graph $G[A,B]$ contains an edge. We are required to prove that $\Pr(F) = 1 - o(1)$. But this follows from Lemma~\ref{lem:EF} since the conditions of this lemma are satisfied for some $\delta=o(1)$. Indeed, since $n^{-1/2} \ll p = o(1)$, we have $\textup{Var}(X)=o\big( \Ex[X]^2\big)$ by Lemma~\ref{lem:var}, where $X = \sum_{|A| = m} \mathbbm{1}{[E_A]}$, and by Lemma~\ref{lemma:Erdos} we have $\Pr\big( F \,|\, E_A \big) = 1 - o(1)$. \end{proof} \section{Proof of Theorem~\ref{thm:classhigh}}\label{sec:classhigh} To prove Theorem~\ref{thm:classhigh}, we will use some known results. The first is the so-called sparse random Erd\H{o}s--Stone theorem, which was originally conjectured by Kohayakawa, \L uczak and R\"odl~\cite{KLR}, and proved by Conlon and Gowers~\cite{ConGow} (for strictly balanced graphs $H$) and Schacht~\cite{SchTuran} (in general). \begin{theorem}[Conlon and Gowers, Schacht]\label{thm:CGS} For every graph $H$, every $\gamma > 0$, and every $p \gg n^{-1/m_2(H)}$, the following holds with high probability. For every $H$-free subgraph $G \subset G(n,p)$, we have $$e(G) \, \leqslant \, \bigg( 1 - \frac{1}{\chi(H)-1} + \gamma \bigg) p\binom{n}{2}.$$ In particular, $\delta_\chi(H,p) \leqslant \pi(H)$. \end{theorem} The bound on~$p$ in Theorem~\ref{thm:CGS} is sharp, as is shown by the following proposition. \begin{prop}\label{prop:verysmallp} For every graph $H$, every $\gamma > 0$, and every $\tfrac{\log n}{n} \ll p \ll n^{-1/m_2(H)}$, the following holds with high probability.
There exists an $H$-free spanning subgraph $G \subset G(n,p)$ with \[\delta(G) \ge \big( 1 - 2\gamma \big) pn \qquad \text{and} \qquad \chi(G)\ge\tfrac12\gamma^{-1/2}\,.\] In particular, $\delta_\chi(H,p) = 1$. \end{prop} For the proof we need the following lemma, which is a straightforward consequence of the celebrated `polynomial concentration inequality' of Kim and Vu~\cite{KV}\footnote{An anonymous referee pointed out to us that one can avoid the polynomial concentration machinery by using a very nice trick of Krivelevich~\cite{KriLarge}.}. Recall that a graph $F$ is said to be \emph{$2$-balanced} if $\frac{e(F')-1}{v(F')-2} \leqslant \frac{e(F)-1}{v(F)-2}$ for each subgraph $F' \subsetneq F$ with $v(F') \ge 3$. Given a graph $F$ and a vertex $v$, let $X_F(v)$ denote the random variable that counts the number of copies of $F$ in $G(n,p)$ that contain $v$. \begin{lemma}\label{lem:Fconc} Let $F$ be a $2$-balanced graph with $m_2(F) > 1$, and suppose that $p = p(n)$ satisfies $\tfrac{(\log n)^{2e(F)+5}}{n} \leqslant p \leqslant n^{-1/m_2(F)}$. Then \begin{equation}\label{eq:Fconc} \Pr\bigg( X_F(v) \ge \Ex\big[ X_F(v) \big] + \frac{pn}{\log n} \bigg) \, \leqslant \, \frac{1}{n^2} \end{equation} for all sufficiently large $n$. \end{lemma} In order to prove Lemma~\ref{lem:Fconc}, we will need to recall the main theorem of~\cite{KV}. For each set $S \subset E(K_n)$, let $E_F(S,v)$ denote the conditional expectation, given that $S \subset E\big( G(n,p) \big)$, of the number of copies of $F$ in $G(n,p)$ which contain $v$ and use all the edges in $S$. Let $E'_F(v) = \max_{S\neq\emptyset}E_F(S,v)$, and $E_F(v) = \max\big\{ E'_F(v), \, \Ex[X_F(v)] \big\}$.
The main theorem of~\cite{KV} implies\footnote{To obtain~\eqref{eq:KV}, we apply the general theorem stated in~\cite{KV} to the polynomial corresponding to the random variable $X_F(v)$, with $\lambda = 2 e(F) \log \binom n 2$.} that \begin{equation}\label{eq:KV} \Pr\Big( X_F(v) \ge \Ex\big[ X_F(v) \big] + (\log n)^{e(F)+1} \big( E_F(v)E'_F(v) \big)^{1/2} \Big) \, \leqslant \, n^{-2e(F)} \end{equation} if $n$ is sufficiently large. \begin{proof}[Proof of Lemma~\ref{lem:Fconc}] In order to apply~\eqref{eq:KV} we need to bound $E_F(S,v)$ from above for each non-empty set of edges $S$. We claim that, for each such $S$, we have \begin{equation}\label{eq:EFSv:bound} E_F(S,v) \, \leqslant \, pn(\log n)^{-2e(F)-4}. \end{equation} We now justify this claim. Given a non-empty set $S$ of edges of $K_n$, let $T$ be the set of vertices incident to the edges $S$. Observe that \begin{equation}\label{eq:Fcheck} E_F(S,v) \leqslant v(F)^{|T|}n^{v(F)-|T|}p^{e(F)-|S|}\,. \end{equation} Suppose first that $|T|=v(F)$. Then $E_F(S,v) \leqslant v(F)^{|T|} = O(1)$, from which~\eqref{eq:EFSv:bound} follows since $p \ge \tfrac{(\log n)^{2e(F)+5}}{n}$. On the other hand, since $F$ is $2$-balanced, if $2 \leqslant |T| < v(F)$ then we have $|S| \leqslant m_2(F) \big( |T| - 2 \big) + 1$. Since $e(F) = m_2(F)\big( v(F) - 2 \big) + 1$ and $m_2(F) > 1$, it follows that $$e(F) - |S| \, = \, m_2(F)\big( v(F) - |T| - 1 \big) + 1 + c$$ for some $c > 0$, and hence, from~\eqref{eq:Fcheck}, that $$E_F(S,v) \leqslant v(F)^{|T|} p^{1+ c} n \cdot \big( p^{m_2(F)} n \big)^{v(F) - |T| - 1} \, \leqslant \, pn (\log n)^{-2e(F)-4}$$ if $n$ is sufficiently large, by our choice of $p$. This proves~\eqref{eq:EFSv:bound}.
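For completeness, the positivity of $c$ can be read off from the two identities just used: substituting $|S| \leqslant m_2(F)\big( |T| - 2 \big) + 1$,

```latex
\[
e(F) - |S| \, \ge \, m_2(F)\big( v(F) - 2 \big) + 1 - m_2(F)\big( |T| - 2 \big) - 1 \, = \, m_2(F)\big( v(F) - |T| - 1 \big) + 1 + \big( m_2(F) - 1 \big),
\]
```

so $c \ge m_2(F) - 1 > 0$, since $m_2(F) > 1$.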
Now, to deduce~\eqref{eq:Fconc}, simply note that $\Ex[X_F(v)] \leqslant p^{e(F)} n^{v(F)-1} \leqslant pn$, since $p \leqslant n^{-1/m_2(F)}$ and $F$ is $2$-balanced. Thus, by~\eqref{eq:EFSv:bound}, we have $$E_F(v)E'_F(v) \, \leqslant \, (pn)^2 (\log n)^{-2e(F)-4},$$ and hence, by~\eqref{eq:KV}, we have $$\Pr\bigg( X_F(v) \ge \Ex\big[ X_F(v) \big] + \frac{pn}{\log n} \bigg) \, \leqslant \, n^{-2e(F)},$$ as required. \end{proof} We can now easily deduce Proposition~\ref{prop:verysmallp}. \begin{proof}[Proof of Proposition~\ref{prop:verysmallp}] Suppose first that $p \ge \tfrac{(\log n)^{2e(H) + 5}}{n}$. In this case let $F \subset H$ be a subgraph of $H$ with $\frac{e(F) - 1}{v(F) - 2} = m_2(H)$, and note that $F$ is 2-balanced, and that $m_2(F) = m_2(H) > 1$, since otherwise the statement is vacuous. Since $p \ll n^{-1/m_2(H)}$ implies that $\Ex\big[ X_F(v) \big] = o(pn)$, it follows by Lemma~\ref{lem:Fconc} that, with high probability, every vertex of $G(n,p)$ lies in at most $\gamma p n$ copies of $F$. On the other hand, if $p \leqslant \tfrac{(\log n)^{O(1)}}{n}$, then we may take $F$ to be an arbitrary cycle in $H$. (If $H$ is a forest then $m_2(H) \leqslant 1$ and again the proposition holds vacuously.) To estimate the number of pairs of copies of $F$ in $G(n,p)$ both containing $v$, observe that if two such copies of $F$ intersect in a set of edges $S$ and vertices $T$, then $T$ contains $v$ and thus we have $|S|\leqslant|T|-1$. Furthermore, $e(F)=v(F)$, and so the expected number of pairs of copies of $F$ in $G(n,p)$, both containing $v$, is at most $$\sum_{S \subsetneq E(F)} O\Big( p^{2e(F) - |S|} n^{2v(F) - |T| - 1} \Big) \, \leqslant \, \frac{(\log n)^{O(1)}}{n^2}\,.$$ Hence, by Markov's inequality, with high probability every vertex of $G(n,p)$ lies in at most one copy of $F$.
Now, by Chernoff's inequality, with high probability $G(n,p)$ has minimum degree at least $(1-\gamma)pn$, and each $X \subset V\big(G(n,p)\big)$ with $|X| \ge \gamma n$ contains more than $p|X|^2 / 4$ edges. If all three of these likely events occur, then we can construct the desired graph $G \subset G(n,p)$ simply by deleting one edge from each copy of $F$ in $G(n,p)$. To see that $G$ has the required properties, observe first that $G$ is $F$-free, and therefore $H$-free. Next, note that we have deleted at most $\gamma p n$ edges at each vertex of $G(n,p)$, so we have $\delta(G)\ge(1-2\gamma)pn$, as claimed. Finally, if $|X|\ge 2\sqrt{\gamma}n$ then, since at most $\gamma p n^2$ edges of $G(n,p)$ are deleted to obtain $G$, and $X$ contains more than $\gamma p n^2$ edges of $G(n,p)$, the set $X$ is not independent in $G$. It follows that $\chi(G)\ge \tfrac12\gamma^{-1/2}$ as desired. \end{proof} For constant~$p$, on the other hand, the chromatic threshold was determined in~\cite{dense}. \begin{theorem}\label{thm:pconst} For each constant $p > 0$ and graph $H$, we have $\delta_\chi(H,p) = \delta_\chi(H)$. \end{theorem} This together with the following lower bound on $\delta_\chi(H,p)$ establishes Theorem~\ref{thm:classhigh}. \begin{theorem}\label{thm:sparsetop} Let $r \ge 4$, $s \in \mathbb{N}$ and $\gamma > 0$. If $p = p(n)$ satisfies $n^{-1/2} \, \ll \, p \, \ll \, 1$ as $n \to \infty$, then with high probability $G(n,p)$ contains a spanning subgraph $G$ with $$\chi(G) \ge s \qquad \text{and} \qquad \delta(G) \ge \bigg( \frac{r-2}{r-1} - \gamma \bigg) pn,$$ such that every $s$-vertex subgraph of $G$ is $(r-1)$-colourable. In particular, $\delta_\chi(H,p) \ge \pi(H)$ for every graph $H$ with $\chi(H) \ge 4$. \end{theorem} Before proving this theorem, we first spell out the deduction of Theorem~\ref{thm:classhigh}.
\begin{proof}[Proof of Theorem~\ref{thm:classhigh}] If $p > 0$ is constant then the result follows from Theorem~\ref{thm:pconst}, and if $\frac{\log n}{n} \ll p \ll n^{-1/m_2(H)}$ then it follows from Proposition~\ref{prop:verysmallp}. Moreover, the upper bound in the range $p \gg n^{-1/m_2(H)}$ follows by Theorem~\ref{thm:CGS}. It remains to show that $\delta_\chi(H,p) \ge \pi(H)$ for every $n^{-1/m_2(H)} \ll p \ll 1$. If $H$ is bipartite then this is immediate, since $\pi(H) = 0$, so let us assume that $\chi(H) \ge 4$ and $m_2(H) \ge 2$. (Note that $m_2(H) > 2$ for every graph $H$ with $\chi(H) \ge 5$. Indeed, if $m_2(H) \leqslant 2$ then $e(F) \leqslant 2v(F) - 3$ for every subgraph $F \subset H$ with more than one vertex. In particular, this implies that every subgraph of $H$ has a vertex of degree at most 3, and hence $\chi(H) \leqslant 4$.) But now $n^{-1/m_2(H)} \ge n^{-1/2}$, and so the claimed bound follows from Theorem~\ref{thm:sparsetop}. \end{proof} Now we will use Proposition~\ref{prop:rob} to prove Theorem~\ref{thm:sparsetop}. \begin{proof}[Proof of Theorem~\ref{thm:sparsetop}] To construct $G$, we first partition the vertex set into sets $X$ and $Y$, with $|X| = n/(r-1)$, and expose the edges of $G(n,p)$ contained in $X$. By Proposition~\ref{prop:rob}, with high probability, there exists a subgraph $F$ on at most $\eps / p$ vertices with girth and chromatic number both at least $s+1$. We fix one such $F$. We next expose the edges of $G(n,p)$ between $V(F)$ and $Y$, and let $I_u = N(u)\cap Y$ for each $u \in V(F)$. Set $$V_1 = \big( X \setminus V(F) \big) \cup \bigcup_{u \in V(F)} I_u$$ and let $V_2 \cup \dots \cup V_{r-1}$ be an arbitrary equipartition of $Y \setminus \bigcup_{u \in V(F)} I_u$. Note that we have not yet revealed any pair between any $V_i$ and $V_j$ with $i \ne j$. We reveal them now.
We are now ready to define $G$ to be the spanning subgraph of $G(n,p)$ with edge set $$E(F) \cup \big\{ uv : u \in V(F), \, v \in I_u \big\} \cup \big\{ uv \in E(G(n,p)) : u \in V_i, \, v \in V_j, \, i \ne j \big\}.$$ We claim that, with high probability, $G$ has the desired properties. Indeed, note first that $\chi(G) \ge \chi(F) > s$. Next we (implicitly) use several Chernoff bounds to show that~$G$ has sufficiently high minimum degree. Indeed, since $p \gg n^{-1/2}$ and $v(F) \leqslant \eps / p$, with high probability we have $\big| \bigcup_{u \in V(F)} I_u \big| \leqslant pn \cdot v(F) \leqslant \eps n$, which implies $|V_i|\ge \frac{n}{r-1}-\eps n$. So with high probability every vertex $u\in V_i$ has at least $\big( \frac{r-2}{r-1} - \gamma \big) p n$ neighbours in $\bigcup_{j\neq i} V_j$ and every vertex $u\in V(F)$ has at least $\big( \frac{r-2}{r-1} - \gamma \big) p n$ neighbours in $Y$ (because the edges between $V(F)$ and~$Y$ were only revealed after fixing~$F$), as required. Finally, we claim that $\chi\big( G[W] \big) \leqslant r-1$ for every $s$-vertex subset $W \subset V(G)$. To show this, first colour $W \cap V_i$ with colour $i$ for each $1 \leqslant i \leqslant r-1$. Now, the remaining vertices $W \cap V(F)$ induce a forest in $G$, because $F$ has girth greater than $s$, and their neighbours in $G$ are all in $V_1$, by construction. We can therefore complete a proper colouring of $G[W]$ using only colours $2$ and $3$ inside $F$. Since $r \ge 4$, this completes the proof. \end{proof} We finish this section by noting that Theorem~\ref{thm:classhigh} does not cover all graphs of chromatic number four. \begin{prop}\label{prop:inf42} There exist infinitely many graphs $H$ with $\chi(H) = 4$ and $m_2(H) < 2$.
\end{prop} \begin{proof} We will construct a graph $G_0$, which we call a \emph{gadget} (see Figure~\ref{fig:Simon}), with 12 vertices and 19 edges, and which satisfies the following key property: there are (non-adjacent) vertices $x,y \in V(G_0)$ such that in any proper 3-colouring of $G_0$ the vertices $x$ and $y$ receive different colours. In other words, if we replace an edge $xy$ of a larger graph $G$ by a gadget on $xy$, then (when 3-colouring $G$) the gadget plays the same role as the edge. \begin{figure} \caption{The gadget~$G_0$} \label{fig:Simon} \end{figure} To define the gadget, set $V(G_0) = \{x,y\} \cup \{u_i, v_i : 1 \leqslant i \leqslant 5\}$ and \begin{align*} & E(G_0) \, = \, \big\{ u_1u_2, u_2u_3, u_3u_4, u_4u_5, u_5u_1 \big\} \cup \big\{ v_1v_2, v_2v_3, v_3v_4, v_4v_5, v_5v_1 \big\} \\ & \hspace{4cm} \cup \big\{ xu_1, xu_3, xv_1, xv_3 \big\} \cup \big\{ yu_2, yu_4, yv_2, yv_4 \big\} \cup \big\{ u_5v_5 \big\}. \end{align*} Now, let $c \colon V(G_0) \to \{1,2,3\}$ be a proper colouring, and suppose that $c(x) = c(y) = 1$. Then none of the vertices of $\{ u_i, v_i : 1 \leqslant i \leqslant 4 \}$ receives colour 1, so the path $u_1u_2u_3u_4$ is coloured alternately with the two remaining colours; since $u_5$ is adjacent to both $u_1$ and $u_4$, it follows that $c(u_5) = 1$, and similarly $c(v_5) = 1$, contradicting the edge $u_5v_5$. Now, given a graph $H$ with $\chi(H) = 4$, we can construct a graph $H'$ with $$\chi(H') = 4, \qquad v(H') = v(H) + 10 \qquad \text{and} \qquad e(H') = e(H) + 18,$$ simply by replacing any edge of $H$ by a copy of the gadget. Moreover, if $m_2(H) < 2$ then one easily checks that $m_2(H') < 2$. Thus, in order to construct the claimed infinite family of graphs, it suffices to construct a single example. In order to do so, consider the graph $H$ obtained by replacing every edge $xy$ of $K_4$ by a copy of the gadget. The resulting graph has $6\cdot 12-4\cdot 2=64$ vertices and $6\cdot 19=114$ edges. Moreover, it may be easily checked that $m_2(H) < 2$, as required.
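The key property of the gadget is a finite statement, so it can also be checked mechanically. The following standalone Python sketch (not part of the paper; the vertex and edge lists simply transcribe the definition of $G_0$ above) brute-forces all $3^{12}$ colourings:

```python
from itertools import product

# The gadget G_0: vertices x, y, u1..u5, v1..v5 (12 vertices).
V = ["x", "y"] + [f"u{i}" for i in range(1, 6)] + [f"v{i}" for i in range(1, 6)]

# The 19 edges, transcribed from the definition of E(G_0) above.
E = ([(f"u{i}", f"u{i % 5 + 1}") for i in range(1, 6)]    # the 5-cycle u_1...u_5
     + [(f"v{i}", f"v{i % 5 + 1}") for i in range(1, 6)]  # the 5-cycle v_1...v_5
     + [("x", "u1"), ("x", "u3"), ("x", "v1"), ("x", "v3")]
     + [("y", "u2"), ("y", "u4"), ("y", "v2"), ("y", "v4")]
     + [("u5", "v5")])

assert len(V) == 12 and len(E) == 19

# Brute-force all 3^12 = 531441 colourings of V(G_0).
proper_exists = False    # does G_0 admit any proper 3-colouring?
forces_distinct = True   # does every proper 3-colouring separate x and y?
for colours in product(range(3), repeat=len(V)):
    c = dict(zip(V, colours))
    if all(c[a] != c[b] for a, b in E):
        proper_exists = True
        if c["x"] == c["y"]:
            forces_distinct = False
```

Such a check confirms that $G_0$ is 3-colourable, but that no proper 3-colouring assigns $x$ and $y$ the same colour; that is, with respect to 3-colourings the gadget behaves exactly like an edge.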
\end{proof} \section{Lower bounds for Theorems~\ref{thm:K3} and~\ref{thm:C5}} \label{sec:lower} By Theorem~\ref{thm:pconst} we have that $\delta_\chi(K_3,p)=\frac13$ and $\delta_\chi(C_5,p)=0$ for any constant $p>0$, and by Proposition~\ref{prop:verysmallp} we have $\delta_\chi(H,p) = 1$ whenever $\tfrac{\log n}{n} \ll p \ll n^{-1/m_2(H)}$. In this section we will provide three constructions (see Propositions~\ref{prop:cloud:lower}, \ref{prop:thundercloud:lower} and~\ref{prop:C5:simon}) which imply the remaining lower bounds for $\delta_\chi(K_3,p)$ and $\delta_\chi(C_5,p)$ in Theorems~\ref{thm:K3} and~\ref{thm:C5}. However, since these constructions are also needed in~\cite{dense}, we shall, instead of giving them for $H=K_3$ and $H=C_5$, provide them in the following more general setting. \begin{defn}\label{def:cloudforest} A graph $H$ is a \emph{cloud-forest graph} if there is an independent set $I \subset V(H)$ (the \emph{cloud}) such that $V(H) \setminus I$ induces a forest $F$, the only edges from $I$ to $F$ go to leaves or isolated vertices of $F$, and no two adjacent leaves in $F$ send edges to $I$. Moreover, $H$ is a \emph{thundercloud-forest graph} if there is a cloud $I \subset V(H)$, which witnesses that $H$ is a cloud-forest graph, such that every odd cycle in $H$ uses at least two vertices of~$I$. \end{defn} Note that $K_3$ is not a cloud-forest graph, $C_5$ is a cloud-forest but not a thundercloud-forest graph, and $C_{2k+1}$ is a thundercloud-forest graph for every $k \ge 3$. The following proposition implies $\delta_\chi(K_3,p)\ge\frac12$ for $n^{-1/2} \ll p(n) = o(1)$. \begin{prop}\label{prop:cloud:lower} Let $H$ be a graph with $\chi(H) = 3$, and suppose that $n^{-1/2} \ll p(n) = o(1)$. If $H$ is not a cloud-forest graph, then $\delta_{\chi}(H,p) \ge \frac{1}{2}$. \end{prop} Similarly, the next proposition implies $\delta_\chi(C_5,p)\ge\frac13$ for $n^{-1/2} \ll p(n) = o(1)$. 
\begin{prop}\label{prop:thundercloud:lower} Let $H$ be a graph with $\chi(H) = 3$, and suppose that $n^{-1/2} \ll p(n) = o(1)$. If $H$ is not a thundercloud-forest graph, then $\delta_{\chi}(H,p) \ge \frac{1}{3}$. \end{prop} The constructions that prove these propositions are similar to (though somewhat more complicated than) that in the proof of Theorem~\ref{thm:sparsetop} and rely, again, on Proposition~\ref{prop:rob}. \begin{proof}[Proof of Proposition~\ref{prop:cloud:lower}] We will show that, for every $s \in \N$ and $\gamma > 0$, with high probability $G(n,p)$ contains a spanning subgraph $G$ with $$\chi(G) \ge s \qquad \text{and} \qquad \delta(G) \ge\bigg( \frac{1}{2} - \gamma \bigg)pn$$ such that every $s$-vertex subgraph of $G$ is a cloud-forest graph. This fact implies that $\delta_{\chi}(H,p) \ge 1/2$ for every graph $H$ that is not a cloud-forest graph. The construction of $G$ is similar to that in the proof of Theorem~\ref{thm:sparsetop}, except that we will need to ensure that the neighbourhoods (in $G$) of the vertices of $F$ are disjoint. To be precise, let us partition the vertex set into sets $X$ and $Y$, with $|X| = |Y| = n/2$, and expose the edges of $G(n,p)$ contained in $X$. With high probability, we obtain (by Proposition~\ref{prop:rob}) a subgraph $F$ on at most $\eps / p$ vertices (for some sufficiently small $\eps = \eps(\gamma) > 0$) with girth and chromatic number both at least $s+1$. Next, expose the edges of $G(n,p)$ between the vertices of $F$ and $Y$, and for each $u \in V(F)$, define $$I_u \, = \, \Big\{ v \in Y : N(v) \cap V(F) = \{ u \} \Big\},$$ and note that the sets $I_u$ are pairwise disjoint. Observe also that $|I_u|$ is a binomial random variable with expected size $|Y| \cdot p(1-p)^{v(F)-1} \ge \big( 1/2 - \eps \big) pn$, and therefore with high probability $|I_u| \ge (1/2 - \gamma) pn$ for every $u \in V(F)$.
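For completeness, the expectation bound can be verified in one line (a routine check using $v(F) \leqslant \eps/p$ and Bernoulli's inequality, spelled out here for the reader's convenience):

```latex
\E \big[ |I_u| \big] \, = \, |Y| \cdot p \, (1-p)^{v(F)-1}
\, \ge \, \frac{pn}{2} \big( 1 - p \, v(F) \big)
\, \ge \, \frac{pn}{2} \, ( 1 - \eps )
\, \ge \, \bigg( \frac{1}{2} - \eps \bigg) pn,
```

after which Chernoff's inequality and a union bound over the at most $\eps/p$ vertices of $F$ give the stated high-probability bound.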
Now let~$G$ be the graph with edge set $$E(F) \cup \big\{ uv : u \in V(F), \, v \in I_u \big\} \cup \big\{ uv \in E(G(n,p)) : u \in V_1, \, v \in V_2 \big\},$$ where $$V_1 = \big( X \setminus V(F) \big) \cup \bigcup_{u \in V(F)} I_u \qquad \textup{and} \qquad V_2 := Y \setminus \bigcup_{u \in V(F)} I_u.$$ Observe that $\chi(G) \ge \chi(F) > s$, and that $\delta(G) \ge \big( \frac{1}{2} - \gamma \big) p n$ with high probability. It remains to show that for any set $W \subset V(G)$ of at most $s$ vertices, $G[W]$ is a cloud-forest graph. To do so, we must find an independent set $I \subset W$ such that $G[W \setminus I]$ is a forest, and the only edges from $I$ to $W \setminus I$ go to non-adjacent leaves or isolated vertices of this forest. We claim that this holds for $I = W \cap V_2$, which is an independent set by the definition of~$G$. Indeed, observe first that $G[W\setminus I]$ is a forest, since $W\setminus I\subset V(F)\cup V_1$, the girth of $F$ is greater than $s$, and since each vertex of $V_1$ has at most one neighbour in $V(F)\cup V_1$. Now, every edge from $I$ meets $W \setminus I$ in $V_1$, and (as just noted) every vertex of $W \cap V_1$ is either an isolated vertex of $G[W\setminus I]$, or a leaf. Since $V_1$ is an independent set in $G$, all of these leaves are non-adjacent, and hence $G[W]$ is a cloud-forest graph, as required. \end{proof} In order to prove Proposition~\ref{prop:thundercloud:lower}, we will make use of the following construction of {\L}uczak and Thomass\'e~\cite{LT}, see also~\cite[Theorem~14]{ChromThresh}. \begin{lemma}[{\L}uczak and Thomass\'e]\label{lem:acycstruct} For each $s \in \N$ and $\gamma > 0$, there exists a graph $H$ with $$\delta(H) \ge \bigg( \frac{1}{3} - \gamma \bigg) \cdot |H|,$$ and a partition $V(H)=A \cup B \cup C$ with the following properties: \begin{itemize} \item[$(a)$] $|A| \leqslant \gamma |H|$ and $\chi\big( H[A] \big) \ge s$. \item[$(b)$] $B$ and $C$ are independent sets in $H$.
\item[$(c)$] $H[A,C]$ is empty and $H[B,C]$ is complete. \item[$(d)$] Every odd cycle in $H$ on $s$ or fewer vertices uses at least two vertices of $B$. \end{itemize} \end{lemma} \begin{proof}[Proof of Proposition~\ref{prop:thundercloud:lower}] We will show that, for every $s \in \N$ and $\gamma > 0$, with high probability $G(n,p)$ contains a spanning subgraph $G$ with $$\chi(G) \ge s \qquad \text{and} \qquad \delta(G) \ge\bigg( \frac{1}{3} - 3\gamma \bigg)pn$$ such that every $s$-vertex subgraph of $G$ is a thundercloud-forest graph. The existence of such a graph $G$ implies that the chromatic threshold with respect to $p$ of every graph that is not a thundercloud-forest graph is at least $1/3$, as required. We choose $\eps>0$ sufficiently small, and partition the vertex set into sets $X$ and $Y$, with $|X|=2n/3$ and $|Y|=n/3$. As before, we reveal the edges within $X$ and use Proposition~\ref{prop:rob} to find a subgraph $F$ with chromatic number and girth greater than $s$, on $\eps/p$ vertices, such that any two disjoint subsets of $V(F)$ of size at least $\eps|V(F)|$ have an edge joining them. As before, we reveal the edges from $V(F)$ to $Y$ and for each $u\in V(F)$ let $I_u\subset Y$ be the vertices of $Y$ whose unique neighbour in $V(F)$ is $u$. Now set, slightly differently than before, $$V_1 = X \setminus V(F) \qquad \text{and} \qquad V_2 = Y \setminus \bigcup_{u \in V(F)} I_u.$$ Let $H$ be the graph whose existence is guaranteed by Lemma~\ref{lem:acycstruct}, and $V(H) = A \cup B \cup C$ be the partition for which properties~$(a)$--$(d)$ hold. Let $$\phi \colon V(F) \cup V_1 \to A\cup B$$ be a function which maps $v(F) / |A|$ vertices of $F$ to each element of $A$ and $|V_1|/|B|$ vertices of $V_1$ to each element of $B$.\footnote{It is easy to see that we can adjust slightly the sizes of the sets so that all of these are integers. Alternatively, we may allow some elements to receive an extra vertex.} We are now ready to define $G$. 
It is the spanning subgraph of $G(n,p)$ with edge set \begin{multline*} \Big\{ uv \in E(F) : \phi(u)\phi(v) \in E(H) \Big\} \cup \Big\{ uv : u \in V(F), v \in I_u \Big\} \\ \cup \Big\{ vw \in E(G(n,p)) : v \in I_u \text{ for some } u \in V(F), \, w \in V_1, \, \phi(u)\phi(w) \in E(H) \Big\} \\ \cup \Big\{ uv \in E(G(n,p)) : u \in V_1, v \in V_2 \Big\}\,. \end{multline*} We claim that, with high probability, $$\delta(G) \, \ge \, \bigg( \frac{1}{3} - 3\gamma \bigg) p n.$$ Indeed, similarly as before, by several applications of a Chernoff bound, with high probability, every vertex $u \in V(F)$ satisfies $|I_u|\ge\big( \frac{1}{3} - \gamma \big)pn$ (and hence has at least this many neighbours), every vertex of $V_1$ has at least $\big(\tfrac13 - \gamma\big)pn$ neighbours in $V_2$, and every vertex of $V_2$ has at least $\big(\tfrac23 - \gamma\big)pn$ neighbours in $V_1$. Finally, if $u \in V(F)$ then, since $\delta(H) \ge \big( \frac13 - \gamma\big) v(H)$ and $|A|\leqslant\gamma|H|$ (and in~$H$ all neighbours of $\phi(u)$ are in $A\cup B$), we have $$\big| \big\{ w \in V_1 : \phi(u)\phi(w) \in E(H) \big\} \big| \, \ge \, \bigg( \frac13 - 2\gamma\bigg) n,$$ and hence every vertex $v \in I_u$ has at least $\big( \frac13 - 3\gamma \big)pn$ neighbours $w \in V_1$ such that $\phi(u)\phi(w) \in E(H)$. We next claim that $\chi(G) \ge \chi\big( G[V(F)] \big) \ge s$. Indeed, suppose that we colour $G[V(F)]$ with fewer than $s$ colours, and consider the colouring of $H[A]$ in which the vertex $a \in A$ receives a most common colour in $\phi^{-1}(a)$. This is a colouring of $H[A]$ with fewer than $s$ colours, so by Lemma~\ref{lem:acycstruct} there is a monochromatic edge $aa'$ of $H[A]$. Let $c$ be the colour of $a$ and $a'$, and observe that there exist sets $Z \subset \phi^{-1}(a)$ and $Z' \subset \phi^{-1}(a')$ of colour $c$, each of size at least $v(F)/(s|A|) > \eps v(F)$.
Thus, since~$F$ is the subgraph obtained from Proposition~\ref{prop:rob}, the graph $F[Z,Z']$ contains an edge, and since $aa' \in E(H)$, this edge is also present in $G$. It follows that our colouring of $G$ is not proper, and so $\chi\big( G[V(F)] \big) \ge s$, as claimed. Finally, we claim that $G[W]$ is a thundercloud-forest graph for every set $W \subset V(G)$ with $|W| = s$. To prove this, we need to show that there exists a cloud $I \subset W$ which witnesses that $G[W]$ is a cloud-forest graph, and is such that every odd cycle in $G[W]$ uses at least two vertices of $I$. We will show that this is the case for the set $$I = W \cap V_1.$$ Note first that $I$ is an independent set in $G[W]$, since $V_1$ is an independent set in $G$. Next, observe that $G[W \setminus I]$ is a forest, since the girth of $F$ is greater than $s$, and since each vertex of $W\setminus \big(I\cup V(F)\big)$ is either in $V_2$ and hence isolated in $G[W\setminus I]$, or is in some $I_u$ and so has at most one neighbour in $G[W\setminus I]$, namely~$u$. Moreover, in the latter case $u$ is not adjacent to any vertex of $I$, so that neighbours of $I$ are non-adjacent leaves of the forest $G[W\setminus I]$, and thus~$I$ is a cloud. Finally, we need to check that every odd cycle in $G[W]$ uses at least two vertices of $I$, so let $S$ be an odd cycle in $G[W]$. Note first that $S \not\subset V(F) \cup \bigcup_u I_u$ since $F$ has girth greater than $s$ and each vertex of $\bigcup_u I_u$ has only one neighbour in $V(F)$, and that every cycle containing a vertex of $V_2$ uses at least two vertices of $V_1$. Thus the only remaining case we need to rule out is that $S$ consists of one vertex $x \in V_1$, whose neighbours in $S$ are the vertices $y,z \in \bigcup_u I_u$, and a path $P$ with an even number of vertices in $F$ from $u$ to $v$, where $y \in I_u$ and $z \in I_v$.
Since $xy$, $xz$ and the edges of $P$ are all edges of $G$, it follows that $\phi(u)\phi(x)$, $\phi(v)\phi(x)$ and the edges of $\phi(P)$ are all edges of $H$, which gives us a circuit of odd length in $H$. This circuit contains an odd cycle, which must (by Lemma~\ref{lem:acycstruct}) use at least two vertices of $B$; but the circuit visits only one vertex of $B$, namely $\phi(x)$ (since $\phi^{-1}(B) = V_1$), a contradiction. This proves that $G[W]$ is indeed a thundercloud-forest graph, as required. \end{proof} We end the section with a slightly different (and easier) construction, which implies the remaining lower bounds in Theorems~\ref{thm:C5} and~\ref{thm:Clong}. \begin{prop}\label{prop:C5:simon} Let $k \ge 2$, and suppose that $\frac{(\log n)^{2}}{n} \leqslant p(n) \ll n^{-(2k-3)/(2k-2)}$. Then $$\delta_{\chi}(C_{2k+1},p) \ge \frac{1}{2}.$$ \end{prop} \begin{proof}[Proof of Proposition~\ref{prop:C5:simon}] Let $\omega=\omega(n)$ be any function tending to infinity sufficiently slowly as $n \to \infty$ and assume that \begin{equation}\label{eq:C5p} \frac{(\log n)^{2}}{n} \, \leqslant \, p = p(n) \, \leqslant \, \frac{1}{\omega^3} \cdot n^{-(2k-3)/(2k-2)}. \end{equation} Given $s \in \N$ and $\gamma > 0$, we construct, with high probability, a $C_{2k+1}$-free spanning subgraph $G \subset G(n,p)$ with $\chi(G) \ge s$ and $\delta(G) \ge \big( \frac{1}{2} - 2\gamma \big) pn$ as follows. Partition the vertices into sets $X$ and $Y$ with $|X| = |Y| = n/2$, and expose first the edges of $G(n,p)$ contained in a subset~$X' \subset X$ of size $\omega/p$. We claim that, with high probability, there exists a subgraph $F$ of $G(n,p)$ in $X'$ with maximum degree at most $2\omega$, girth at least $3k$, and chromatic number at least $\log\log\omega$. Indeed, this follows simply by removing vertices of degree greater than $2\omega$, and a vertex from each cycle of length at most~$3k$.
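The key first-moment estimate here is that $X'$ is unlikely to contain large independent sets (a routine calculation, included for the reader's convenience): with $m = |X'|/\sqrt{\log\omega}$, so that $pm = \omega/\sqrt{\log\omega}$, the expected number of independent sets of size $m$ in $X'$ is at most

```latex
\binom{|X'|}{m} (1-p)^{\binom{m}{2}}
\, \leqslant \, \bigg( \frac{e |X'|}{m} \bigg)^{m} e^{-p m (m-1)/2}
\, = \, \exp\bigg( m \bigg( 1 + \frac{\log\log\omega}{2} - \big( 1 + o(1) \big) \frac{\omega}{2\sqrt{\log\omega}} \bigg) \bigg) \, \longrightarrow \, 0,
```

since $\omega/\sqrt{\log\omega} \gg \log\log\omega$ as $\omega \to \infty$.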
To spell out the details, observe that the expected number of independent sets in~$X'$ of size $|X'| / \sqrt{\log\omega}$ tends to zero as $n \to \infty$, the expected number of cycles in~$X'$ of length at most $3k$ is at most $3k\omega^{3k}$, and the expected number of vertices of degree greater than $2\omega$ is (by the Chernoff bound) at most $e^{-\omega/3} |X'|$. Applying Markov's inequality with each of these estimates in turn, we see that with high probability there are no independent sets in $X'$ of size $|X'|/\sqrt{\log\omega}$, the number of cycles of length at most $3k$ is less than $3k\omega^{4k}$, and the number of vertices of degree greater than $2\omega$ is at most $e^{-\omega/6}|X'|$. If each of these events occurs, then, since $\omega$ grows sufficiently slowly, the number of vertices of $X'$ we must remove in order to remove all vertices of degree greater than $2\omega$ and cycles of length at most $3k$ is less than $|X'|/2$ for all large $n$. Now any graph with at least $|X'|/2$ vertices and independence number less than $|X'|/\sqrt{\log\omega}$ has chromatic number at least $\log\log\omega$, so we obtain the desired $F$. We now give the pairs of vertices lying between $X$ and $Y$ an order $\tau$, in which the pairs with one end in $V(F)$ come first, but which is otherwise arbitrary. We obtain a spanning subgraph $G$ of $G(n,p)$ by the following process. We start with $E(G_0)=E(F)$, and then for each $1\leqslant i\leqslant |X||Y|$ we define $G_i$ as follows. Let $xy$ be the $i$th pair in $\tau$. If $xy \in E\big( G(n,p) \big)$, and $G_{i-1}\cup \{xy\}$ does not contain a copy of $C_{2k+1}$, and $\deg_{G_{i-1}\cup\{xy\}}\big(y,V(F)\big)\leqslant 2\omega$, then we set $G_i=G_{i-1}\cup\{xy\}$. Otherwise we set $G_i=G_{i-1}$. We let $G=G_{|X||Y|}$. We claim that $G$ is the desired graph.
Indeed, $G_0$ certainly contains no copy of $C_{2k+1}$, and has the desired chromatic number, so by construction the same is true of $G$. It remains to show that with high probability $\delta(G)\ge\big(\tfrac12 - 2\gamma\big)pn$. We will bound the degrees of vertices in $Y$, then in $X \setminus V(F)$, and finally in $V(F)$. Observe first that, by Chernoff's inequality (and since $p\ge\tfrac{(\log n)^2}{n}$), the following events hold with high probability, where $N(x)$ is the neighbourhood in $G(n,p)$ of the vertex $x$: \begin{itemize} \item[$(a)$] $|N(x) \cap Y| \in \big(\tfrac12 \pm \gamma\big)pn$ for each $x \in X$, \item[$(b)$] $|N(y) \cap X| \in \big(\tfrac12 \pm \gamma\big)pn$ for each $y \in Y$, and \item[$(c)$] there are at most $e^{-\omega/10}n$ vertices $y\in Y$ such that $|N(y) \cap V(F)| \ge 2\omega$. \end{itemize} Let us assume that each of these likely events holds. We claim that, for each $y \in Y$, the number of edges $xy$ of $G(n,p)$, with $x\in X$, which are not present in $G$ is stochastically dominated by $\textup{Bin}(\omega/p,p) + \textup{Bin}\big(8k\omega^2(pn)^{2k-2},p\big)$. Indeed, the number of edges of $G(n,p)$ between $y$ and $V(F)$ is dominated by $\textup{Bin}(\omega/p,p)$, and at worst we remove all such edges. The number of paths leaving $y$ of length $2k$ is at most $8k\omega^2(pn)^{2k-2}$, since in constructing such a path we never have more than $pn$ choices for the next vertex, and at some point we must see a sequence of a vertex in $Y$ followed by two vertices in $F$, and we have at most $2\omega$ choices for each vertex of $F$. Not all of these paths need be present in any given $G_{i-1}$, but there are certainly no more; since the $G_j$ form a nested sequence of graphs, and since the event that the $i$th pair of $\tau$ appears in $G(n,p)$ is independent of $G_{i-1}$, the claimed stochastic domination follows.
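Unpacking the constant in the path-count above (following the three constraints just described):

```latex
\underbrace{2k}_{\text{position of the $Y\!F\!F$ triple}} \cdot \; \underbrace{(2\omega)^2}_{\text{two vertices of } F} \cdot \; \underbrace{(pn)^{2k-2}}_{\text{remaining vertices}} \; = \; 8 k \omega^2 (pn)^{2k-2}.
```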
By Chernoff's inequality and the union bound, and by our choice of $p$, it follows that, with high probability, for each $y \in Y$ there are at most $$\gamma p n \, \ge \, 2\omega\log n + 16k \omega^2 p(pn)^{2k-2}$$ edges of $G(n,p)$ incident to $y$ not present in $G$, as required. The proof is almost the same for vertices in $X\setminus V(F)$. Indeed, given $x \in X\setminus V(F)$, it follows exactly as above that the number of edges $xy$ of $G(n,p)$, with $y \in Y$, which are not present in $G$ is stochastically dominated by $\textup{Bin}\big(8k\omega^2(pn)^{2k-2},p\big)$. (Note that in this case edges are only removed if they complete a copy of $C_{2k+1}$.) As before, it follows that, with high probability, at most $\gamma p n$ edges of $G(n,p)$ incident to $x$ are not included in $G$ for each $x \in X\setminus V(F)$. Finally, for each $x \in V(F)$ we claim that the number of edges $xy$ of $G(n,p)$, with $y \in Y$, which are not present in $G$ is stochastically dominated by $\textup{Bin}\big(e^{-\omega/10}n,p\big) + \textup{Bin}\big((2\omega pn)^k,p\big)$. The first term here counts edges $xy$ which are in $G(n,p)$ but not added to $G$ because $y$ had too many neighbours in $V(F)$. The set of vertices in $Y$ whose $G_{i-1}$-neighbourhood in $V(F)$ has size $\lfloor2\omega\rfloor$ is increasing in $i$. Since the probability that a given vertex of $Y$ has at least $\lfloor2\omega\rfloor$ $G(n,p)$-neighbours in $V(F)$ is by Chernoff's inequality at most $e^{-\omega/6}$, by Markov's inequality with high probability the total number of such vertices is at most $e^{-\omega/10}n$. Now since the event that the $i$th edge $xy$ of $\tau$ appears in $G(n,p)$ is independent of $G_{i-1}$, the number of edges $xy$ not included in $G$ because $y$ has too many neighbours in $V(F)$ is stochastically dominated by $\textup{Bin}\big(e^{-\omega/10}n,p\big)$. 
Similarly, assuming that $xy$ is the $i$th pair of $\tau$, the number of vertices $y \in Y$ such that there is a path of length $2k$ from $x$ to $y$ in $G_{i-1}$ is at most $(2\omega)^k(pn)^k$, since (by the definition of $\tau$) every edge of such a path must be incident to $F$. As before, this implies the claimed stochastic domination. By Chernoff's inequality and the union bound, and by our choice of $p$, it follows that, with high probability, for each $x \in V(F)$ there are at most $$\gamma p n \, \ge \, e^{-\omega/20} pn + (4\omega)^k p^{k+1} n^k$$ edges of $G(n,p)$ incident to $x$ not present in $G$, as required. \end{proof} \section{Upper bounds for Theorems~\ref{thm:K3}, \ref{thm:C5}, and~\ref{thm:Clong}} \label{sec:upper} In this section we will determine $\delta_\chi(K_3,p)$ and $\delta_\chi(C_5,p)$ for almost all functions $p = p(n)$, and $\delta_\chi(C_{2k+1},p)$ for a certain range of $p$. Indeed, for $K_3$ the pieces are all already in place. \begin{proof}[Proof of Theorem~\ref{thm:K3}] If $p > 0$ is constant then the result follows from Theorem~\ref{thm:pconst} and the fact that $\delta_\chi(K_3) = 1/3$. Suppose next that $n^{-1/2} \ll p(n) = o(1)$, and recall that, by Theorem~\ref{thm:CGS}, we have $\delta_\chi(K_3,p) \leqslant 1/2$. On the other hand, recall that~$K_3$ is not a cloud-forest graph, so, by Proposition~\ref{prop:cloud:lower}, we have $\delta_\chi(K_3,p) \ge 1/2$, and hence $\delta_\chi(K_3,p) = 1/2$ in this range. Finally, if $\tfrac{\log n}{n} \ll p \ll n^{-1/2}$ then it follows from Proposition~\ref{prop:verysmallp} that $\delta_\chi(K_3,p) = 1$, as required. \end{proof} Next, we turn to the case $H = C_5$. Recall that $C_5$ is not a thundercloud-forest graph.
We have therefore already proved the following bounds: \begin{equation*} \delta_{\chi}(C_5,p) \left\{ \begin{array}{cll} = 0 & \text{if } & p > 0 \text{ is constant, by Theorem~\ref{thm:pconst}, and since $\delta_\chi(C_5) = 0$,} \\ \ge \tfrac{1}{3} & \text{if } & n^{-1/2} \ll p \ll 1, \text{ by Proposition~\ref{prop:thundercloud:lower},} \\ = \tfrac{1}{2} & \text{if } & n^{-3/4} \ll p \ll n^{-1/2}, \text{ by Theorem~\ref{thm:CGS} and Proposition~\ref{prop:C5:simon},} \\ = 1 & \text{if } & \frac{\log n}{n} \ll p \ll n^{-3/4}, \text{ by Proposition~\ref{prop:verysmallp}.} \end{array} \right. \end{equation*} It therefore only remains to prove that $\delta_{\chi}(C_5,p) \leqslant 1/3$ for all $n^{-1/2} \ll p = o(1)$. We remark that, under the stronger assumption $p = n^{-o(1)}$, this result follows from a general result (for all cloud-forest graphs) proved in~\cite{dense}; the proof of the following result is similar, but significantly simpler. \begin{prop}\label{prop:C5:thirdupper} $\delta_{\chi}(C_5,p) \leqslant 1/3$ for every function $n^{-1/2} \ll p = o(1)$. \end{prop} To prove Proposition~\ref{prop:C5:thirdupper} we will use the sparse minimum degree form of Szemer\'edi's Regularity Lemma. First recall the following definitions. \begin{defn} [$(\eps,p)$-regular pairs and partitions, the reduced graph] Given a graph $G$ and $\eps, p > 0$, a pair of vertex sets $(A,B)$ is said to be \emph{$(\eps,p)$-regular} if $$\bigg| \frac{e\big( G[A,B] \big)}{|A| |B|} - \frac{e\big( G[X,Y] \big)}{|X| |Y|} \bigg| \, < \, \eps p$$ for every $X \subset A$ and $Y \subset B$ with $|X| \ge \eps |A|$ and $|Y| \ge \eps |B|$. A partition $V(G) = V_0 \cup V_1 \cup \ldots \cup V_k$ is said to be $(\eps,p)$-regular if $|V_0| \leqslant \eps n$, $|V_1| = \ldots = |V_k|$, and at most $\eps k^2$ of the pairs $(V_i,V_j)$ with $1 \leqslant i < j \leqslant k$ are not $(\eps,p)$-regular.
The \emph{$(\eps,d,p)$-reduced graph} of an $(\eps,p)$-regular partition is the graph $R$ with vertex set $V(R) = \{1,\ldots,k\}$ and edge set $$E(R) \, = \, \big\{ ij : (V_i,V_j) \text{ is an $(\eps,p)$-regular pair and } e\big( G[V_i,V_j] \big) \ge dp|V_i||V_j| \big\}.$$ \end{defn} We will use the following form of the Regularity Lemma, see~\cite[Lemma~4.4]{BipartBU} for a proof. Note that although the statement there only guarantees lower-regularity of pairs in the partition, the proof explicitly gives an $(\eps,p)$-regular partition. \begin{SzRLdeg} Let $\delta,d,\eps > 0$, $k_0 \in \N$ and $p = p(n) \gg (\log n)^4 / n$. There exists $k_1 = k_1(\delta,\eps,k_0) \in \N$ such that the following holds with high probability. If $G \subset G(n,p)$ has minimum degree $\delta(G) \ge \delta pn$, then there is an $(\eps,d,p)$-regular partition of $G$ into $k$ parts, where $k_0\leqslant k\leqslant k_1$, whose $(\eps,d,p)$-reduced graph $R$ has minimum degree at least $(\delta-d-\eps)k$. \end{SzRLdeg} In the proof of Proposition~\ref{prop:C5:thirdupper} we will use the following fact, which is an easy consequence of Chernoff's inequality: if $p \gg n^{-1/2}$, then with high probability there are $\big( 1 + o(1) \big) p|U||V|$ edges between $U$ and $V$ for every pair of sets $(U,V)$ with $|U| = \Omega(pn)$ and $|V| = \Omega(n)$. \begin{proof}[Proof of Proposition~\ref{prop:C5:thirdupper}] Let $\gamma > 0$, and let $G$ be a $C_5$-free spanning subgraph of $G(n,p)$ with $$\delta(G) \ge \bigg( \frac{1}{3} + 2\gamma \bigg) pn.$$ Applying the sparse minimum degree form of Szemer\'edi's regularity lemma to $G$, with $k_0 = 1 / \gamma$, $d = \gamma / 10$ and $\eps > 0$ sufficiently small, we obtain a partition $V(G) = V_0 \cup V_1 \cup \cdots \cup V_k$, such that the $(\eps,d,p)$-reduced graph $R$ satisfies \begin{equation}\label{eq:deltaR} \delta(R) \ge \bigg( \frac{1}{3} + \gamma \bigg) k.
\end{equation} Given any vertex $v$, we define the \emph{robust second neighbourhood} $N^*_2(v)$ of $v$ in $G$ to be the set of vertices $w$ in $G$ such that $v$ and $w$ have at least $dp^2n$ common neighbours in $G$. We define \begin{equation}\label{def:Xi:NR2} X_i := \Big\{ v \in V(G) : \big| N^*_2(v) \cap V_i \big| \ge \big( \tfrac12 + d \big) |V_i| \Big\} \end{equation} for each $i \in [k]$, and set $X_0:=V(G)\setminus\big(X_1\cup\dots\cup X_k\big)$. We will show that $X_i$ is an independent set for each $1 \leqslant i \leqslant k$, and that $X_0 = \emptyset$. This implies that $\chi(G)$ is bounded (by a constant depending on~$\gamma$), as required. Indeed, let us first fix $1 \leqslant i \leqslant k$, and suppose that $uv \in E(G[X_i])$. Note that, by definition of~$X_i$, the intersection $N^*_2(u)\cap N^*_2(v)$ is non-empty, so let $w \in N^*_2(u) \cap N^*_2(v)$. It follows that $$\min\big\{ |N(u)\cap N(w)|, \, |N(v)\cap N(w)| \big\} \, \ge \, dp^2n \, > \, 4,$$ since $p \gg n^{-1/2}$, and therefore there exist distinct vertices $u'$ and $v'$ such that $$u' \in N(u) \cap N(w) \qquad \text{and} \qquad v' \in N(v) \cap N(w).$$ Since $uv$ is an edge of $G$, it follows that $uu'wv'v$ is a copy of $C_5$ in $G$, which is a contradiction. To show that $X_0$ is empty, we will use the following claim. \noindent \textbf{Claim:} For each $u \in X_0$, there exists an $(\eps,d,p)$-regular pair $(V_i,V_j)$ such that $$|N^*_2(u)\cap V_i| \ge \eps |V_i| \qquad \textup{and} \qquad |N^*_2(u) \cap V_j| \ge \eps |V_j|.$$ \begin{claimproof}[Proof of claim] The claim follows from some straightforward edge-counting, together with the minimum degree conditions.
Indeed, recalling that $\delta(G) \ge \big( \frac{1}{3} + 2\gamma \big) pn$, let $U \subset N(u)$ be an arbitrary subset of size $|U| = pn/3$, and observe that (with high probability) there are at least $p^2 n^2 / 9$ edges incident to $U$, since $e(G[U]) = o(p^2n^2)$ (by Chernoff's inequality) and each vertex in $U$ has at least $\big(\frac{1}{3} + 2\gamma \big) pn$ neighbours. Now, if $|N^*_2(u)\cap V_i| \leqslant \eps |V_i|$, then there are (with high probability) at most $$2p \cdot |U| \cdot \eps|V_i| + dp^2 n \cdot |V_i| \, \leqslant \, 2d p^2 n\cdot |V_i|$$ edges between $V_i$ and $U$, so almost all of the edges incident to $U$ go to sets $V_i$ with $|N^*_2(u)\cap V_i| \ge \eps |V_i|$. Moreover, since $u \in X_0$, there are (with high probability) at most $$p|U| \cdot \bigg( \frac{1}{2} + 2d \bigg) |V_i| + dp^2 n \cdot |V_i| \, \leqslant \, \bigg( \frac{1}{6} + 2d \bigg) p^2 n |V_i|$$ edges from $V_i$ to $U$ for every $i \in [k]$. It follows that there must be at least $$\bigg( \frac{2}{3} - 9d \bigg) k$$ indices $i \in [k]$ such that $|N^*_2(u)\cap V_i| \ge \eps |V_i|$. Since $\delta(R) \ge \big( \frac{1}{3} + \gamma \big) k$, it follows that there is an edge of $R$ between two such indices, as required. \end{claimproof} Now, suppose (for a contradiction) that there exists a vertex $u \in X_0$, and let $(V_i,V_j)$ be the pair given by the claim. By the definition of $(\eps,d,p)$-regularity, there exist vertices $v \in N^*_2(u)\cap V_i$ and $w \in N^*_2(u)\cap V_j$ such that $vw$ is an edge of $G$, and (by the definition of the robust second neighbourhood) there exist distinct vertices $v'$ and $w'$ in the common neighbourhoods of $u$ and $v$, and of $u$ and $w$, respectively. But then $uv'vww'$ forms a copy of $C_5$ in $G$, and so we have the desired contradiction. \end{proof} Finally, let us prove Theorem~\ref{thm:Clong}.
It follows from Proposition~\ref{prop:verysmallp} that $\delta_\chi(C_{2k+1},p) = 1$ whenever $\tfrac{\log n}{n} \ll p \ll n^{-(2k-1)/2k}$, and from Theorem~\ref{thm:CGS} and Proposition~\ref{prop:C5:simon} that $\delta_\chi(C_{2k+1},p) = 1/2$ for every function $n^{-(2k-1)/2k} \ll p \ll n^{-(2k-3)/(2k-2)}$. Thus, to deduce the theorem it will suffice to prove the following proposition. \begin{prop}\label{prop:longcycles} $\delta_\chi(C_{2\ell+1},p) = 0$ for every $\ell \ge 3$ and $n^{-1/2} \ll p = o(1)$. \end{prop} \begin{proof} Let $\gamma > 0$, and let $G$ be a $C_{2\ell+1}$-free spanning subgraph of $G(n,p)$ with $$\delta(G) \ge 2\gamma pn.$$ Applying the sparse minimum degree form of Szemer\'edi's regularity lemma to $G$, we obtain a partition $V(G) = V_0 \cup V_1 \cup \cdots \cup V_k$, such that the reduced graph $R$ satisfies $\delta(R) \ge \gamma k$. Define \begin{equation}\label{def:Xi:NR} X_i := \Big\{ v \in V(G) : \big| N^*_2(v) \cap V_i \big| \ge d |V_i| \Big\} \end{equation} for each $i \in [k]$, where $N^*_2(v)$ is as defined in the proof of Proposition~\ref{prop:C5:thirdupper}. Set $X_0:=V(G)\setminus\big(X_1\cup\dots\cup X_k\big)$. We will again show that $X_i$ is an independent set for each $1 \leqslant i \leqslant k$, and that $X_0 = \emptyset$. This time it is relatively easy to see that $X_0 = \emptyset$, since if $v \in X_0$ then $|N^*_2(v)| \leqslant 2dn$, which implies that there are (with high probability) at most $$2p \cdot |U| \cdot 2dn + dp^2 n \cdot n \, = \, \big( 8\gamma d + d \big) p^2 n^2 \, < \, \gamma^2 p^2 n^2$$ edges incident to a set $U \subset N(v)$ with $|U| = 2\gamma pn$, contradicting our assumption on the minimum degree of $G$. Therefore, let us suppose that $uv \in E(G)$ for some $u,v \in X_i$, and observe that $N_2^*(u) \cap V_i$ and $N_2^*(v) \cap V_i$ both have size at least $d |V_i|$.
Let $V_j$ be such that $(V_i,V_j)$ is an $(\eps,d,p)$-regular pair in $G$ (this exists because $\delta(R) > 0$). We claim that there exists a path of length $2\ell-4$ in $G[V_i,V_j]$ from $N_2^*(u) \cap V_i$ to $N_2^*(v) \cap V_i$ which uses neither $u$ nor $v$. To find the desired path, we first construct a sequence of sets $(B_1,\ldots, B_{2\ell-3})$ as follows. Set $B_1 = N_2^*(u)$, and define $B_t$ (for each $2 \leqslant t \leqslant 2\ell - 3$) to be the set of vertices of $G$ with at least $2\ell$ neighbours in $B_{t-1}$. Observe that $$|B_{2t+1} \cap V_i| \ge (1 - \eps) |V_i| \qquad \textup{and} \qquad |B_{2t} \cap V_j| \ge (1 - \eps) |V_j|$$ for every $1 \leqslant t \leqslant \ell - 2$, since the first time this fails we obtain a contradiction to the definition of an $(\eps,d,p)$-regular pair. Thus $$|B_{2\ell-3} \cap N_2^*(v)| \, \ge \, |N_2^*(v) \cap V_i| - \eps |V_i| \, \ge \, (d - \eps) |V_i| \, > \, 2\ell.$$ We can now choose the vertices $(w_1,\ldots,w_{2\ell-3})$ of the path greedily, one by one, with $w_t \in B_t$, avoiding all previously-chosen vertices and $u$ and $v$. Finally we choose vertices $u'$ and $v'$ (not so far used) in the common neighbourhoods of $u$ and $w_1$, and of $v$ and $w_{2\ell-3}$, respectively, to complete a $(2\ell+1)$-vertex cycle. But we assumed that the graph $G$ is $C_{2\ell+1}$-free, so this contradiction completes the proof of the proposition. \end{proof} \section{Open problems} \label{sec:open} A large number of interesting questions about $\delta_\chi(H,p)$ remain wide open, and we hope that the work in this paper will inspire further research on the topic. In this section we will mention just a few of (what seem to us) the most natural open problems. We begin with the following (somewhat tentative) conjecture, which says that the hypothesis of Theorem~\ref{thm:classhigh} cannot be weakened to $\chi(H) \ne 3$.
\begin{conj}\label{conj:chi4} There exists a graph $H$ with $\chi(H) = 4$ and a function $p = p(n)$ with $$n^{-1/m_2(H)} \ll p \ll n^{-1/2}$$ such that $\delta_\chi(H,p) < \pi(H)$. \end{conj} We remark that, as far as we know, this might even be true for \emph{every} graph $H$ with $\chi(H) = 4$ and $m_2(H) < 2$ and every function $p$ in this range. It would therefore also be interesting to give a counter-example to this stronger statement. In the case $\chi(H) = 3$, we have barely begun to understand the behaviour of $\delta_\chi(H,p)$. An important next step would be to resolve the problem for all odd cycles. \begin{prob} Determine $\delta_\chi(C_{2k+1},p)$ for $k \ge 3$ in the range $n^{-(2k-3)/(2k-2)} \ll p \ll n^{-1/2}$. \end{prob} We suspect that $\delta_\chi(C_{2k+1},p) = 0$ for a much wider range of $p$ than those we considered in Theorem~\ref{thm:Clong}; in particular, it seems plausible that by considering robust $t$\textsuperscript{th} neighbourhoods for $2 \leqslant t \leqslant k-1$ (rather than just second neighbourhoods) the proof of Proposition~\ref{prop:longcycles} could be modified to prove this for all $p \gg n^{-(k-2)/(k-1)}$. Nevertheless, a significant gap remains, and we are not sure what to expect in the range $n^{-(2k-3)/(2k-2)} \ll p \ll n^{-(k-2)/(k-1)}$. For more general 3-chromatic graphs, it follows from Theorem~\ref{thm:CGS} and Proposition~\ref{prop:cloud:lower} that if $H$ is not a cloud-forest graph and $p = p(n)$ satisfies \begin{equation}\label{eq:boundsonpforclouds} \max\big\{ n^{-1/m_2(H)}, n^{-1/2} \big\} \, \ll \, p \, \ll \, 1, \end{equation} then $\delta_{\chi}(H,p) = 1/2$. For cloud-forest graphs that are not thundercloud-forest we can at present only determine $\delta_{\chi}(H,p)$ in a smaller range: it follows from Proposition~\ref{prop:thundercloud:lower} and~\cite[Proposition~4.1]{dense} that $\delta_{\chi}(H,p) = 1/3$ if $p = o(1)$ and $p = n^{-o(1)}$.
In terms of lower bounds, we have $\delta_\chi(H,p) \ge 1/3$ in the range~\eqref{eq:boundsonpforclouds} by Proposition~\ref{prop:thundercloud:lower}, and $\delta_\chi(H,p) = 1$ when $p \ll n^{-1/m_2(H)}$ by Proposition~\ref{prop:verysmallp}; if $m_2(H) > 2$ then these ranges overlap. In this case it is possible that these lower bounds are correct, though we cannot prove it. When $m_2(H) \le 2$ we do not know what to expect in the gap between the two ranges.

\begin{qu}
Let $H$ be a cloud-forest graph, and suppose that $p = p(n)$ satisfies~\eqref{eq:boundsonpforclouds}. Is $\delta_\chi(H,p) = 1/3$ whenever $H$ is not a thundercloud-forest graph?
\end{qu}

Recall that for thundercloud-forest graphs, we do not even know how to determine $\delta_\chi(H,p)$ in the range $p = n^{-o(1)}$; see~\cite[Conjecture~1.6]{dense}. Finally, we do not know how to determine the behaviour of $\delta_\chi(H,p)$ around the threshold $p = n^{-1/m_2(H)}$.

\begin{prob}
Determine $\delta_\chi(H,p)$ for $p = c n^{-1/m_2(H)}$.
\end{prob}

The construction used to prove Proposition~\ref{prop:verysmallp} (that is, randomly remove one edge from each copy of $H$ in $G(n,p)$) can easily be extended to give a lower bound in the range $(\pi(H),1)$ for all sufficiently small $c > 0$. It would be interesting to understand how the extremal graph transitions (as $c$ increases) from a random-like graph into a Tur\'an-like graph.

\end{document}
We show that if the primordial binary fraction is small, close encounters dominate the production rate of coalescing compact systems. We find that the two dominant channels are the interaction of field NSs with dynamically formed binaries, and two-body encounters. We then estimate the redshift distribution and host galaxy demographics of SGRB progenitors, and find that GCs can provide a significant contribution to the overall observed rate. We have carried out hydrodynamical modeling of evolution of close stellar encounters with WD/NS/BH, and show that there is no problem in accounting for the energy budget of a typical SGRB. The particulars of each encounter are variable and lead to interesting diversity: the encounter characteristics are dependent on the impact parameter, in contrast to the merger scenario; the nature of the compact star itself can produce very different outcomes; the presence of tidal tails in which material falls back onto the central object at later times is a robust feature of these calculations, with the mass involved being larger than for binary mergers. It is thus possible to account generically in this scenario for a prompt episode of energy release, as well as for activity many dynamical time scales later (abridged). Hypercritical accretion onto a magnetized neutron star surface: a numerical approach (1006.3003) Cristian Giovanny Bernal, William H. Lee, Dany Page (Instituto de Astronomia, UNAM) June 15, 2010 astro-ph.HE The properties of a new-born neutron star, produced in a core-collapse supernova, can be strongly affected by the possible late fallback which occurs several hours after the explosion. This accretion occurs in the regime dominated by neutrino cooling, explored initially in this context by Chevalier (1989). 
Here we revisit this approach in a 1D spherically symmetric model and carry out numerical simulations in 2D in an accretion column onto a neutron star considering detailed microphysics, neutrino cooling and the presence of magnetic fields in ideal MHD. We compare our numerical results to the analytic solutions and explore how the purely hydrodynamical as well as the MHD solutions differ from them, and begin to explore how this may affect the appearance of the remnant as a typical radio pulsar. An upper limit to the central density of dark matter haloes from consistency with the presence of massive central black holes (1002.0553) X. Hernandez, William H. Lee Feb. 2, 2010 astro-ph.CO, astro-ph.GA We study the growth rates of massive black holes in the centres of galaxies from accretion of dark matter from their surrounding haloes. By considering only the accretion due to dark matter particles on orbits unbound to the central black hole, we obtain a firm lower limit to the resulting accretion rate. We find that a runaway accretion regime occurs on a timescale which depends on the three characteristic parameters of the problem: the initial mass of the black hole, and the volume density and velocity dispersion of the dark matter particles in its vicinity. An analytical treatment of the accretion rate yields results implying that for the largest black hole masses inferred from QSO studies ($>10^{9} M_{\odot}$), the runaway regime would be reached on time scales which are shorter than the lifetimes of the haloes in question for central dark matter densities in excess of $250 M_{\odot}$pc$^{-3}$. Since reaching runaway accretion would strongly distort the host dark matter halo, the inferences of QSO black holes in this mass range lead to an upper limit on the central dark matter densities of their host haloes of $\rho_{0} < 250 M_{\odot} $pc$^{-3}$. This limit scales inversely with the assumed central black hole mass. 
However, thinking of dark matter profiles as universal across galactic populations, as cosmological studies imply, we obtain a firm upper limit for the central density of dark matter in such structures.
Tensor product of modules In mathematics, the tensor product of modules is a construction that allows arguments about bilinear maps (e.g. multiplication) to be carried out in terms of linear maps. The module construction is analogous to the construction of the tensor product of vector spaces, but can be carried out for a pair of modules over a commutative ring resulting in a third module, and also for a pair of a right-module and a left-module over any ring, with result an abelian group. Tensor products are important in areas of abstract algebra, homological algebra, algebraic topology, algebraic geometry, operator algebras and noncommutative geometry. The universal property of the tensor product of vector spaces extends to more general situations in abstract algebra. The tensor product of an algebra and a module can be used for extension of scalars. For a commutative ring, the tensor product of modules can be iterated to form the tensor algebra of a module, allowing one to define multiplication in the module in a universal way. Balanced product For a ring R, a right R-module M, a left R-module N, and an abelian group G, a map φ: M × N → G is said to be R-balanced, R-middle-linear or an R-balanced product if for all m, m′ in M, n, n′ in N, and r in R the following hold:[1]: 126  ${\begin{aligned}\varphi (m,n+n')&=\varphi (m,n)+\varphi (m,n')&&{\text{Dl}}_{\varphi }\\\varphi (m+m',n)&=\varphi (m,n)+\varphi (m',n)&&{\text{Dr}}_{\varphi }\\\varphi (m\cdot r,n)&=\varphi (m,r\cdot n)&&{\text{A}}_{\varphi }\\\end{aligned}}$ The set of all such balanced products over R from M × N to G is denoted by LR(M, N; G). If φ, ψ are balanced products, then each of the operations φ + ψ and −φ defined pointwise is a balanced product. This turns the set LR(M, N; G) into an abelian group. For M and N fixed, the map G ↦ LR(M, N; G) is a functor from the category of abelian groups to itself. 
The morphism part is given by mapping a group homomorphism g : G → G′ to the function φ ↦ g ∘ φ, which goes from LR(M, N; G) to LR(M, N; G′). Remarks 1. Properties (Dl) and (Dr) express biadditivity of φ, which may be regarded as distributivity of φ over addition. 2. Property (A) resembles some associative property of φ. 3. Every ring R is an R-bimodule. So the ring multiplication (r, r′) ↦ r ⋅ r′ in R is an R-balanced product R × R → R. Definition For a ring R, a right R-module M, a left R-module N, the tensor product over R $M\otimes _{R}N$ is an abelian group together with a balanced product (as defined above) $\otimes :M\times N\to M\otimes _{R}N$ which is universal in the following sense:[2] For every abelian group G and every balanced product $f:M\times N\to G$ there is a unique group homomorphism ${\tilde {f}}:M\otimes _{R}N\to G$ such that ${\tilde {f}}\circ \otimes =f.$ As with all universal properties, the above property defines the tensor product uniquely up to a unique isomorphism: any other abelian group and balanced product with the same properties will be isomorphic to M ⊗R N and ⊗. Indeed, the mapping ⊗ is called canonical, or more explicitly: the canonical mapping (or balanced product) of the tensor product.[3] The definition does not prove the existence of M ⊗R N; see below for a construction. The tensor product can also be defined as a representing object for the functor G → LR(M,N;G); explicitly, this means there is a natural isomorphism: ${\begin{cases}\operatorname {Hom} _{\mathbb {Z} }(M\otimes _{R}N,G)\simeq \operatorname {L} _{R}(M,N;G)\\g\mapsto g\circ \otimes \end{cases}}$ This is a succinct way of stating the universal mapping property given above. (If a priori one is given this natural isomorphism, then $\otimes $ can be recovered by taking $G=M\otimes _{R}N$ and then mapping the identity map.) 
Similarly, given the natural identification $\operatorname {L} _{R}(M,N;G)=\operatorname {Hom} _{R}(M,\operatorname {Hom} _{\mathbb {Z} }(N,G))$,[4] one can also define M ⊗R N by the formula $\operatorname {Hom} _{\mathbb {Z} }(M\otimes _{R}N,G)\simeq \operatorname {Hom} _{R}(M,\operatorname {Hom} _{\mathbb {Z} }(N,G)).$ This is known as the tensor-hom adjunction; see also § Properties. For each x in M, y in N, one writes x ⊗ y for the image of (x, y) under the canonical map $\otimes :M\times N\to M\otimes _{R}N$. It is often called a pure tensor. Strictly speaking, the correct notation would be x ⊗R y but it is conventional to drop R here. Then, immediately from the definition, there are relations: x ⊗ (y + y′) = x ⊗ y + x ⊗ y′(Dl⊗) (x + x′) ⊗ y = x ⊗ y + x′ ⊗ y(Dr⊗) (x ⋅ r) ⊗ y = x ⊗ (r ⋅ y)(A⊗) The universal property of a tensor product has the following important consequence: Proposition — Every element of $M\otimes _{R}N$ can be written, non-uniquely, as $\sum _{i}x_{i}\otimes y_{i},\,x_{i}\in M,y_{i}\in N.$ In other words, the image of $\otimes $ generates $M\otimes _{R}N$. Furthermore, if f is a function defined on elements $x\otimes y$ with values in an abelian group G, then f extends uniquely to the homomorphism defined on the whole $M\otimes _{R}N$ if and only if $f(x\otimes y)$ is $\mathbb {Z} $-bilinear in x and y. Proof: For the first statement, let L be the subgroup of $M\otimes _{R}N$ generated by elements of the form in question, $Q=(M\otimes _{R}N)/L$ and q the quotient map to Q. We have: $0=q\circ \otimes $ as well as $0=0\circ \otimes $. Hence, by the uniqueness part of the universal property, q = 0. The second statement is because to define a module homomorphism, it is enough to define it on the generating set of the module. 
$\square $ Application of the universal property of tensor products Determining whether a tensor product of modules is zero In practice, it is sometimes more difficult to show that a tensor product of R-modules $M\otimes _{R}N$ is nonzero than it is to show that it is 0. The universal property gives a convenient way for checking this. To check that a tensor product $M\otimes _{R}N$ is nonzero, one can construct an R-bilinear map $f:M\times N\rightarrow G$ to an abelian group $G$ such that $f(m,n)\neq 0$. This works because if $m\otimes n=0$, then $f(m,n)={\bar {f}}(m\otimes n)={\bar {f}}(0)=0$. For example, to see that $\mathbb {Z} /p\mathbb {Z} \otimes _{\mathbb {Z} }\mathbb {Z} /p\mathbb {Z} $ is nonzero, take $G$ to be $\mathbb {Z} /p\mathbb {Z} $ and $(m,n)\mapsto mn$. This says that the pure tensors $m\otimes n\neq 0$ as long as $mn$ is nonzero in $\mathbb {Z} /p\mathbb {Z} $. For equivalent modules The proposition says that one can work with explicit elements of the tensor products instead of invoking the universal property directly each time. This is very convenient in practice. For example, if R is commutative and the left and right actions by R on modules are considered to be equivalent, then $M\otimes _{R}N$ can naturally be furnished with the R-scalar multiplication by extending $r\cdot (x\otimes y):=(r\cdot x)\otimes y=x\otimes (r\cdot y)$ to the whole $M\otimes _{R}N$ by the previous proposition (strictly speaking, what is needed is a bimodule structure, not commutativity; see a paragraph below).
Equipped with this R-module structure, $M\otimes _{R}N$ satisfies a universal property similar to the above: for any R-module G, there is a natural isomorphism: ${\begin{cases}\operatorname {Hom} _{R}(M\otimes _{R}N,G)\simeq \{R{\text{-bilinear maps }}M\times N\to G\},\\g\mapsto g\circ \otimes \end{cases}}$ If R is not necessarily commutative but if M has a left action by a ring S (for example, R), then $M\otimes _{R}N$ can be given the left S-module structure, like above, by the formula $s\cdot (x\otimes y):=(s\cdot x)\otimes y.$ Analogously, if N has a right action by a ring S, then $M\otimes _{R}N$ becomes a right S-module. Tensor product of linear maps and a change of base ring Given linear maps $f:M\to M'$ of right modules over a ring R and $g:N\to N'$ of left modules, there is a unique group homomorphism ${\begin{cases}f\otimes g:M\otimes _{R}N\to M'\otimes _{R}N'\\x\otimes y\mapsto f(x)\otimes g(y)\end{cases}}$ The construction has a consequence that tensoring is a functor: each right R-module M determines the functor $M\otimes _{R}-:R{\text{-Mod}}\longrightarrow {\text{Ab}}$ from the category of left modules to the category of abelian groups that sends N to M ⊗ N and a module homomorphism f to the group homomorphism 1 ⊗ f. If $f:R\to S$ is a ring homomorphism and if M is a right S-module and N a left S-module, then there is the canonical surjective homomorphism: $M\otimes _{R}N\to M\otimes _{S}N$ induced by $M\times N{\overset {\otimes _{S}}{\longrightarrow }}M\otimes _{S}N.$ [5] The resulting map is surjective since pure tensors x ⊗ y generate the whole module. In particular, taking R to be $\mathbb {Z} $ this shows every tensor product of modules is a quotient of a tensor product of abelian groups. See also: Tensor product § Tensor product of linear maps Several modules
It is possible to extend the definition to a tensor product of any number of modules over the same commutative ring. For example, the universal property of M1 ⊗ M2 ⊗ M3 is that each trilinear map on M1 × M2 × M3 → Z corresponds to a unique linear map M1 ⊗ M2 ⊗ M3 → Z. The binary tensor product is associative: (M1 ⊗ M2) ⊗ M3 is naturally isomorphic to M1 ⊗ (M2 ⊗ M3). The tensor product of three modules defined by the universal property of trilinear maps is isomorphic to both of these iterated tensor products. Properties Modules over general rings Let R1, R2, R3, R be rings, not necessarily commutative. • For an R1-R2-bimodule M12 and a left R2-module M20, $M_{12}\otimes _{R_{2}}M_{20}$ is a left R1-module. • For a right R2-module M02 and an R2-R3-bimodule M23, $M_{02}\otimes _{R_{2}}M_{23}$ is a right R3-module. • (associativity) For a right R1-module M01, an R1-R2-bimodule M12, and a left R2-module M20 we have:[6] $\left(M_{01}\otimes _{R_{1}}M_{12}\right)\otimes _{R_{2}}M_{20}=M_{01}\otimes _{R_{1}}\left(M_{12}\otimes _{R_{2}}M_{20}\right).$ • Since R is an R-R-bimodule, we have $R\otimes _{R}R=R$ with the ring multiplication $mn=:m\otimes _{R}n$ as its canonical balanced product. Modules over commutative rings Let R be a commutative ring, and M, N and P be R-modules. Then • (identity) $R\otimes _{R}M=M.$ • (associativity) $(M\otimes _{R}N)\otimes _{R}P=M\otimes _{R}(N\otimes _{R}P).$[7] Thus $M\otimes _{R}N\otimes _{R}P$ is well-defined. 
• (symmetry) $M\otimes _{R}N=N\otimes _{R}M.$ In fact, for any permutation σ of the set {1, ..., n}, there is a unique isomorphism: ${\begin{cases}M_{1}\otimes _{R}\cdots \otimes _{R}M_{n}\longrightarrow M_{\sigma (1)}\otimes _{R}\cdots \otimes _{R}M_{\sigma (n)}\\x_{1}\otimes \cdots \otimes x_{n}\longmapsto x_{\sigma (1)}\otimes \cdots \otimes x_{\sigma (n)}\end{cases}}$ • (distributive property) $M\otimes _{R}(N\oplus P)=(M\otimes _{R}N)\oplus (M\otimes _{R}P).$ In fact, $M\otimes _{R}\left(\bigoplus \nolimits _{i\in I}N_{i}\right)=\bigoplus \nolimits _{i\in I}\left(M\otimes _{R}N_{i}\right),$ for an index set I of arbitrary cardinality. • (commutes with finite product) for any finitely many $N_{i}$, $M\otimes _{R}\prod _{i=1}^{n}N_{i}=\prod _{i=1}^{n}M\otimes _{R}N_{i}.$ • (commutes with localization) for any multiplicatively closed subset S of R, $S^{-1}(M\otimes _{R}N)=S^{-1}M\otimes _{S^{-1}R}S^{-1}N$ as $S^{-1}R$-module. Since $S^{-1}R$ is an R-algebra and $S^{-1}-=S^{-1}R\otimes _{R}-$, this is a special case of: • (commutes with base extension) If S is an R-algebra, writing $-_{S}=S\otimes _{R}-$, $(M\otimes _{R}N)_{S}=M_{S}\otimes _{S}N_{S};$ [8] cf. § Extension of scalars. • (commutes with direct limit) for any direct system of R-modules Mi, $(\varinjlim M_{i})\otimes _{R}N=\varinjlim (M_{i}\otimes _{R}N).$ • (tensoring is right exact) if $0\to N'{\overset {f}{\to }}N{\overset {g}{\to }}N''\to 0$ is an exact sequence of R-modules, then $M\otimes _{R}N'{\overset {1\otimes f}{\to }}M\otimes _{R}N{\overset {1\otimes g}{\to }}M\otimes _{R}N''\to 0$ is an exact sequence of R-modules, where $(1\otimes f)(x\otimes y)=x\otimes f(y).$ This is a consequence of: • (adjoint relation) $\operatorname {Hom} _{R}(M\otimes _{R}N,P)=\operatorname {Hom} _{R}(M,\operatorname {Hom} _{R}(N,P))$. 
• (tensor-hom relation) there is a canonical R-linear map: $\operatorname {Hom} _{R}(M,N)\otimes P\to \operatorname {Hom} _{R}(M,N\otimes P),$ which is an isomorphism if either M or P is a finitely generated projective module (see § As linearity-preserving maps for the non-commutative case);[9] more generally, there is a canonical R-linear map: $\operatorname {Hom} _{R}(M,N)\otimes \operatorname {Hom} _{R}(M',N')\to \operatorname {Hom} _{R}(M\otimes M',N\otimes N')$ which is an isomorphism if either $(M,N)$ or $(M,M')$ is a pair of finitely generated projective modules. To give a practical example, suppose M, N are free modules with bases $e_{i},i\in I$ and $f_{j},j\in J$. Then M is the direct sum $M=\bigoplus _{i\in I}Re_{i}$ and the same for N. By the distributive property, one has: $M\otimes _{R}N=\bigoplus _{i,j}R(e_{i}\otimes f_{j});$ i.e., $e_{i}\otimes f_{j},\,i\in I,j\in J$ are the R-basis of $M\otimes _{R}N$. Even if M is not free, a free presentation of M can be used to compute tensor products. The tensor product, in general, does not commute with inverse limit: on the one hand, $\mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Z} /p^{n}=0$ (cf. "examples"). On the other hand, $\left(\varprojlim \mathbb {Z} /p^{n}\right)\otimes _{\mathbb {Z} }\mathbb {Q} =\mathbb {Z} _{p}\otimes _{\mathbb {Z} }\mathbb {Q} =\mathbb {Z} _{p}\left[p^{-1}\right]=\mathbb {Q} _{p}$ where $\mathbb {Z} _{p},\mathbb {Q} _{p}$ are the ring of p-adic integers and the field of p-adic numbers. See also "profinite integer" for an example in the similar spirit. If R is not commutative, the order of tensor products could matter in the following way: we "use up" the right action of M and the left action of N to form the tensor product $M\otimes _{R}N$; in particular, $N\otimes _{R}M$ would not even be defined. 
If M, N are bi-modules, then $M\otimes _{R}N$ has the left action coming from the left action of M and the right action coming from the right action of N; those actions need not be the same as the left and right actions of $N\otimes _{R}M$. The associativity holds more generally for non-commutative rings: if M is a right R-module, N a (R, S)-module and P a left S-module, then $(M\otimes _{R}N)\otimes _{S}P=M\otimes _{R}(N\otimes _{S}P)$ as abelian group. The general form of adjoint relation of tensor products says: if R is not necessarily commutative, M is a right R-module, N is a (R, S)-module, P is a right S-module, then as abelian group[10] $\operatorname {Hom} _{S}(M\otimes _{R}N,P)=\operatorname {Hom} _{R}(M,\operatorname {Hom} _{S}(N,P)),\,f\mapsto f'$ where $f'$ is given by $f'(x)(y)=f(x\otimes y).$ See also: Tensor-hom adjunction Tensor product of an R-module with the fraction field Let R be an integral domain with fraction field K. • For any R-module M, $K\otimes _{R}M\cong K\otimes _{R}(M/M_{\operatorname {tor} })$ as R-modules, where $M_{\operatorname {tor} }$ is the torsion submodule of M. • If M is a torsion R-module then $K\otimes _{R}M=0$ and if M is not a torsion module then $K\otimes _{R}M\neq 0$. • If N is a submodule of M such that $M/N$ is a torsion module then $K\otimes _{R}N\cong K\otimes _{R}M$ as R-modules by $x\otimes n\mapsto x\otimes n$. • In $K\otimes _{R}M$, $x\otimes m=0$ if and only if $x=0$ or $m\in M_{\operatorname {tor} }$. In particular, $M_{\operatorname {tor} }=\operatorname {ker} (M\to K\otimes _{R}M)$ where $m\mapsto 1\otimes m$. • $K\otimes _{R}M\cong M_{(0)}$ where $M_{(0)}$ is the localization of the module $M$ at the prime ideal $(0)$ (i.e., the localization with respect to the nonzero elements). 
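For $R=\mathbb {Z} $ these statements can be checked by machine. As noted under § Properties, a free presentation suffices to compute tensor products: if $M=\mathbb {Z} ^{a}$ modulo the rows of an integer matrix A and $N=\mathbb {Z} ^{b}$ modulo the rows of B, then $M\otimes _{\mathbb {Z} }N$ is presented on $ab$ generators by the stacked relation matrices $A\otimes I_{b}$ and $I_{a}\otimes B$, and diagonalizing the relations by elementary row and column operations reads off the cyclic factors. The sketch below is a hedged illustration (the helper names are ours, not a library API):

```python
def identity(n):
    return [[int(i == j) for j in range(n)] for i in range(n)]

def kron(A, B):
    """Kronecker product of integer matrices given as lists of rows."""
    if not A or not B:
        return []
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def diagonalize(A):
    """Bring an integer matrix to diagonal form by elementary row/column
    operations: a Smith-normal-form computation without the divisibility
    chain, which is enough to read off a cokernel as a direct sum."""
    A = [[int(x) for x in row] for row in A]
    rows, cols = len(A), len(A[0])
    def repivot(t):
        # move the nonzero entry of smallest magnitude to position (t, t)
        _, pi, pj = min((abs(A[i][j]), i, j) for i in range(t, rows)
                        for j in range(t, cols) if A[i][j] != 0)
        A[t], A[pi] = A[pi], A[t]
        for row in A:
            row[t], row[pj] = row[pj], row[t]
    t = 0
    while t < min(rows, cols) and any(A[i][j] for i in range(t, rows)
                                      for j in range(t, cols)):
        repivot(t)
        while (any(A[i][t] for i in range(t + 1, rows))
               or any(A[t][j] for j in range(t + 1, cols))):
            for i in range(t + 1, rows):        # clear column t modulo the pivot
                q = A[i][t] // A[t][t]
                A[i] = [x - q * y for x, y in zip(A[i], A[t])]
            for j in range(t + 1, cols):        # clear row t modulo the pivot
                q = A[t][j] // A[t][t]
                for row in A:
                    row[j] -= q * row[t]
            repivot(t)                          # leftover remainders shrink the pivot
        t += 1
    return A

def cokernel(rels, ngens):
    """Z^ngens modulo the row span of rels, as (torsion factors, free rank)."""
    if not rels:
        return [], ngens
    D = diagonalize(rels)
    diag = [abs(D[i][i]) for i in range(min(len(D), ngens))]
    return sorted(d for d in diag if d > 1), ngens - sum(1 for d in diag if d)

def tensor_presentation(A, a, B, b):
    """Relations presenting (Z^a / rows A) (x)_Z (Z^b / rows B) on a*b
    generators: each relation of M tensored with a basis vector of N, and
    symmetrically for N."""
    return kron(A, identity(b)) + kron(identity(a), B), a * b

# Z/4 (x)_Z Z/6: one generator 1(x)1 with relations 4 and 6, hence Z/2.
print(cokernel(*tensor_presentation([[4]], 1, [[6]], 1)))   # -> ([2], 0)
```

The free rank returned alongside the torsion factors equals $\dim _{\mathbb {Q} }(\mathbb {Q} \otimes _{\mathbb {Z} }(M\otimes _{\mathbb {Z} }N))$, matching the fraction-field statements above; passing an empty relation matrix for one factor recovers, for instance, $\mathbb {Z} \otimes _{\mathbb {Z} }\mathbb {Z} /5=\mathbb {Z} /5$.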
Extension of scalars Main article: Extension of scalars See also: Weil restriction The adjoint relation in the general form has an important special case: for any R-algebra S, M a right R-module, P a right S-module, using $\operatorname {Hom} _{S}(S,-)=-$, we have the natural isomorphism: $\operatorname {Hom} _{S}(M\otimes _{R}S,P)=\operatorname {Hom} _{R}(M,\operatorname {Res} _{R}(P)).$ This says that the functor $-\otimes _{R}S$ is a left adjoint to the forgetful functor $\operatorname {Res} _{R}$, which restricts an S-action to an R-action. Because of this, $-\otimes _{R}S$ is often called the extension of scalars from R to S. In the representation theory, when R, S are group algebras, the above relation becomes the Frobenius reciprocity. Examples • $R^{n}\otimes _{R}S=S^{n},$ for any R-algebra S (i.e., a free module remains free after extending scalars.) • For a commutative ring $R$ and a commutative R-algebra S, we have: $S\otimes _{R}R[x_{1},\dots ,x_{n}]=S[x_{1},\dots ,x_{n}];$ in fact, more generally, $S\otimes _{R}(R[x_{1},\dots ,x_{n}]/I)=S[x_{1},\dots ,x_{n}]/IS[x_{1},\dots ,x_{n}],$ where $I$ is an ideal. • Using $\mathbb {C} =\mathbb {R} [x]/(x^{2}+1),$ the previous example and the Chinese remainder theorem, we have as rings $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} =\mathbb {C} [x]/(x^{2}+1)=\mathbb {C} [x]/(x+i)\times \mathbb {C} [x]/(x-i)=\mathbb {C} ^{2}.$ This gives an example when a tensor product is a direct product. • $\mathbb {R} \otimes _{\mathbb {Z} }\mathbb {Z} [i]=\mathbb {R} [i]=\mathbb {C} .$ Examples The structure of a tensor product of quite ordinary modules may be unpredictable. Let G be an abelian group in which every element has finite order (that is G is a torsion abelian group; for example G can be a finite abelian group or $\mathbb {Q} /\mathbb {Z} $). 
Then:[11] $\mathbb {Q} \otimes _{\mathbb {Z} }G=0.$ Indeed, any $x\in \mathbb {Q} \otimes _{\mathbb {Z} }G$ is of the form $x=\sum _{i}r_{i}\otimes g_{i},\qquad r_{i}\in \mathbb {Q} ,g_{i}\in G.$ If $n_{i}$ is the order of $g_{i}$, then we compute: $x=\sum (r_{i}/n_{i})n_{i}\otimes g_{i}=\sum r_{i}/n_{i}\otimes n_{i}g_{i}=0.$ Similarly, one sees $\mathbb {Q} /\mathbb {Z} \otimes _{\mathbb {Z} }\mathbb {Q} /\mathbb {Z} =0.$ Here are some identities useful for calculation: Let R be a commutative ring, I, J ideals, M, N R-modules. Then 1. $R/I\otimes _{R}M=M/IM$. If M is flat, $IM=I\otimes _{R}M$.[proof 1] 2. $M/IM\otimes _{R/I}N/IN=M\otimes _{R}N\otimes _{R}R/I$ (because tensoring commutes with base extensions) 3. $R/I\otimes _{R}R/J=R/(I+J)$.[proof 2] Example: If G is an abelian group, $G\otimes _{\mathbb {Z} }\mathbb {Z} /n=G/nG$; this follows from 1. Example: $\mathbb {Z} /n\otimes _{\mathbb {Z} }\mathbb {Z} /m=\mathbb {Z} /{\gcd(n,m)}$; this follows from 3. In particular, for distinct prime numbers p, q, $\mathbb {Z} /p\mathbb {Z} \otimes \mathbb {Z} /q\mathbb {Z} =0.$ Tensor products can be applied to control the order of elements of groups. Let G be an abelian group. Then the multiples of 2 in $G\otimes \mathbb {Z} /2\mathbb {Z} $ are zero. Example: Let $\mu _{n}$ be the group of n-th roots of unity. It is a cyclic group and cyclic groups are classified by orders. 
Thus, non-canonically, $\mu _{n}\approx \mathbb {Z} /n$ and thus, when g is the gcd of n and m, $\mu _{n}\otimes _{\mathbb {Z} }\mu _{m}\approx \mu _{g}.$ Example: Consider $\mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} .$ Since $\mathbb {Q} \otimes _{\mathbb {Q} }\mathbb {Q} $ is obtained from $\mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} $ by imposing $\mathbb {Q} $-linearity on the middle, we have the surjection $\mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} \to \mathbb {Q} \otimes _{\mathbb {Q} }\mathbb {Q} $ whose kernel is generated by elements of the form ${r \over s}x\otimes y-x\otimes {r \over s}y$ where r and s are integers with s nonzero, and x, y are rational numbers. Since ${r \over s}x\otimes y={r \over s}x\otimes {s \over s}y=x\otimes {r \over s}y,$ the kernel actually vanishes; hence, $\mathbb {Q} \otimes _{\mathbb {Z} }\mathbb {Q} =\mathbb {Q} \otimes _{\mathbb {Q} }\mathbb {Q} =\mathbb {Q} .$ However, consider $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} $ and $\mathbb {C} \otimes _{\mathbb {C} }\mathbb {C} $. As $\mathbb {R} $-vector space, $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} $ has dimension 4, but $\mathbb {C} \otimes _{\mathbb {C} }\mathbb {C} $ has dimension 2. Thus, $\mathbb {C} \otimes _{\mathbb {R} }\mathbb {C} $ and $\mathbb {C} \otimes _{\mathbb {C} }\mathbb {C} $ are not isomorphic. Example: We propose to compare $\mathbb {R} \otimes _{\mathbb {Z} }\mathbb {R} $ and $\mathbb {R} \otimes _{\mathbb {R} }\mathbb {R} $. Like in the previous example, we have: $\mathbb {R} \otimes _{\mathbb {Z} }\mathbb {R} =\mathbb {R} \otimes _{\mathbb {Q} }\mathbb {R} $ as abelian group and thus as $\mathbb {Q} $-vector space (any $\mathbb {Z} $-linear map between $\mathbb {Q} $-vector spaces is $\mathbb {Q} $-linear). As $\mathbb {Q} $-vector space, $\mathbb {R} $ has dimension (cardinality of a basis) of continuum.
Hence, $\mathbb {R} \otimes _{\mathbb {Q} }\mathbb {R} $ has a $\mathbb {Q} $-basis indexed by a product of continua; thus its $\mathbb {Q} $-dimension is continuum. Hence, for dimension reasons, there is a non-canonical isomorphism of $\mathbb {Q} $-vector spaces: $\mathbb {R} \otimes _{\mathbb {Z} }\mathbb {R} \approx \mathbb {R} \otimes _{\mathbb {R} }\mathbb {R} .$ Consider the modules $M=\mathbb {C} [x,y,z]/(f),N=\mathbb {C} [x,y,z]/(g)$ for $f,g\in \mathbb {C} [x,y,z]$ irreducible polynomials such that $\gcd(f,g)=1.$ Then, ${\frac {\mathbb {C} [x,y,z]}{(f)}}\otimes _{\mathbb {C} [x,y,z]}{\frac {\mathbb {C} [x,y,z]}{(g)}}\cong {\frac {\mathbb {C} [x,y,z]}{(f,g)}}$ Another useful family of examples comes from changing the scalars. Notice that ${\frac {\mathbb {Z} [x_{1},\ldots ,x_{n}]}{(f_{1},\ldots ,f_{k})}}\otimes _{\mathbb {Z} }R\cong {\frac {R[x_{1},\ldots ,x_{n}]}{(f_{1},\ldots ,f_{k})}}$ Good examples of this phenomenon to look at are when $R=\mathbb {Q} ,\mathbb {C} ,\mathbb {Z} /(p^{k}),\mathbb {Z} _{p},\mathbb {Q} _{p}.$ Construction The construction of M ⊗ N takes a quotient of a free abelian group with basis the symbols m ∗ n, used here to denote the ordered pair (m, n), for m in M and n in N by the subgroup generated by all elements of the form 1. −m ∗ (n + n′) + m ∗ n + m ∗ n′ 2. −(m + m′) ∗ n + m ∗ n + m′ ∗ n 3. (m · r) ∗ n − m ∗ (r · n) where m, m′ in M, n, n′ in N, and r in R. The quotient map takes m ∗ n = (m, n) to the coset containing m ∗ n; that is, the resulting map $\otimes :M\times N\to M\otimes _{R}N,\,(m,n)\mapsto [m*n]$ is balanced, and the subgroup has been chosen minimally so that this map is balanced. The universal property of ⊗ follows from the universal properties of a free abelian group and a quotient.
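The minimality claim can be exercised on a small case. For $M=\mathbb {Z} /n$ and $N=\mathbb {Z} /m$ as $\mathbb {Z} $-modules, the map $(a,b)\mapsto ab{\bmod {\gcd(n,m)}}$ vanishes on every generator of types 1.-3. above, so it descends to $\mathbb {Z} /n\otimes _{\mathbb {Z} }\mathbb {Z} /m$. A brute-force verification of the balanced-product relations (an illustrative sketch, not library code):

```python
from math import gcd
from itertools import product

def balanced_witness(n, m):
    """Check that phi(a, b) = a*b mod gcd(n, m) vanishes on every generator
    of the subgroup quotiented out in the construction of Z/n (x)_Z Z/m."""
    g = gcd(n, m)
    phi = lambda a, b: (a * b) % g
    for a, a2, b, b2 in product(range(n), range(n), range(m), range(m)):
        # generators 1. and 2.: biadditivity of phi
        assert phi(a, (b + b2) % m) == (phi(a, b) + phi(a, b2)) % g
        assert phi((a + a2) % n, b) == (phi(a, b) + phi(a2, b)) % g
    for r in range(n * m):
        for a, b in product(range(n), range(m)):
            # generator 3.: (a . r) (x) b - a (x) (r . b) maps to zero
            assert phi((a * r) % n, b) == phi(a, (r * b) % m)
    return g

print(balanced_witness(4, 6))   # -> 2, so Z/4 (x)_Z Z/6 maps onto Z/2
```

Since $n(1\otimes 1)=m(1\otimes 1)=0$ already forces the order of $1\otimes 1$ to divide $\gcd(n,m)$, the surjection exhibited by this balanced map shows $\mathbb {Z} /n\otimes _{\mathbb {Z} }\mathbb {Z} /m\cong \mathbb {Z} /{\gcd(n,m)}$, in agreement with the identities listed earlier.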
If S is a subring of a ring R, then $M\otimes _{R}N$ is the quotient group of $M\otimes _{S}N$ by the subgroup generated by $xr\otimes _{S}y-x\otimes _{S}ry,\,r\in R,x\in M,y\in N$, where $x\otimes _{S}y$ is the image of $(x,y)$ under $\otimes :M\times N\to M\otimes _{S}N.$ In particular, any tensor product of R-modules can be constructed, if so desired, as a quotient of a tensor product of abelian groups by imposing the R-balanced product property. More category-theoretically, let σ be the given right action of R on M; i.e., σ(m, r) = m · r and τ the left action of R of N. Then, provided the tensor product of abelian groups is already defined, the tensor product of M and N over R can be defined as the coequalizer: $M\otimes R\otimes N{{{} \atop {\overset {\sigma \times 1}{\to }}} \atop {{\underset {1\times \tau }{\to }} \atop {}}}M\otimes N\to M\otimes _{R}N$ where $\otimes $ without a subscript refers to the tensor product of abelian groups. In the construction of the tensor product over a commutative ring R, the R-module structure can be built in from the start by forming the quotient of a free R-module by the submodule generated by the elements given above for the general construction, augmented by the elements r ⋅ (m ∗ n) − m ∗ (r ⋅ n). Alternately, the general construction can be given a Z(R)-module structure by defining the scalar action by r ⋅ (m ⊗ n) = m ⊗ (r ⋅ n) when this is well-defined, which is precisely when r ∈ Z(R), the centre of R. The direct product of M and N is rarely isomorphic to the tensor product of M and N. When R is not commutative, then the tensor product requires that M and N be modules on opposite sides, while the direct product requires they be modules on the same side. In all cases the only function from M × N to G that is both linear and bilinear is the zero map. As linear maps In the general case, not all the properties of a tensor product of vector spaces extend to modules. 
Yet, some useful properties of the tensor product, considered as module homomorphisms, remain. Dual module See also: Duality (mathematics) § Dual objects The dual module of a right R-module E is defined as HomR(E, R) with the canonical left R-module structure, and is denoted E∗.[12] The canonical structure is given by the pointwise operations of addition and scalar multiplication. Thus, E∗ is the set of all R-linear maps E → R (also called linear forms), with operations $(\phi +\psi )(u)=\phi (u)+\psi (u),\quad \phi ,\psi \in E^{*},u\in E$ $(r\cdot \phi )(u)=r\cdot \phi (u),\quad \phi \in E^{*},u\in E,r\in R.$ The dual of a left R-module is defined analogously, with the same notation. There is always a canonical homomorphism E → E∗∗ from E to its second dual. It is an isomorphism if E is a free module of finite rank. In general, E is called a reflexive module if the canonical homomorphism is an isomorphism. Duality pairing We denote the natural pairing of a right R-module E and its dual E∗, or of a left R-module F and its dual F∗, as $\langle \cdot ,\cdot \rangle :E^{*}\times E\to R:(e',e)\mapsto \langle e',e\rangle =e'(e)$ $\langle \cdot ,\cdot \rangle :F\times F^{*}\to R:(f,f')\mapsto \langle f,f'\rangle =f'(f).$ The pairing is left R-linear in its left argument, and right R-linear in its right argument: $\langle r\cdot g,h\cdot s\rangle =r\cdot \langle g,h\rangle \cdot s,\quad r,s\in R.$ An element as a (bi)linear map In the general case, each element of the tensor product of modules gives rise to a left R-linear map, to a right R-linear map, and to an R-bilinear form. Unlike the commutative case, in the general case the tensor product is not an R-module, and thus does not support scalar multiplication.
• Given right R-module E and right R-module F, there is a canonical homomorphism θ : F ⊗R E∗ → HomR(E, F) such that θ(f ⊗ e′) is the map e ↦ f ⋅ ⟨e′, e⟩.[13] • Given left R-module E and right R-module F, there is a canonical homomorphism θ : F ⊗R E → HomR(E∗, F) such that θ(f ⊗ e) is the map e′ ↦ f ⋅ ⟨e, e′⟩.[14] Both cases hold for general modules, and become isomorphisms if the modules E and F are restricted to being finitely generated projective modules (in particular free modules of finite rank). Thus, an element of a tensor product of modules over a ring R maps canonically onto an R-linear map, though as with vector spaces, constraints apply to the modules for this to be equivalent to the full space of such linear maps. • Given right R-module E and left R-module F, there is a canonical homomorphism θ : F∗ ⊗R E∗ → LR(F × E, R) such that θ(f′ ⊗ e′) is the map (f, e) ↦ ⟨f, f′⟩ ⋅ ⟨e′, e⟩. Thus, an element of a tensor product ξ ∈ F∗ ⊗R E∗ may be thought of as giving rise to, or acting as, an R-bilinear map F × E → R. Trace Let R be a commutative ring and E an R-module. Then there is a canonical R-linear map: $E^{*}\otimes _{R}E\to R$ induced through linearity by $\phi \otimes x\mapsto \phi (x)$; it is the unique R-linear map corresponding to the natural pairing. If E is a finitely generated projective R-module, then one can identify $E^{*}\otimes _{R}E=\operatorname {End} _{R}(E)$ through the canonical homomorphism mentioned above, and then the above is the trace map: $\operatorname {tr} :\operatorname {End} _{R}(E)\to R.$ When R is a field, this is the usual trace of a linear transformation. Example from differential geometry: tensor field The most prominent example of a tensor product of modules in differential geometry is the tensor product of the spaces of vector fields and differential forms.
More precisely, if R is the (commutative) ring of smooth functions on a smooth manifold M, then one puts ${\mathfrak {T}}_{q}^{p}=\Gamma (M,TM)^{\otimes p}\otimes _{R}\Gamma (M,T^{*}M)^{\otimes q}$ where Γ means the space of sections and the superscript $\otimes p$ means tensoring p times over R. By definition, an element of ${\mathfrak {T}}_{q}^{p}$ is a tensor field of type (p, q). As R-modules, ${\mathfrak {T}}_{p}^{q}$ is the dual module of ${\mathfrak {T}}_{q}^{p}.$[15] To lighten the notation, put $E=\Gamma (M,TM)$ and so $E^{*}=\Gamma (M,T^{*}M)$.[16] When p, q ≥ 1, for each (k, l) with 1 ≤ k ≤ p, 1 ≤ l ≤ q, there is an R-multilinear map: $E^{p}\times {E^{*}}^{q}\to {\mathfrak {T}}_{q-1}^{p-1},\,(X_{1},\dots ,X_{p},\omega _{1},\dots ,\omega _{q})\mapsto \langle X_{k},\omega _{l}\rangle X_{1}\otimes \cdots \otimes {\widehat {X_{k}}}\otimes \cdots \otimes X_{p}\otimes \omega _{1}\otimes \cdots {\widehat {\omega _{l}}}\otimes \cdots \otimes \omega _{q}$ where $E^{p}$ means $\prod _{1}^{p}E$ and the hat means a term is omitted. By the universal property, it corresponds to a unique R-linear map: $C_{l}^{k}:{\mathfrak {T}}_{q}^{p}\to {\mathfrak {T}}_{q-1}^{p-1}.$ It is called the contraction of tensors in the index (k, l). Unwinding what the universal property says, one sees: $C_{l}^{k}(X_{1}\otimes \cdots \otimes X_{p}\otimes \omega _{1}\otimes \cdots \otimes \omega _{q})=\langle X_{k},\omega _{l}\rangle X_{1}\otimes \cdots {\widehat {X_{k}}}\cdots \otimes X_{p}\otimes \omega _{1}\otimes \cdots {\widehat {\omega _{l}}}\cdots \otimes \omega _{q}.$ Remark: The preceding discussion is standard in textbooks on differential geometry (e.g., Helgason). In a way, the sheaf-theoretic construction (i.e., the language of sheaves of modules) is more natural and increasingly common; for that, see the section § Tensor product of sheaves of modules.
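In coordinates, the contraction $C_{l}^{k}$ is just a sum over one paired index; for $(p,q)=(1,1)$ it reduces to the trace map of the previous section. A small numeric sketch with plain nested lists (function names and the decomposable test tensor are illustrative, not from the text):

```python
def contract_trace(T):
    """C^1_1 on a (1,1)-tensor T^i_j, stored as a square matrix T[i][j]:
    pairing the contravariant index with the covariant one gives the trace."""
    return sum(T[i][i] for i in range(len(T)))

def contract_21(T):
    """C^1_1 on a (2,1)-tensor T^{ab}_c, stored as T[a][b][c]:
    sum over a = c, leaving a (1,0)-tensor (a vector) indexed by b."""
    n = len(T)
    return [sum(T[a][b][a] for a in range(n)) for b in range(n)]

# C^1_1 on a (1,1)-tensor is the usual trace:
assert contract_trace([[1, 2], [3, 4]]) == 5

# A decomposable (2,1)-tensor X (x) Y (x) w with X = e_1, Y = e_2, w = e^1:
# the contraction gives <X, w> * Y = 1 * Y, matching the displayed formula.
X, Y, w = [1, 0], [0, 1], [1, 0]
T = [[[X[a] * Y[b] * w[c] for c in range(2)] for b in range(2)] for a in range(2)]
assert contract_21(T) == [0, 1]
```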
Relationship to flat modules In general, $-\otimes _{R}-:{\text{Mod-}}R\times R{\text{-Mod}}\longrightarrow \mathrm {Ab} $ is a bifunctor which accepts a pair consisting of a right and a left R-module as input, and assigns it to the tensor product in the category of abelian groups. By fixing a right R-module M, a functor $M\otimes _{R}-:R{\text{-Mod}}\longrightarrow \mathrm {Ab} $ arises, and symmetrically a left R-module N could be fixed to create a functor $-\otimes _{R}N:{\text{Mod-}}R\longrightarrow \mathrm {Ab} .$ Unlike the Hom bifunctor $\mathrm {Hom} _{R}(-,-),$ the tensor functor is covariant in both inputs. It can be shown that $M\otimes _{R}-$ and $-\otimes _{R}N$ are always right exact functors, but not necessarily left exact (for example, $0\to \mathbb {Z} \to \mathbb {Z} \to \mathbb {Z} _{n}\to 0,$ where the first map is multiplication by $n$, is exact but fails to remain so after tensoring with $\mathbb {Z} _{n}$). By definition, a module T is a flat module if $T\otimes _{R}-$ is an exact functor. If $\{m_{i}\mid i\in I\}$ and $\{n_{j}\mid j\in J\}$ are generating sets for M and N, respectively, then $\{m_{i}\otimes n_{j}\mid i\in I,j\in J\}$ will be a generating set for $M\otimes _{R}N.$ Because the tensor functor $M\otimes _{R}-$ sometimes fails to be left exact, this may not be a minimal generating set, even if the original generating sets are minimal. If M is a flat module, the functor $M\otimes _{R}-$ is exact by the very definition of a flat module. If the tensor products are taken over a field F, we are in the case of vector spaces as above.
Since all F-modules are flat, the bifunctor $-\otimes _{F}-$ is exact in both positions, and if the two given generating sets are bases, then $\{m_{i}\otimes n_{j}\mid i\in I,j\in J\}$ indeed forms a basis for $M\otimes _{F}N.$ See also: pure submodule Additional structure See also: Free product of associative algebras If S and T are commutative R-algebras, then, similar to § For equivalent modules, S ⊗R T will be a commutative R-algebra as well, with the multiplication map defined by (m1 ⊗ m2) (n1 ⊗ n2) = (m1n1 ⊗ m2n2) and extended by linearity. In this setting, the tensor product becomes a fibered coproduct in the category of commutative R-algebras. (But it is not a coproduct in the category of R-algebras.) If M and N are both R-modules over a commutative ring, then their tensor product is again an R-module. If R is a ring, RM is a left R-module, and the commutator rs − sr of any two elements r and s of R is in the annihilator of M, then we can make M into a right R-module by setting mr = rm. The action of R on M factors through an action of a quotient commutative ring. In this case the tensor product of M with itself over R is again an R-module. This is a very common technique in commutative algebra. Generalization Tensor product of complexes of modules If X, Y are complexes of R-modules (R a commutative ring), then their tensor product is the complex given by $(X\otimes _{R}Y)_{n}=\sum _{i+j=n}X_{i}\otimes _{R}Y_{j},$ with the differential given by: for x in Xi and y in Yj, $d_{X\otimes Y}(x\otimes y)=d_{X}(x)\otimes y+(-1)^{i}x\otimes d_{Y}(y).$ [17] For example, if C is a chain complex of flat abelian groups and if G is an abelian group, then the homology group of $C\otimes _{\mathbb {Z} }G$ is the homology group of C with coefficients in G (see also: universal coefficient theorem).
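The sign $(-1)^{i}$ in this differential is exactly what makes $d_{X\otimes Y}\circ d_{X\otimes Y}=0$. A minimal bookkeeping sketch, assuming complexes with a single generator in each degree (the encoding and names are illustrative):

```python
def d_tensor(elem, dX, dY):
    """Differential on X (x) Y for complexes with one basis vector per degree.

    elem : {(i, j): coeff} representing sum of coeff * x_i (x) y_j
    dX   : {i: c} meaning d_X(x_i) = c * x_{i-1}; likewise dY.
    Implements d(x (x) y) = d_X(x) (x) y + (-1)^i x (x) d_Y(y).
    """
    out = {}
    for (i, j), c in elem.items():
        if i in dX:                      # first summand: lower the X-degree
            out[(i - 1, j)] = out.get((i - 1, j), 0) + c * dX[i]
        if j in dY:                      # second summand: Koszul sign (-1)^i
            out[(i, j - 1)] = out.get((i, j - 1), 0) + c * ((-1) ** i) * dY[j]
    return {k: v for k, v in out.items() if v != 0}

# X: Z --(2)--> Z in degrees 1 -> 0, and Y: Z --(3)--> Z.
dX, dY = {1: 2}, {1: 3}
once = d_tensor({(1, 1): 1}, dX, dY)       # d(x_1 (x) y_1)
assert once == {(0, 1): 2, (1, 0): -3}
assert d_tensor(once, dX, dY) == {}        # d o d = 0, thanks to the sign
```

Without the sign, the two cross terms would add instead of cancel and the square of the differential would not vanish.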
Tensor product of sheaves of modules Main article: Sheaf of modules The tensor product of sheaves of modules is the sheaf associated to the pre-sheaf of the tensor products of the modules of sections over open subsets. In this setup, for example, one can define a tensor field on a smooth manifold M as a (global or local) section of the tensor product (called the tensor bundle) $(TM)^{\otimes p}\otimes _{O}(T^{*}M)^{\otimes q}$ where O is the sheaf of rings of smooth functions on M and the bundles $TM,T^{*}M$ are viewed as locally free sheaves on M.[18] The exterior bundle on M is the subbundle of the tensor bundle consisting of all antisymmetric covariant tensors. Sections of the exterior bundle are differential forms on M. See also: Tensor product bundle One important case where one forms a tensor product over a sheaf of non-commutative rings appears in the theory of D-modules; that is, tensor products over the sheaf of differential operators. See also • Tor functor • Tensor product of algebras • Tensor product of fields • Derived tensor product Notes 1. Tensoring the exact sequence $0\to I\to R\to R/I\to 0$ with M gives $I\otimes _{R}M{\overset {f}{\to }}R\otimes _{R}M=M\to R/I\otimes _{R}M\to 0$ where f is given by $i\otimes x\mapsto ix$. Since the image of f is IM, we get the first part of 1. If M is flat, f is injective and so is an isomorphism onto its image. 2. $R/I\otimes _{R}R/J={R/J \over I(R/J)}={R/J \over (I+J)/J}=R/(I+J).$ Q.E.D. References 1. Nathan Jacobson (2009), Basic Algebra II (2nd ed.), Dover Publications 2. Hazewinkel, et al. (2004), p. 95, Prop. 4.5.1 3. Bourbaki, ch. II §3.1 4. First, if $R=\mathbb {Z} ,$ then the claimed identification is given by $f\mapsto f'$ with $f'(x)(y)=f(x,y)$. In general, $\operatorname {Hom} _{\mathbb {Z} }(N,G)$ has the structure of a right R-module by $(g\cdot r)(y)=g(ry)$. Thus, for any $\mathbb {Z} $-bilinear map f, f′ is R-linear $\Leftrightarrow f'(xr)=f'(x)\cdot r\Leftrightarrow f(xr,y)=f(x,ry).$ 5. Bourbaki, ch.
II §3.2. 6. Bourbaki, ch. II §3.8 7. The first three properties (plus identities on morphisms) say that the category of R-modules, with R commutative, forms a symmetric monoidal category. 8. Proof: (using associativity in a general form) $(M\otimes _{R}N)_{S}=(S\otimes _{R}M)\otimes _{R}N=M_{S}\otimes _{R}N=M_{S}\otimes _{S}S\otimes _{R}N=M_{S}\otimes _{S}N_{S}$ 9. Bourbaki, ch. II §4.4 10. Bourbaki, ch.II §4.1 Proposition 1 11. Example 3.6 of http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf 12. Bourbaki, ch. II §2.3 13. Bourbaki, ch. II §4.2 eq. (11) 14. Bourbaki, ch. II §4.2 eq. (15) 15. Helgason 1978, Lemma 2.3' 16. This is actually the definition of differential one-forms, global sections of $T^{*}M$, in Helgason, but is equivalent to the usual definition that does not use module theory. 17. May 1999, ch. 12 §3 18. See also Encyclopedia of Mathematics - Tensor bundle • Bourbaki, Algebra • Helgason, Sigurdur (1978), Differential geometry, Lie groups and symmetric spaces, Academic Press, ISBN 0-12-338460-5 • Northcott, D.G. (1984), Multilinear Algebra, Cambridge University Press, ISBN 613-0-04808-4. • Hazewinkel, Michiel; Gubareni, Nadezhda Mikhaĭlovna; Gubareni, Nadiya; Kirichenko, Vladimir V. (2004), Algebras, rings and modules, Springer, ISBN 978-1-4020-2690-4. • May, Peter (1999). A concise course in algebraic topology (PDF). University of Chicago Press. 
\begin{document} \title [Area-minimizing hypersurfaces in manifolds] {Area-minimizing hypersurfaces in manifolds of Ricci curvature bounded below} \author{Qi Ding} \address{Shanghai Center for Mathematical Sciences, Fudan University, Shanghai 200438, China} \email{[email protected]} \begin{abstract} In this paper, we study area-minimizing hypersurfaces in manifolds of Ricci curvature bounded below using Cheeger-Colding theory. Let $N_i$ be a sequence of smooth manifolds with Ricci curvature $\ge-n\kappa^2$ on $B_{1+\kappa'}(p_i)$ for constants $\kappa\ge0$, $\kappa'>0$, such that the volume of $B_1(p_i)$ has a uniform positive lower bound. Assume $B_1(p_i)$ converges to a metric ball $B_1(p_\infty)$ in the Gromov-Hausdorff sense. For area-minimizing hypersurfaces $M_i$ in $B_1(p_i)$ with $\p M_i\subset\p B_1(p_i)$, we prove the continuity of the volume function of area-minimizing hypersurfaces with respect to the induced Hausdorff topology. In particular, each limit $M_\infty$ of the $M_i$ is area-minimizing in $B_1(p_\infty)$ provided $B_1(p_\infty)$ is a smooth Riemannian manifold. By a blow-up argument, we get sharp dimensional estimates for the singular set of $M_\infty$ in $\mathcal{R}$, and for $\mathcal{S}\cap M_\infty$. Here, $\mathcal{R}$, $\mathcal{S}$ are the regular and singular parts of $B_1(p_\infty)$, respectively. \end{abstract} \maketitle \section{Introduction} In Euclidean space, area-minimizing hypersurfaces have been studied intensively for several decades (see \cite{Gi}\cite{LYa}\cite{S} for a systematic introduction). The theory plays an important role in the famous Bernstein theorem for minimal graphs in Euclidean space. Let $\Omega$ be an open subset in $\mathbb R^{n+1}$, and let $\Sigma_i$ be a sequence of area-minimizing hypersurfaces in $\Omega$ with $\p \Sigma_i\subset\p\Omega$.
The compactness theorem tells us that there are an area-minimizing hypersurface $\Sigma$ in $\Omega$ with $\p\Sigma\subset\p\Omega$ and a subsequence $i_j$ such that $\Sigma_{i_j}$ converges to $\Sigma$ in the weak sense (see also Theorem 37.2 in \cite{S}). In particular, for any open $V\subset\subset\Omega$ we have (see Lemma 9.1 in \cite{Gi} for the version for functions of bounded variation) \begin{equation}\aligned\label{SiijVEuc} \lim_{j\rightarrow\infty}\mathcal{H}^n(\Sigma_{i_j}\cap V)=\mathcal{H}^n(\Sigma\cap V)\qquad \mathrm{provided} \ \mathcal{H}^n(\Sigma\cap\p V)=0. \endaligned \end{equation} For any area-minimizing hypersurface $M$ in $\mathbb R^{n+1}$ (or a smooth manifold), De Giorgi \cite{DG0}, Federer \cite{Fe} and Reifenberg \cite{Re} proved that the singular set of $M$ has Hausdorff dimension $\le n-7$. Recently, Cheeger-Naber \cite{CN} and Naber-Valtorta \cite{NV} made important progress on quantitative stratifications of the singular sets of stationary varifolds. In particular, for an area-minimizing hypersurface $M$ they proved that the second fundamental form of $M$ has \emph{a\,priori} estimates in $L^7_{weak}$ on $M$ \cite{NV}. Needless to say, all area-minimizing hypersurfaces in smooth manifolds are stable. The theory of stable minimal surfaces is a powerful tool for studying the topology of 3-dimensional manifolds; see \cite{AR}\cite{FS}\cite{Liu}\cite{SY}\cite{SY0} for instance. Generally speaking, local calculations concerning minimal hypersurfaces in manifolds usually involve the sectional curvature of the ambient manifolds. However, many properties of minimal hypersurfaces may have nothing to do with the sectional curvature of the ambient manifolds. In this paper, we study area-minimizing hypersurfaces in manifolds of Ricci curvature bounded below using Cheeger-Colding theory \cite{CCo1,CCo2,CCo3}.
To overcome the difficulty caused by the lack of a sectional curvature condition, it is natural to take the possible limits in Ricci limit spaces, and then return to the original problems using the properties of such limits. In this sense, it is worth understanding the limits of a sequence of area-minimizing hypersurfaces in a sequence of geodesic balls with Ricci curvature uniformly bounded below. If the volume of unit geodesic balls goes to zero, the limit of the hypersurfaces may coincide with the whole ambient Ricci limit space. So we assume the non-collapsing condition on the ambient manifolds in most cases, i.e., the volume of geodesic balls has a uniform positive lower bound. In a sequel, we will study Sobolev and Poincar$\mathrm{\acute{e}}$ inequalities on minimal graphs over manifolds \cite{D2}. Let $S$ be a smooth minimal hypersurface in an $(n+1)$-dimensional complete Riemannian manifold $N$. Using Jacobi fields, Heintze-Karcher \cite{HK} established the Laplacian comparison for distance functions to $S$ outside cut loci in terms of the Ricci curvature of the ambient spaces. Similar to Schoen-Yau's argument in \cite{SY2}, the comparison theorem holds globally in the distribution sense even when $S$ is replaced by the support of an $n$-rectifiable stationary varifold in $N$. Then we are able to estimate a positive lower bound for the volume of $S\cap B_1(p)\subset N$ using the volume and the lower bound of the Ricci curvature of $B_1(p)$ (compare \cite{IK}\cite{Mm}). As an application, we obtain a non-existence result for complete area-minimizing hypersurfaces without assumptions on the sectional curvature of the ambient manifolds (compare with Theorem 1 of Anderson \cite{An}; see Theorem \ref{Nonexistminimizinghypersurface}). If $M$ is an area-minimizing hypersurface in $B_1(p)\subset N$ with $\p M\subset\p B_1(p)$, then the volume of $M$ in $B_1(p)$ has a uniform upper bound by a constant, obtained via the universal covers of the ambient manifolds (see Lemma \ref{upbdareaM}).
With lower and upper bounds for the volume of area-minimizing hypersurfaces, we are able to consider their possible limits in Ricci limit spaces in the following sense. Let $N_i$ be a sequence of $(n+1)$-dimensional smooth manifolds of Ricci curvature $\ge-n\kappa^2$ on $B_{1+\kappa'}(p_i)$ for constants $\kappa\ge0$, $\kappa'>0$. By Gromov's precompactness theorem, after passing to a subsequence, $\overline{B_1(p_i)}$ converges to a metric ball $\overline{B_1(p_\infty)}$ in the Gromov-Hausdorff sense. For each $i$, let $\Phi_i:\,B_1(p_i)\rightarrow B_1(p_\infty)$ be an $\epsilon_i$-Hausdorff approximation with $\epsilon_i\rightarrow0$. We further assume $\mathcal{H}^{n+1}(B_1(p_i))\ge v$ for some positive constant $v$. Let $M_i$ be an area-minimizing hypersurface in $B_1(p_i)$ with $\p M_i\subset\p B_1(p_i)$. Suppose that $\Phi_i(M_i)$ converges in the Hausdorff sense to a closed set $M_\infty$ in $\overline{B_1(p_\infty)}$ as $i\rightarrow\infty$, and that $M_\infty\cap B_1(p_\infty)\neq\emptyset$. Colding \cite{C} and Cheeger-Colding \cite{CCo1} proved the volume convergence of $B_1(p_i)$ under the Gromov-Hausdorff topology. Based on their results, we get the continuity of the volume function of $M_i$ in the following sense. \begin{theorem}\label{1.2} For any open set $\Omega\subset\subset B_1(p_\infty)$, let $\Omega_i\subset B_1(p_i)$ be open with $\Phi_i(\Omega_i)\rightarrow\Omega$ in the Hausdorff sense. Then \begin{equation}\aligned\label{HnMiOmiMOm000} \limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{\Omega_i}\right)\le&\mathcal{H}^n\left(M_\infty\cap \overline{\Omega}\right)\\ \endaligned \end{equation} and \begin{equation}\aligned\label{HnMiOmsiMOms000} \mathcal{H}^n\left(M_\infty\cap B_s(\Omega)\right)\le&\liminf_{i\rightarrow\infty}\mathcal{H}^n(M_i\cap B_s(\Omega_i))\\ \endaligned \end{equation} for any $B_s(\Omega)\subset\subset B_1(p_\infty)$ with $s>0$.
Moreover, $M_\infty$ is an area-minimizing hypersurface in $B_1(p_\infty)$ provided $\overline{B_1(p_\infty)}$ is a smooth Riemannian manifold. \end{theorem} \eqref{SiijVEuc} can be seen as a special case of \eqref{HnMiOmiMOm000}\eqref{HnMiOmsiMOms000} in Euclidean space. From \eqref{HnMiOmiMOm000}\eqref{HnMiOmsiMOms000}, we immediately have \begin{equation}\aligned\label{HnMBtplim000} &\mathcal{H}^n\left(M_\infty\cap \overline{B_t(p_\infty)}\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^{n}\left(M_i\cap \overline{B_t(p_i)}\right)\\ \ge&\liminf_{i\rightarrow\infty}\mathcal{H}^{n}(M_i\cap B_t(p_i))\ge\mathcal{H}^n(M_\infty\cap B_t(p_\infty)) \endaligned \end{equation} for any $t\in(0,1)$. In particular, the inequalities in \eqref{HnMBtplim000} become equalities when $B_1(p_\infty)$ is a Euclidean ball. Since $N_i$ may have unbounded sectional curvature, we cannot directly prove Theorem \ref{1.2} by following the idea from the Euclidean case. The proof of Theorem \ref{1.2} will be given by Theorem \ref{Conv0}, Lemma \ref{UpMinfty*} and Lemma \ref{LimMi}. In $\S4$ we first prove \eqref{HnMBtplim000} and that $M_\infty$ is minimizing in $B_1(p_\infty)$ under the smoothness condition on $\overline{B_1(p_\infty)}$. Then, using the results in $\S4$ and covering techniques, we prove \eqref{HnMiOmiMOm000}\eqref{HnMiOmsiMOms000} in $\S5$ by combining Cheeger-Colding theory and the theory of area-minimizing hypersurfaces in Euclidean space. The proof of \eqref{HnMBtplim000} for smooth $\overline{B_1(p_\infty)}$ will be divided into two parts (see Theorem \ref{Conv0}). On the one hand, we prove that $M_\infty$ is area-minimizing and that \eqref{HnMBtplim000} holds in the sense of Minkowski content, where we use the volume convergence of unit geodesic balls by Colding \cite{C}, Cheeger-Colding \cite{CCo1}.
On the other hand, for any $x\in M_\infty$ there is a constant $r_x>0$ so that $B_{r_x}(x)\cap M_\infty$ can be written as the boundary of an open set of finite perimeter in $B_{r_x}(x)$, using the volume convergence. Then, combining Theorem 2.104 in \cite{AFP} by Ambrosio-Fusco-Pallara, we can prove the equivalence of Hausdorff measure and Minkowski content for $M_\infty$. As an application, the local volume of area-minimizing hypersurfaces in a class of manifolds can be controlled by large-scale conditions (see Theorem \ref{IntReg}). Let $\mathcal{R}$, $\mathcal{S}$ denote the regular and singular parts of $B_1(p_\infty)$, respectively. Let $\mathcal{S}_{M_\infty}$ denote the singular set of $M_\infty$ in $\mathcal{R}$, i.e., the set of all points $x\in\mathcal{R}$ such that one of the tangent cones of $M_\infty$ at $x$ is not flat. In fact, we can prove that all the possible tangent cones of $M_\infty$ at $x$ are not flat provided $x\in\mathcal{S}_{M_\infty}$. By studying the local cone structure of $M_\infty$, we have the following sharp dimensional estimates for $\mathcal{S}_{M_\infty}$ and $M_\infty\cap\mathcal{S}$ (see Lemma \ref{codim7} and Lemma \ref{codim2}). \begin{theorem}\label{dimestn-7n-2} $\mathcal{S}_{M_\infty}$ has Hausdorff dimension $\le n-7$ for $n\ge7$, and it is empty for $n<7$; $\mathcal{S}\cap M_\infty$ has Hausdorff dimension $\le n-2$. \end{theorem} Here, the codimension 7 is sharp if we choose $N_\infty$ to be Euclidean space. Suppose that $N_i$ splits off a line $\mathbb R$ isometrically, i.e., $N_i=\Sigma_i\times\mathbb R$ for some smooth manifold $\Sigma_i$. If we choose $M_i$ as $\Sigma_i\times\{0\}\subset\Sigma_i\times\mathbb R$, then the codimension 2 in the above theorem is sharp, as the singular part of the limit of $\Sigma_i$ may have codimension 2. \textbf{Remark.} We do not know whether $M_\infty$ is associated with the rectifiable current defined in \cite{AK} by Ambrosio-Kirchheim.
If we further assume $M_i=\p E_i\cap B_1(p_i)$ for some set $E_i\subset B_1(p_i)$ for each $i$, then the results in Theorem \ref{dimestn-7n-2} can be generalized to the setting of RCD spaces as in \cite{MS} by Mondino-Semola (which appeared on arXiv at about the same time as the present paper). Theorem \ref{dimestn-7n-2} will play a crucial role in obtaining the Sobolev inequality and the Neumann-Poincar\'e inequality on minimal graphs over manifolds of Ricci curvature bounded below in \cite{D2}. \emph{Acknowledgments.} The author would like to thank J$\mathrm{\ddot{u}}$rgen Jost, Yuanlong Xin, Hui-Chun Zhang, Xi-Ping Zhu and Xiaohua Zhu for their interest and valuable discussions. The author would like to thank Gioacchino Antonelli and Daniele Semola for careful reading and valuable advice. The author is partially supported by NSFC 11871156 and NSFC 11922106. \section{Preliminary} Let $(X,d)$ be a complete metric space. For any subset $E\subset X$ and any constant $s\ge0$, let $\mathcal{H}^s(E)$ denote the $s$-dimensional Hausdorff measure of $E$. Namely, \begin{equation}\aligned\label{SH} \mathcal{H}^s(E)=\lim_{\delta\rightarrow0}\mathcal{H}^s_\delta(E) \endaligned \end{equation} with \begin{equation}\aligned\label{mathcalHnep} \mathcal{H}_\delta^s(E)=\f{\omega_s}{2^s}\inf\left\{\sum_{i=1}^\infty (\mathrm{diam} U_i)^s\Big|\ E\subset\bigcup_{i=1}^\infty U_i\ \mathrm{for\ Borel\ sets}\ U_i\subset X,\ \mathrm{diam} U_i<\delta\right\}, \endaligned \end{equation} where $\omega_s=\f{\pi^{s/2}}{\Gamma(\f s2+1)}$, and $\Gamma(r)=\int_0^\infty e^{-t}t^{r-1}dt$ is the gamma function for $0<r<\infty$. In particular, for integer $s\ge1$, $\omega_s$ is the volume of the unit ball in $\mathbb R^s$, and $\omega_0=1$. For each $p\in X$, let $B_r(p)$ denote the geodesic ball in $X$ with radius $r$ centered at $p$.
More generally, for any set $K$ in $X$, let $\rho_K=d(\cdot,K)=\inf_{y\in K}d(\cdot,y)$ be the distance function from $K$, and let $B_t(K)$ denote the $t$-neighborhood of $K$ in $X$ defined by $\{x\in X|\ d(x,K)<t\}$. Moreover, $\rho_K$ is a Lipschitz function on $X$ with Lipschitz constant $\le1$. Let $\Omega\subset X$ be a measurable set of Hausdorff dimension $m\ge1$. We define the upper and the lower $(m-1)$-dimensional \emph{Minkowski contents} of $K$ in $\Omega$ by \begin{equation}\aligned \mathcal{M}^*(K,\Omega)=&\limsup_{\delta\rightarrow0}\f1{2\delta}\mathcal{H}^{m}(\Omega\cap B_\delta(K)\setminus K)\\ \mathcal{M}_*(K,\Omega)=&\liminf_{\delta\rightarrow0}\f1{2\delta}\mathcal{H}^{m}(\Omega\cap B_\delta(K)\setminus K). \endaligned \end{equation} If $\mathcal{M}^*(K,\Omega)=\mathcal{M}_*(K,\Omega)$, the common value is denoted by $\mathcal{M}(K,\Omega)$. Let $N$ be an $(n+1)$-dimensional complete Riemannian manifold with Ricci curvature $\ge-n\kappa^2$ on $B_R(p)$ for constants $\kappa\ge0$ and $R>0$. For any integer $k\ge0$, let $V^k_s(r)$ denote the volume of a geodesic ball with radius $r$ in a $k$-dimensional space form with constant sectional curvature $-s^2$. In fact, $V^k_s(r)=k\omega_k s^{1-k}\int_0^r\sinh^{k-1}(st)dt$, and $\left(V^{k}_{s}(r)\right)'=k\omega_ks^{1-k}\sinh^{k-1}(s r)$ for each $r>0$. By the Bishop-Gromov volume comparison, \begin{equation}\aligned\label{BiVolN} 1\ge\f{\mathcal{H}^{n+1}(B_{r_1}(p))}{V^{n+1}_{\kappa}(r_1)}\ge\f{\mathcal{H}^{n+1}(B_{r_2}(p))}{V^{n+1}_{\kappa}(r_2)} \endaligned \end{equation} for any $0<r_1\le r_2\le R$, and \begin{equation}\aligned\label{HnpBr} \mathcal{H}^{n}(\p B_{r}(p))\le \f{\left(V^{n+1}_{\kappa}(r)\right)'}{V^{n+1}_{\kappa}(r)}\mathcal{H}^{n+1}(B_{r}(p))\le\left(V^{n+1}_{\kappa}(r)\right)'=(n+1)\omega_{n+1}\f{\sinh^n(\kappa r)}{\kappa^n} \endaligned \end{equation} for any $0<r\le R$. Here, we interpret $\sinh(\kappa r)/\kappa$ as $r$ for $\kappa=0$.
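As a consistency check of the convention $\sinh(\kappa r)/\kappa=r$ at $\kappa=0$ (a routine limit, recorded here for the reader's convenience), the model volumes degenerate to the Euclidean ones:

```latex
\lim_{\kappa\rightarrow0}V^{k}_{\kappa}(r)
=k\omega_k\lim_{\kappa\rightarrow0}\int_0^r\left(\frac{\sinh(\kappa t)}{\kappa}\right)^{k-1}dt
=k\omega_k\int_0^r t^{k-1}\,dt=\omega_k r^{k},
```

so \eqref{BiVolN} reduces to the classical Bishop-Gromov inequality with Euclidean reference volumes when $\kappa=0$.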
Let us recall the isoperimetric inequality on $B_r(p)$ (see \cite{An1,B,Cr} for instance, or via the heat kernel \cite{Gri,LW}): for any $r\in(0,R/2]$ there is a constant $\alpha_{n,\kappa r}>0$ depending only on $n,\kappa r$ such that for any open set $\Omega\subset B_r(p)$, one has \begin{equation}\label{isoperi} \f{\mathcal{H}^{n}(\p\Omega)}{\left(\mathcal{H}^{n+1}(\Omega)\right)^{\f{n}{n+1}}}\ge \alpha_{n,\kappa r}\left(\f{\mathcal{H}^{n+1}(B_r(p))}{V^{n+1}_{\kappa}(r)}\right)^{\f1{n+1}}. \end{equation} Let exp$_p$ denote the exponential map from the tangent space $T_pN$ into $N$. For any two constants $R,\tau>0$, let $\Sigma$ be an embedded $C^2$-hypersurface in $B_{R+\tau}(p)$ with $\p \Sigma\subset\p B_{R+\tau}(p)$ (where $\p B_{R+\tau}(p)$, $\p \Sigma$ may be empty). By the uniqueness of geodesics, if $\rho_\Sigma$ is differentiable at $x\in B_{R}(p)\cap B_\tau(\Sigma)\setminus\Sigma$, then there exist a unique $x_\Sigma\in \Sigma$ and a unique non-zero vector $v_x\in T_{x_\Sigma}N$ with $|v_x|=\rho_\Sigma(x)$ such that $\mathrm{exp}_{x_\Sigma}(v_x)=x$. Let $\gamma_x(t)$ denote the geodesic $\mathrm{exp}_{x_\Sigma}(tv_x/|v_x|)$ from $t=0$ to $t=|v_x|$. In particular, $\rho_\Sigma$ is smooth at $\gamma_x(t)$ for $t\in[0,|v_x|]$. Let $H_x(t), A_x(t)$ denote the mean curvature (pointing out of $\{\rho_\Sigma<t\}$) and the second fundamental form of the level set $\{\rho_\Sigma=t\}$ at $\gamma_x(t)$, respectively. Let $\Delta_N$ denote the Laplacian of $N$ with respect to its Riemannian metric. From Heintze-Karcher \cite{HK}, one has \begin{equation}\label{Hxtge} -\Delta_N\rho_\Sigma=H_x(\rho_\Sigma)\ge H_x(0)-n\kappa\tanh\left(\kappa\rho_\Sigma\right) \end{equation} on the geodesic $\mathrm{exp}_{x_\Sigma}(tv_x/|v_x|)$ for $t\in[0,|v_x|]$.
In fact, from the variational argument \begin{equation}\label{HtRic} \f{\p H_x}{\p t}=|A_x|^2+Ric\left(\dot{\gamma}_x,\dot{\gamma}_x\right)\ge\f1n|H_x|^2-n\kappa^2, \end{equation} we can solve the differential inequality \eqref{HtRic} and obtain \eqref{Hxtge}. For an open set $U\subset N$, we suppose that $U$ is properly embedded in $\mathbb R^{n+m}$ for some integer $m\ge1$. A set $S$ in $U$ is said to be \emph{countably $n$-rectifiable} if $S\subset S_0\cup\bigcup_{j=1}^\infty F_j(\mathbb R^n)$, where $\mathcal{H}^n(S_0)=0$, and $F_j:\, \mathbb R^n\rightarrow \mathbb R^{n+m}$ are Lipschitz mappings for all integers $j\ge1$. We further assume that $S$ is compact in $N$, and that there are a Radon measure $\mu$ in $N$ and a constant $\gamma>0$ such that $\mu$ is absolutely continuous with respect to $\mathcal{H}^n$ and $\mu(B_r(x))\ge\gamma r^n$ for any $x\in S$ and any $r\in(0,1)$. Then from Ambrosio-Fusco-Pallara (\cite{AFP}, p. 110), \begin{equation}\aligned\label{MSHS} \mathcal{M}(S,N)=\mathcal{M}(S,U)=\mathcal{H}^n(S). \endaligned \end{equation} Let $G_{n,m}$ denote the Grassmann manifold of all $n$-dimensional subspaces of $\mathbb R^{n+m}$. An $n$-varifold $V$ in $U$ is a Radon measure on $$G_{n,m}(U)=\{(x,T)|\, x\in U,\, T\in G_{n,m}\cap T_xN\}.$$ An $n$-rectifiable varifold in $U$ is an $n$-varifold in $U$ supported on a countably $n$-rectifiable set. An $n$-varifold $V$ is said to be \emph{stationary} in $U$ if \begin{equation}\aligned\nonumber \int_{G_{n,m}(U)}\mathrm{div}_\omega YdV(x,\omega)=0 \endaligned \end{equation} for each $Y\in C^\infty_c(U,\mathbb R^{n+m})$ with $Y(x)\subset T_xN$ for each $x\in U$. The singular set of $V$ is the set of all points $x$ in spt$V\cap U$ such that one of the tangent cones of $V$ at $x$ is not flat. The notion of $n$-rectifiable stationary varifolds obviously generalizes the notion of minimal hypersurfaces. See \cite{LYa}\cite{S} for more details on varifolds and currents.
Let $\mathcal{D}^n(U)$ denote the set of all smooth $n$-forms on $U$ with compact supports in $U$. Let $\mathcal{D}_n(U)$ denote the set of $n$-currents in $U$, which are continuous linear functionals on $\mathcal{D}^n(U)$. For each $T\in \mathcal{D}_n(U)$ and each open set $W$ in $U$, one defines the mass of $T$ on $W$ by \begin{equation*}\aligned \mathbb{M}(T\llcorner W)=\sup_{|\omega|_U\le1,\omega\in\mathcal{D}^n(U),\mathrm{spt}\omega\subset W}T(\omega) \endaligned \end{equation*} with $|\omega|_U=\sup_{x\in U}\langle\omega(x),\omega(x)\rangle^{1/2}$. Let $\p T$ be the boundary of $T$ defined by $\p T(\omega')=T(d\omega')$ for any $\omega'\in\mathcal{D}^{n-1}(U)$. For a countably $n$-rectifiable set $M\subset U$ with orientation $\xi$ (i.e., $\xi(x)$ is an $n$-vector representing $T_xM$ for $\mathcal{H}^n$-a.e. $x$), there is an $n$-current $\llbracket M\rrbracket\in\mathcal{D}_n(U)$ associated with $M$, i.e., \begin{equation*}\aligned \llbracket M\rrbracket(\omega)=\int_M\langle \omega,\xi\rangle, \qquad \omega\in\mathcal{D}^n(U). \endaligned \end{equation*} A countably $n$-rectifiable set $M$ (with orientation $\xi$) is said to be an \emph{area-minimizing} hypersurface in $U$ if the associated current $\llbracket M\rrbracket$ is a minimizing current in $U$; namely, $\mathbb{M}(\llbracket M\rrbracket\llcorner W)\le \mathbb{M}(T\llcorner W)$ whenever $W\subset\subset U$, $\p T=\p\llbracket M\rrbracket$ in $U$, and spt$(T-\llbracket M\rrbracket)$ is compact in $W$ (see \cite{S} for instance). In particular, $M$ does not contain any closed minimal hypersurface. As in the Euclidean case, $M$ is one-sided, and smooth outside a closed set of Hausdorff dimension $\le n-7$ (\cite{DG0}\cite{Fe}\cite{Re}). If $Z_1,Z_2$ are both metric spaces, then an admissible metric on the disjoint union $Z_1\coprod Z_2$ is a metric that extends the given metrics on $Z_1$ and $Z_2$.
With this one can define the Gromov-Hausdorff distance as $$d_{GH}(Z_1,Z_2)=\inf\left\{d_H(Z_1,Z_2)\Big|\ \mathrm{admissible\ metrics\ on}\ Z_1\coprod Z_2\right\},$$ where $d_H(Z_1,Z_2)$ is the Hausdorff distance of $Z_1,Z_2$. For any $\epsilon>0$, a map $\Phi:\ Z_1\rightarrow Z_2$ is said to be an $\epsilon$-\emph{Hausdorff approximation} (see \cite{Fu} for example) if $Z_2$ is the $\epsilon$-neighborhood of the image $\Phi(Z_1)$ and $$\left|d(x_1,x_2)-d(\Phi(x_1),\Phi(x_2))\right|\le\epsilon\qquad \ \mathrm{for\ every}\ x_1,x_2\in Z_1.$$ If $d_{GH}(Z_1,Z_2)\le\epsilon$, then there exists a $3\epsilon$-Hausdorff approximation $Z_1\rightarrow Z_2$; conversely, if there exists an $\epsilon$-Hausdorff approximation $Z_1\rightarrow Z_2$, then $d_{GH}(Z_1,Z_2)\le 3\epsilon$. Let $\{Z_i\}_{i=1}^\infty$ be a sequence of metric spaces with a point $p_i\in Z_i$, and $K_i$ be a subset of $Z_i$ for each $i$. Let $Z_\infty$ be a metric space with a point $p_\infty\in Z_\infty$. \begin{itemize} \item Case 1: $Z_\infty$ is compact with $\lim_{i\rightarrow\infty}d_{GH}(Z_i,Z_\infty)=0$. Namely, there are $\epsilon_i$-Hausdorff approximations $\Phi_i:\ Z_i\rightarrow Z_\infty$ for some sequence $\epsilon_i\rightarrow0$. Then from the Blaschke theorem (see Theorem 7.3.8 in \cite{BBI} for instance), there is a closed subset $K_\infty$ of $Z_\infty$ such that, after passing to a subsequence, $\Phi_{i}(K_i)$ converges to $K_\infty$ in the Hausdorff sense. \item Case 2: $Z_\infty$ is non-compact such that $(B_{R_i}(p_i),p_i)$ converges to a metric space $(Z_\infty,p_\infty)$ in the pointed Gromov-Hausdorff sense for some sequence $R_i\rightarrow\infty$. Then there are sequences $r_i\rightarrow\infty$, $\epsilon_i\rightarrow0$ and a sequence of $\epsilon_{i}$-Hausdorff approximations $\Phi_{i}:\ B_{r_i}(p_i)\rightarrow B_{r_i}(p_\infty)$.
From the Blaschke theorem, there is a closed set $K_\infty$ of $Z_\infty$ such that for each $0<r<\infty$, $\Phi_{i}\left(K_i\cap B_r(p_i)\right)$ converges to $K_\infty\cap B_r(p_\infty)$ in the Hausdorff sense after passing to a (diagonal) subsequence. \end{itemize} It is easy to see that $K_\infty$ depends on the choice of $\Phi_i$ only up to the isometry group of $Z_\infty$. For simplicity, we say that $K_i$ converges \textbf{in the induced Hausdorff sense} to $K_\infty$ in case 1, and that $(K_i,p_i)$ converges \textbf{in the induced Hausdorff sense} to $(K_\infty,p_\infty)$ in case 2, unless we need to emphasize the Hausdorff approximations $\Phi_i$. Let $N_i$ be a sequence of $(n+1)$-dimensional smooth manifolds of Ricci curvature \begin{equation}\aligned\label{Ric} \mathrm{Ric}\ge-n\kappa^2\qquad \mathrm{on}\ B_{1+\kappa'}(p_i) \endaligned \end{equation} for constants $\kappa\ge0$, $\kappa'>0$. By Gromov's precompactness theorem (see \cite{GLP} for instance), after passing to a subsequence, $B_1(p_i)$ converges to a metric ball $B_1(p_\infty)$ in the Gromov-Hausdorff sense. From Cheeger-Colding theory \cite{CCo1,CCo2,CCo3}, there is a unique Radon measure $\nu_\infty$ on $B_1(p_\infty)$, which is a renormalized limit measure on $B_1(p_\infty)$ given by \begin{equation}\aligned\label{nuinfty} \nu_\infty(B_r(y))=\lim_{i\rightarrow\infty}\f{\mathcal{H}^{n+1}(B_r(y_i))}{\mathcal{H}^{n+1}(B_1(p_i))} \endaligned \end{equation} for every ball $B_r(y)\subset B_1(p_\infty)$ and for each sequence $y_i\in B_1(p_i)$ converging to $y$. Then $\nu_\infty(B_1(p_\infty))=1$ and $\nu_\infty$ satisfies the Bishop-Gromov comparison theorem in the form \eqref{BiVolN}. We further assume that $B_1(p_i)$ satisfies a non-collapsing condition, i.e., \begin{equation}\aligned\label{Vol} \mathcal{H}^{n+1}(B_1(p_i))\ge v\qquad \mathrm{for\ some\ constant}\ v>0.
\endaligned \end{equation} Then $\nu_\infty$ is just a multiple of the Hausdorff measure $\mathcal{H}^{n+1}$, and the volume convergence \begin{equation}\aligned\label{VolCOV} \mathcal{H}^{n+1}(B_1(p_\infty))=\lim_{i\rightarrow\infty}\mathcal{H}^{n+1}(B_1(p_i)) \endaligned \end{equation} holds by Colding \cite{C} and Cheeger-Colding \cite{CCo1}. Let $\mathcal{R}$ be the regular set of $B_1(p_\infty)$. Namely, a point $x\in \mathcal{R}$ if and only if each tangent cone at $x$ of $B_1(p_\infty)$ is $\mathbb R^{n+1}$. Let $\mathcal{S}=B_1(p_\infty)\setminus\mathcal{R}$ denote the singular set of $B_1(p_\infty)$. Then dim$(\mathcal{S})\le n-1$ from Cheeger-Colding \cite{CCo1}. In general, $\mathcal{S}$ may not be closed in $B_1(p_\infty)$. For any $\epsilon>0$, let $\mathcal{R}_\epsilon\supset\mathcal{R}$ and $\mathcal{S}_\epsilon$ be two subsets in $B_1(p_\infty)$ defined by \begin{equation}\aligned\label{RSep} \mathcal{R}_\epsilon=\left\{x\in B_1(p_\infty)\Big|\,\sup_{0<s\le r}s^{-1}d_{GH}(B_s(x),B_s(0))<\epsilon \ \mathrm{for\ some}\ r>0\right\},\ \mathcal{S}_\epsilon=B_1(p_\infty)\setminus\mathcal{R}_\epsilon. \endaligned \end{equation} Here, $B_r(0)$ is the ball in $\mathbb R^{n+1}$ centered at the origin with radius $r$. Let $\Lambda_\epsilon$ be a positive function on $\mathcal{R}_\epsilon$ defined by \begin{equation}\aligned\label{Laep} \Lambda_\epsilon(x)=\sup\{r>0|\, d_{GH}(B_{s}(x),B_{s}(0))<\epsilon s\ \mathrm{for\ each}\ 0<s\le r\} \endaligned \end{equation} for any $x\in \mathcal{R}_\epsilon$. From the Bishop-Gromov volume comparison and Theorem 0.8 in \cite{C}, $\mathcal{S}_\epsilon$ is closed in $B_1(p_\infty)$, and $\Lambda_\epsilon$ is a lower semicontinuous function on $\mathcal{R}_\epsilon$.
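For instance (a model example for illustration; the constant $c(n,r)$ below is part of this sketch), let $X=C\mathbb{S}^1_r\times\mathbb R^{n-1}$ with $r\in(0,1)$ be the flat cone with edge $L=\{o\}\times\mathbb R^{n-1}$, where $o$ is the vertex of $C\mathbb{S}^1_r$:

```latex
% Model example (illustration): X = C S^1_r x R^{n-1}, 0 < r < 1, edge L = {o} x R^{n-1}.
Every point $x\in X\setminus L$ has a neighborhood isometric to a Euclidean ball, so
$x\in\mathcal{R}_\epsilon$ for every $\epsilon>0$, with $\Lambda_\epsilon(x)$ bounded
below in terms of $d(x,L)$. At a point $x\in L$, the ball $B_s(x)$ is isometric to
the rescaled ball $s\cdot B_1(x)$, and the volume gap
\begin{equation*}
\mathcal{H}^{n+1}(B_s(x))=r\,\omega_{n+1}s^{n+1}<\mathcal{H}^{n+1}(B_s(0))
\end{equation*}
forces $d_{GH}(B_s(x),B_s(0))\ge c(n,r)\,s$ for some constant $c(n,r)>0$ by volume
continuity (cf. Theorem 0.8 in \cite{C}), so $x\in\mathcal{S}_\epsilon$ whenever
$\epsilon\le c(n,r)$.
```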
For each integer $0\le k\le n-1$ and constants $\epsilon>0,r>0$, let $\mathcal{S}^k_{\epsilon,r}$ be the set of points $y\in B_1(p_\infty)$ such that for any metric space $X$, any $(0^{k+1},x)\in\mathbb R^{k+1}\times X$ and any $s>r$, we have $d_{GH}\left(B_s(y),B_s((0^{k+1},x))\right)\ge\epsilon s.$ Put \begin{equation}\aligned\label{Sk} \mathcal{S}^k=\cup_{\epsilon>0}\cap_{r>0}\mathcal{S}^k_{\epsilon,r}. \endaligned \end{equation} Then $\mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}$ is the top stratum of the singular set, and dim$(\mathcal{S}^k)\le k$ from Cheeger-Colding \cite{CCo1}. See Cheeger-Naber \cite{CN13} for further results on the singular sets. For any $x\in \mathcal{S}^{n-1}\setminus\mathcal{S}^{n-2}$, there is a tangent cone $C_x$ of $B_1(p_\infty)$ at $x$ which splits off $\mathbb R^{n-1}$ isometrically. Moreover, there is a round circle $\mathbb{S}^1_r$ in $\mathbb R^2$ with radius $r\in(0,1)$ such that $C_x=C\mathbb{S}^1_r\times\mathbb R^{n-1}$. \section{Volume estimates for area-minimizing hypersurfaces} From the Laplacian comparison for distance functions from minimal submanifolds by Heintze-Karcher \cite{HK}, Itokawa-Kobayashi obtained a uniform lower bound for the volume of minimal hypersurfaces in terms of the volume of their tubular neighborhoods in manifolds of nonnegative Ricci curvature (see Proposition 3.2 in \cite{IK}). Let $N$ be an $(n+1)$-dimensional smooth complete Riemannian manifold. Let $\nabla$ denote the Levi-Civita connection of $N$, and $\Delta_N$ the Laplacian of $N$. Using Lemma \ref{DerMge*} in appendix I, we can obtain a lower bound for the volume of the support of $n$-rectifiable stationary varifolds in $N$ with Ricci curvature bounded below, as follows. \begin{lemma}\label{LOWERM000} Suppose that $N$ has Ricci curvature $\ge-n\kappa^2$ on a geodesic ball $B_R(p)\subset N$ for some constant $\kappa\ge0$. Let $V$ be an $n$-rectifiable stationary varifold in $B_{R}(p)$. Denote $M=\mathrm{spt} V\cap B_R(p)$.
Then \begin{equation}\aligned\label{tArea} \mathcal{H}^{n+1}\left(B_t(M)\cap B_{s}(p)\right)\le\f{2t}{1-n\kappa t}\mathcal{H}^n\left(M\cap B_{t+s}(p)\right) \endaligned \end{equation} for each $0<t\le\min\{\f1{n\kappa},s\}$ and $s+t<R$. \end{lemma} \begin{proof} Let $\mathcal{R}_{M}$ denote the regular part of $M$, i.e., $\mathcal{R}_{M}=M\setminus \mathcal{S}_M$ with the singular part $\mathcal{S}_M$ of $M$. Let $\rho_M$ denote the distance function from $M$ in $N$. Let $M_i$ be the smooth embedded hypersurface constructed from $M$ as in Lemma \ref{DerMge*} of appendix I. Let $\mathcal{C}_{M_i}$ denote the cut locus of $\rho_{M_i}$ for each $i$. For any $s<R$, $\overline{\mathcal{C}_{M_i}}\cap B_{R-s}(M_i)\cap B_{s}(p)$ has Hausdorff dimension $\le n$ (see Corollary 4.12 in \cite{MM}). Hence, by the co-area formula, there is a set $\Lambda\subset(0,\infty)$ of zero $\mathcal{H}^1$-measure such that for any $t\in(0,R-s)\setminus\Lambda$ and $i\in \mathbb{N}^+$, $\{\rho_{M_i}=t\}\cap B_{R-s}(M_i)\cap B_{s}(p)$ is smooth away from a closed set of Hausdorff dimension $\le n-1$. Let $\epsilon>0$ be small. From \eqref{equivMiM} there is an integer $i_0>0$ (depending on $s,t$) such that $$\{\rho_{M_i}=t\}\cap \overline{B_{s}(p)}=\{\rho_{M}=t\}\cap \overline{B_{s}(p)}$$ for all $i\ge i_0$. For \textbf{fixed} $s,t$ with $t\in(0,\infty)\setminus\Lambda$ and $0<t\le s<R-t$, there exists a finite collection of balls $\{B_{s_j}(z_j)\}_{j=1}^{N_\epsilon}$ with $\overline{\mathcal{C}_M}\subset\left(\cup_{j=1}^{N_\epsilon} B_{s_j}(z_j)\right)$ such that $\{\rho_{M}=t\}\cap \overline{B_s(p)}\cap\left(\cup_{j=1}^{N_\epsilon} B_{s_j}(z_j)\right)$ has $n$-dimensional Hausdorff measure $\le\epsilon$.
Hence, there is a closed set $\Omega_{t,s}$ in $\{\rho_{M}=t\}\cap B_s(p)\setminus \left(\cup_{j=1}^{N_\epsilon} B_{s_j}(z_j)\right)$ with $\p\Omega_{t,s}\in C^\infty$ so that \begin{equation}\aligned\label{HnOmitsrM} \mathcal{H}^n\left(\{\rho_{M}=t\}\cap B_s(p)\right)\le\mathcal{H}^n\left(\Omega_{t,s}\right)+2\epsilon. \endaligned \end{equation} For any $z\in\Omega_{t,s}$ there is a unique $x_z\in \mathcal{R}_{M}\cap B_{t+s}(p)$ with $d(x_z,z)=t$. Let $W$ be a small neighborhood of $z$ in $\Omega_{t,s}$ with $\p W\in C^1$; then there is a neighborhood $K_W$ of $x_z$ in $\mathcal{R}_{M}$ such that $$W=\left\{\mathrm{exp}_{y}(t\mathbf{n}_M(y))\in N|\ y\in K_W \right\},$$ where $\mathbf{n}_M$ denotes the unit normal vector field to $\mathcal{R}_{M}$. Since the exponential map $\mathrm{exp}_{\cdot}(t\mathbf{n}_M(\cdot)):\, K_W\rightarrow W$ is one-to-one and differentiable, it follows from the inverse mapping theorem and $\p W\in C^1$ that $\p K_W\in C^1$. Let $V_{t}$ denote the set defined by \begin{equation}\aligned\nonumber V_t=\left\{\mathrm{exp}_{y}(r\mathbf{n}_M(y))\in N|\ 0\le r\le t,\ y\in K_W \right\} \endaligned \end{equation} with the outward unit normal $\nu_{t}$ to $\p V_{t}$. For any $y\in\p V_{t}\setminus(W\cup K_W)$, if there is a point $(x_y,\tau)\in\p K_W\times\mathbb R$ with $y=\mathrm{exp}_{x_y}(\tau\mathbf{n}_M(x_y))$, then $\nabla\rho_M(y)=\f{\p}{\p \tau}\mathrm{exp}_{x_y}(\tau\mathbf{n}_M(x_y))$, which implies \begin{equation}\aligned\label{narMnuts} \left\langle\nabla\rho_M,\nu_{t}\right\rangle=0\qquad \mathrm{on}\ \ \p V_{t}\setminus(W\cup K_W). \endaligned \end{equation} Under the assumption $0<t\le s<R-t$ and $t\in(0,\infty)\setminus\Lambda$, we further assume $t\le\f1{n\kappa+\epsilon}$.
From Lemma \ref{DerMge*}, we have \begin{equation}\aligned &\Delta_N\left(\rho_M-\f1{n\kappa+\epsilon}\right)^2=2\left(\rho_M-\f1{n\kappa+\epsilon}\right)\Delta_N \rho_M+2\\ \ge&-2\left(\f1{n\kappa+\epsilon}-\rho_M\right)n\kappa\tanh(\kappa\rho_M)+2\ge0 \endaligned \end{equation} on $V_{t}$ in the distribution sense. With \eqref{narMnuts}, integrating the above inequality by parts gives \begin{equation}\aligned 0\le&\int_{V_{t}}\Delta_N\left(\rho_M-\f1{n\kappa+\epsilon}\right)^2=\int_{\p V_{t}}\left\langle\nabla\left(\rho_M-\f1{n\kappa+\epsilon}\right)^2,\nu_{t}\right\rangle\\ =&2\left(t-\f1{n\kappa+\epsilon}\right)\mathcal{H}^n(W) +\f2{n\kappa+\epsilon}\mathcal{H}^n(K_W), \endaligned \end{equation} which implies \begin{equation}\aligned\label{WKcont} \mathcal{H}^n(W)\le\f1{1-(n\kappa+\epsilon)t}\mathcal{H}^n(K_W). \endaligned \end{equation} Note that for any point $x\in M\cap B_{t+s}(p)$ with $d(x,\Omega_{t,s})=t$, there are at most two points $z_x,z_x'\in\Omega_{t,s}$ such that $d(x,z_x)=d(x,z_x')=t$. Hence, by choosing a suitable covering of $\Omega_{t,s}$, with \eqref{WKcont} we get \begin{equation}\aligned\label{rMBRBtR} \mathcal{H}^n\left(\Omega_{t,s}\right)\le\f2{1-(n\kappa+\epsilon)t}\mathcal{H}^n\left(M\cap B_{t+s}(p)\right) \endaligned \end{equation} for each $0<t\le s<R-t$ with $t\in(0,\infty)\setminus\Lambda$ and $t\le\f1{n\kappa+\epsilon}$. With the co-area formula and \eqref{HnOmitsrM}, for each $0<t\le\min\{\f1{n\kappa+\epsilon},s\}$ and $s+t<R$ we have \begin{equation}\aligned \mathcal{H}^{n+1}\left(B_t(M)\cap B_{s}(p)\right)=&\int_0^t\mathcal{H}^n\left(\{\rho_M=\tau\}\cap B_{s}(p)\right)d\tau \le\int_0^t\left(\mathcal{H}^n\left(\Omega_{\tau,s}\right)+2\epsilon\right)d\tau\\ \le&\int_0^t\left(\f2{1-(n\kappa+\epsilon)\tau}\mathcal{H}^n\left(M\cap B_{\tau+s}(p)\right)+2\epsilon\right)d\tau\\ \le&\f{2t}{1-(n\kappa+\epsilon)t}\mathcal{H}^n\left(M\cap B_{t+s}(p)\right)+2\epsilon t. \endaligned \end{equation} Letting $\epsilon\rightarrow0$ completes the proof.
\end{proof} If $\kappa>0$, the manifold $N$ in Lemma \ref{LOWERM000} has Ric$\,\ge-n\kappa^2$ in $B_{1}(p)$, and $p\in M$, then we choose $t=s=\min\{\f12,\f1{2n\kappa}\}$ in \eqref{tArea}, and obtain \begin{equation}\aligned\label{nnnArea} \mathcal{H}^{n+1}(B_{t}(p))\le2\min\left\{1,\f1{n\kappa}\right\}\mathcal{H}^n(M\cap B_1(p)). \endaligned \end{equation} Clearly, the constant $2\min\left\{1,\f1{n\kappa}\right\}$ in the above estimate is not optimal. If $N$ has nonnegative Ricci curvature in $B_1(p)$ and $p\in M$, then we choose $t=s=\f12$ in \eqref{tArea}, and obtain \begin{equation}\aligned \mathcal{H}^{n+1}\left(B_{\f1{2}}(p)\right)\le\mathcal{H}^n\left(M\cap B_1(p)\right). \endaligned \end{equation} Let $\Sigma$ be an $(n+1)$-dimensional complete simply connected non-compact manifold with nonnegative sectional curvature. Anderson \cite{An} proved that there is a constant $\delta_n>0$ depending only on $n$ such that if $\limsup_{r\rightarrow\infty}r^{-1}\mathrm{diam}(\p B_r(p))<\delta_n$, then $\Sigma$ admits no complete area-minimizing hypersurfaces. Here, $\mathrm{diam}(\p B_r(p))=\sup_{x,y\in\p B_r(p)}d(x,y)$ denotes the (extrinsic) diameter of the sphere $\p B_r(p)$ in $\Sigma$. With Lemma \ref{LOWERM000}, we have the following non-existence result. \begin{theorem}\label{Nonexistminimizinghypersurface} Let $N$ be an $(n+1)$-dimensional complete simply connected non-compact manifold with nonnegative Ricci curvature. If $$\limsup_{r\rightarrow\infty}r^{-1}\mathrm{diam}(\p B_r(p))<\lambda_n$$ with $\lambda_n(1+\lambda_n)^n=1/(n+1)$, then $N$ admits no complete area-minimizing hypersurfaces. \end{theorem} \begin{proof} Suppose that there is a complete connected area-minimizing hypersurface $M$ in $N$. Let $\mathcal{R}_M$ denote the regular part of $M$. Note that $\mathcal{R}_M$ is one-sided and connected. Let $\mathbf{n}_M$ be the unit normal vector field to $\mathcal{R}_M$.
Then for any $s>0$ $$\p B_s(M)\subset\{\mathrm{exp}_{x}(s\mathbf{n}_M(x))|\ x\in \mathcal{R}_{M}\}\cup\{\mathrm{exp}_{x}(-s\mathbf{n}_M(x))|\ x\in \mathcal{R}_{M}\}.$$ For each $\tau>0$, we define two open sets $B^+_{\tau}(M)$ and $B^-_{\tau}(M)$ by \begin{equation}\aligned\nonumber B^\pm_\tau(M)=B_\tau(M)\cap\{\mathrm{exp}_{x}(\pm s\mathbf{n}_M(x))|\ x\in \mathcal{R}_{M}, 0<s<\tau\}. \endaligned \end{equation} Fix a point $p\in M$ and $r>0$. We claim \begin{equation}\aligned\label{N+-tauM} B^+_{\tau}(M)\cap B^-_{\tau}(M)\cap B_r(p)=\emptyset\qquad \mathrm{for\ each}\ \tau>0. \endaligned \end{equation} If the claim \eqref{N+-tauM} fails, then there are a constant $\tau>0$ and a point $q\in\overline{B^+_{\tau}(M)}\cap \overline{B^-_{\tau}(M)}\cap B_r(p)$ while $B^+_{\tau}(M)\cap B^-_{\tau}(M)\cap B_r(p)=\emptyset$. By definition, there are two points $x^+_q,x^-_q\in M$ such that $$q=\mathrm{exp}_{x^+_q}\left(\tau\mathbf{n}_M(x^+_q)\right)=\mathrm{exp}_{x^-_q}\left(-\tau\mathbf{n}_M(x^-_q)\right).$$ Hence, there are smooth embedded curves $\gamma_\pm\subset B^\pm_{\tau}(M)$ connecting $p,q$. Clearly, $\overline{\gamma_+\cup\gamma_-}$ is a closed piecewise smooth curve in $N$ with $M\cap\overline{\gamma_+\cup\gamma_-}=\{p\}$. Namely, the intersection number (mod 2) of $\overline{\gamma_+\cup\gamma_-}$ with $M$ is 1. Therefore, $\overline{\gamma_+\cup\gamma_-}$ is homotopically nontrivial, which contradicts the simple connectedness of $N$. This proves the claim \eqref{N+-tauM}. Denote $E_r=B^+_{2r}(M)\cap B_r(p)$. The claim \eqref{N+-tauM} implies $$\p E_r=(M\cap B_r(p))\cup (B^+_{2r}(M)\cap\p B_r(p)).$$ Then $\p (M\cap B_r(p))=\p(B^+_{2r}(M)\cap\p B_r(p))$, and since $M$ is area-minimizing in $N$, we get \begin{equation}\aligned \mathcal{H}^{n}\left(M\cap B_r(p)\right)\le\mathcal{H}^n\left(B^+_{2r}(M)\cap\p B_r(p)\right).
\endaligned \end{equation} Analogously, we have $\mathcal{H}^{n}\left(M\cap B_r(p)\right)\le\mathcal{H}^n\left(B^-_{2r}(M)\cap\p B_r(p)\right)$. With \eqref{N+-tauM}, we get \begin{equation}\aligned\label{McapBrpupbound} \mathcal{H}^{n}\left(M\cap B_r(p)\right)\le&\min\left\{\mathcal{H}^n\left(B^+_{2r}(M)\cap\p B_r(p)\right),\mathcal{H}^n\left(B^-_{2r}(M)\cap\p B_r(p)\right)\right\}\\ \le&\f12\mathcal{H}^n\left(\p B_r(p)\right). \endaligned \end{equation} Let $\lambda=\limsup_{r\rightarrow\infty}r^{-1}\mathrm{diam}(\p B_r(p))$. Then for each $\epsilon>0$ there is a constant $R_0>0$ such that $$ B_R(p)\subset B_{(\lambda+\epsilon)R}(M)$$ for each $R\ge R_0$. From \eqref{tArea} we have \begin{equation}\aligned\label{BRpcontr} \mathcal{H}^{n+1}\left(B_R(p)\right)=&\mathcal{H}^{n+1}\left(B_{(\lambda+\epsilon)R}(M)\cap B_{R}(p)\right) \le2(\lambda+\epsilon)R\mathcal{H}^n\left(M\cap B_{(1+\lambda+\epsilon)R}(p)\right). \endaligned \end{equation} From \eqref{HnpBr} and \eqref{McapBrpupbound}, we have \begin{equation}\aligned\label{MBRpcontr} \mathcal{H}^n\left(M\cap B_{(1+\lambda+\epsilon)R}(p)\right)\le&\f12 \mathcal{H}^n(\p B_{(1+\lambda+\epsilon)R}(p))\\ \le&\f{n+1}{2(1+\lambda+\epsilon)R}\mathcal{H}^{n+1}\left(B_{(1+\lambda+\epsilon)R}(p)\right). \endaligned\end{equation} Combining \eqref{BiVolN}\eqref{BRpcontr}\eqref{MBRpcontr}, we get \begin{equation}\aligned \mathcal{H}^{n+1}\left(B_R(p)\right)\le&\f{(n+1)(\lambda+\epsilon)}{1+\lambda+\epsilon}\mathcal{H}^{n+1}\left(B_{(1+\lambda+\epsilon)R}(p)\right)\\ \le&(n+1)(\lambda+\epsilon)(1+\lambda+\epsilon)^n\mathcal{H}^{n+1}\left(B_{R}(p)\right). \endaligned \end{equation} Since $\epsilon$ is arbitrary, the above inequality implies \begin{equation}\aligned \lambda(1+\lambda)^n\ge\f1{n+1}. \endaligned \end{equation} This completes the proof. \end{proof} Using the universal covers of the ambient manifolds, the volume of area-minimizing hypersurfaces in geodesic balls has an upper bound given by an explicit constant, as follows.
\begin{lemma}\label{upbdareaM} Let $N$ be an $(n+1)$-dimensional smooth complete Riemannian manifold with Ricci curvature $\ge-n\kappa^2$ in a geodesic ball $B_1(p)$ for some constant $\kappa\ge0$. If $M$ is an area-minimizing hypersurface in $B_1(p)$ with $\p M\subset\p B_1(p)$, then there is a constant $c_{n,\kappa}=\f12(n+1)\omega_{n+1}\kappa^{-n}\sinh^n\kappa$ so that \begin{equation}\aligned\label{UNBOUNDn} \mathcal{H}^n(M\cap B_t(p))\le c_{n,\kappa}t^n\qquad \mathrm{for\ any}\ t\in(0,1]. \endaligned \end{equation} \end{lemma} \begin{proof} By scaling, we only need to prove \eqref{UNBOUNDn} for $t=1$. Let $\widetilde{N}$ be a universal cover of $N$ with the metric induced from $N$. In other words, there is a Riemannian covering map $\pi$ from the simply connected manifold $\widetilde{N}$ to $N$. Let $p'$ be a point in $\widetilde{N}$ with $\pi(p')=p$. Clearly, $\pi(B_1(p'))\subset B_1(p)$. For any geodesic segment $\gamma:\ [0,1]\rightarrow \overline{B_1(p)}$ with $\gamma(0)=p$ and $\gamma(1)\in\p B_1(p)$, let $\gamma'$ be the component of $\pi^{-1}(\gamma)$ containing $p'$. Obviously, $\gamma'\subset \overline{B_1(p')}$ and $\gamma'(1)\in\p B_1(p')$. Hence we get $\pi(B_1(p'))=B_1(p)$ and $\p B_1(p)\subset \pi(\p B_1(p'))$. In particular, $\widetilde{N}$ is an $(n+1)$-dimensional smooth complete Riemannian manifold with Ricci curvature $\ge-n\kappa^2$ in $B_1(p')$. Let $M$ be an area-minimizing hypersurface in $B_1(p)$ with $\p M\subset\p B_1(p)$. From $M\subset \pi(B_1(p'))$ and $\p M\subset \pi(\p B_1(p'))$, we can choose a subset $M'$ of $\pi^{-1}(M)\subset\widetilde{N}$ such that $M'\subset B_1(p')$ and $\pi:\ M'\rightarrow M$ is 1-1. Then $M'$ is countably $n$-rectifiable, $\p M'\cap B_1(p')=\emptyset$, and $\mathcal{H}^n(M)=\mathcal{H}^n(M').$ We claim that \begin{center} $M'$ is area-minimizing in $B_1(p')$. \end{center} Clearly, $M'$ is one-sided outside its singular set.
Let $T$ be the multiplicity one current in $B_1(p')$ with $M'=\mathrm{spt}T\cap B_1(p')$. Let $S$ be a current in $\widetilde{N}$ with $\p S=\p T$ and spt$(S-T)\subset B_1(p')$. Let $\pi^\#S$ be the push-forward of $S$ defined by $\pi^\#S(\omega)=S(\zeta\pi^\#\omega)$ for any smooth $n$-form $\omega\in \mathcal{D}^n(B_1(p))$, where $\zeta$ is any function in $C_c^\infty(\widetilde{N})$ with $\zeta=1$ in a neighborhood of spt$S\cap\mathrm{spt}(\pi^\#\omega)$ (see \cite{S} for further details). Let $\pi^\#T$ denote the push-forward of $T$. Since $\pi:\ M'\rightarrow M$ is 1-1, $\pi^\#T$ is the multiplicity one minimizing current in $B_1(p)$ with $M=\mathrm{spt}(\pi^\#T)\cap B_1(p)$. From $\pi(B_1(p'))=B_1(p)$ and spt$(S-T)\subset B_1(p')$, we get spt$(\pi^\#S-\pi^\#T)=\mathrm{spt}(\pi^\#(S-T))=\pi(\mathrm{spt}(S-T))\subset\pi(B_1(p'))=B_1(p)$. It is clear that $\p(\pi^\#S)=\pi^\#(\p S)=\pi^\#(\p T)=\p(\pi^\#T)$. Note that $\pi$ is a map with Lipschitz constant $\mathbf{Lip}\pi=1$. Hence $$\mathbb{M}(T)=\mathcal{H}^n(M)=\mathbb{M}(\pi^\#T)\le \mathbb{M}(\pi^\#S)\le\mathbf{Lip}\pi\,\mathbb{M}(S)=\mathbb{M}(S),$$ which means that $M'$ is area-minimizing in $B_1(p')$. From the argument in the proof of Theorem \ref{Nonexistminimizinghypersurface}, there is a set $E$ in $B_1(p')$ such that $\p E\cap B_1(p')=M'$ and $\mathcal{H}^n\left(\p E\cap B_1(p')\right)\le\f12\mathcal{H}^n\left(\p B_1(p')\right)$. Then with \eqref{HnpBr} we have \begin{equation}\aligned \mathcal{H}^n(M)=\mathcal{H}^n(M')\le\f12\mathcal{H}^n(\p B_1(p'))\le\f12(n+1)\omega_{n+1}\f{\sinh^n\kappa}{\kappa^n}, \endaligned \end{equation} which completes the proof. \end{proof} \begin{remark} In general, $\mathcal{H}^n(M)$ in Lemma \ref{upbdareaM} may not be small even when $\mathcal{H}^{n+1}(B_1(p))$ is arbitrarily small.
For any $\epsilon>0$, let $N_\epsilon$ be a cylinder $\mathbb{S}^1(\epsilon)\times\mathbb R^n\subset\mathbb R^{n+2}$ with the metric induced from $\mathbb R^{n+2}$, and $M_\epsilon=\{\theta_\epsilon\}\times\mathbb R^n$ for $\theta_\epsilon\in\mathbb{S}^1(\epsilon)$. Denote $p_\epsilon=(\theta_\epsilon,0^n)\in N_\epsilon$, and let $B_r(p_\epsilon)$ be the geodesic ball in $N_\epsilon$ with radius $r$ centered at $p_\epsilon$. Then $\mathcal{H}^n(M_\epsilon\cap B_1(p_\epsilon))=\omega_n$ and $2\pi\epsilon\omega_n(1-\pi\epsilon)^n<\mathcal{H}^{n+1}(B_1(p_\epsilon))<2\pi\epsilon\omega_n$. Since $N_\epsilon$ is flat everywhere, for any minimal hypersurface $S_\epsilon\subset N_\epsilon$ with $p_\epsilon\in S_\epsilon$ we have $\mathcal{H}^n(S_\epsilon\cap B_r(p_\epsilon))\ge\omega_nr^n$ from the monotonicity of $r^{-n}\mathcal{H}^n(S_\epsilon\cap B_r(p_\epsilon))$. Hence, $M_\epsilon$ is area-minimizing in $N_\epsilon$ for each $\epsilon>0$. \end{remark} Let us estimate the lower bound of the volume for a class of sets associated with area-minimizing hypersurfaces, which will be needed in Theorem \ref{Conv0}. \begin{lemma}\label{lbd} Let $N$ be an $(n+1)$-dimensional smooth non-compact Riemannian manifold with $\mathrm{Ric}\ge-n\kappa^2$ on $B_1(p)$. There is a constant $\alpha^*_{n,\kappa}>0$ depending only on $n,\kappa$ such that if $E$ is an open set in $B_1(p)\subset N$ with $p\in \p E$, and $\p E\cap B_1(p)$ is an area-minimizing hypersurface in $B_1(p)$, then $$\mathcal{H}^{n+1}(E)\ge \alpha^*_{n,\kappa}\mathcal{H}^{n+1}(B_1(p)).$$ \end{lemma} \begin{proof} Let $E_{s}=E\cap\p B_s(p)$ for any $s\in(0,1]$; then $$\p E_{s}=\p E\cap\p B_s(p)=\p(B_s(p)\cap\p E).$$ Since $\p E\cap B_1(p)$ is area-minimizing in $B_1(p)$, we have \begin{equation}\aligned \mathcal{H}^{n}(E_{s})\ge\mathcal{H}^{n}(B_s(p)\cap\p E).
\endaligned \end{equation} From the co-area formula, $$\mathcal{H}^{n+1}(B_t(p)\cap E)=\int_0^t\mathcal{H}^{n}(E_{s})ds\qquad \mathrm{for\ any}\ t\in(0,1].$$ By the Bishop-Gromov volume comparison \eqref{BiVolN} and the isoperimetric inequality \eqref{isoperi}, we have \begin{equation}\aligned\label{ODEEsx} \widetilde{\alpha}_{n,\kappa}\left(\mathcal{H}^{n+1}(B_1(p))\right)^{\f1{n+1}}&\left(\int_0^t\mathcal{H}^{n}(E_{s})ds\right)^{\f{n}{n+1}}\le\mathcal{H}^{n}(\p(B_t(p)\cap E))\\ \le&\mathcal{H}^{n}(E_{t})+\mathcal{H}^{n}(B_t(p)\cap \p E)\le2\mathcal{H}^{n}(E_{t}) \endaligned\end{equation} for any $t\in(0,\f12]$. Here, $\widetilde{\alpha}_{n,\kappa}=\alpha_{n,\kappa}\left(V_{\kappa}^{n+1}(1)\right)^{-1/(n+1)}$, and $V^{n+1}_\kappa(1)$ denotes the volume of a unit geodesic ball in an $(n+1)$-dimensional space form with constant sectional curvature $-\kappa^2$. From $p\in \p E$, one has $\int_0^t\mathcal{H}^{n}(E_{s})ds>0$ for each $t\in(0,1]$. From the differential inequality \eqref{ODEEsx}, we get \begin{equation} \f{\p}{\p t}\left(\int_0^t\mathcal{H}^{n}(E_{s})ds\right)^{\f{1}{n+1}}\ge\f{\widetilde{\alpha}_{n,\kappa}}{2(n+1)}\left(\mathcal{H}^{n+1}(B_1(p))\right)^{\f1{n+1}}, \end{equation} and integrating from $0$ to $t$ gives \begin{equation}\label{EsxB1p} \left(\int_0^t\mathcal{H}^{n}(E_{s})ds\right)^{\f{1}{n+1}}\ge\f{\widetilde{\alpha}_{n,\kappa}t}{2(n+1)}\left(\mathcal{H}^{n+1}(B_1(p))\right)^{\f1{n+1}} \end{equation} for any $t\in(0,\f12]$. In particular, \begin{equation} \mathcal{H}^{n+1}(E)\ge\int_0^{\f12}\mathcal{H}^{n}(E_{s})ds\ge\left(\f{\widetilde{\alpha}_{n,\kappa}}{4(n+1)}\right)^{n+1}\mathcal{H}^{n+1}(B_1(p)), \end{equation} which completes the proof. \end{proof} \textbf{Remark.} The set $E$ in the above lemma is called a \emph{minimal set} in some of the literature, e.g. \cite{Gi}.
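For instance (a consistency check in the Euclidean model), the half-ball shows that the constant $\alpha^*_{n,0}$ cannot exceed $\f12$:

```latex
% Euclidean model: N = \mathbb{R}^{n+1}, \kappa = 0, p = 0, E a half-ball.
Take $E=B_1(0)\cap\{x_{n+1}>0\}$, so that $\p E\cap B_1(0)$ is a flat disk through
$p=0\in\p E$ and is area-minimizing in $B_1(0)$. Then
\begin{equation*}
\mathcal{H}^{n+1}(E)=\f12\,\mathcal{H}^{n+1}(B_1(0)),
\end{equation*}
so any admissible constant satisfies $\alpha^*_{n,0}\le\f12$, while the proof above
yields the explicit (non-optimal) value
$\alpha^*_{n,\kappa}=\left(\f{\widetilde{\alpha}_{n,\kappa}}{4(n+1)}\right)^{n+1}$.
```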
\section{Convergence for area-minimizing hypersurfaces in geodesic balls} Let $N_i$ be a sequence of $(n+1)$-dimensional smooth Riemannian manifolds with $\mathrm{Ric}\ge-n\kappa^2$ on the metric ball $B_{1+\kappa'}(p_i)\subset N_i$ for constants $\kappa\ge0$, $\kappa'>0$. After passing to a subsequence, we may assume that $\overline{B_1(p_i)}$ converges to a metric ball $\overline{B_1(p_\infty)}$ in the Gromov-Hausdorff sense. Namely, there is a sequence of $\epsilon_i$-Hausdorff approximations $\Phi_i:\, B_1(p_i)\rightarrow B_1(p_\infty)$ for some sequence $\epsilon_i\rightarrow0$. Let $\nu_\infty$ denote the renormalized limit measure on $B_1(p_\infty)$ obtained from the renormalized measures as in \eqref{nuinfty}. For any set $K$ in $\overline{B_1(p_\infty)}$, let $B_\delta(K)$ be the $\delta$-neighborhood of $K$ in $\overline{B_1(p_\infty)}$ defined by $\{y\in \overline{B_1(p_\infty)}|\ d(y,K)<\delta\}$. Here, $d$ denotes the distance function on $\overline{B_1(p_\infty)}$. Let us state the continuity of the measures of $\delta$-neighborhoods of sets in the Gromov-Hausdorff topology, which will be needed in the sequel. \begin{lemma}\label{UpMinfty} Let $F_i$ be a sequence of sets in $B_1(p_i)$.
If $F_i$ converges to a closed set $F_\infty\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense, then for each $t\in(0,1)$ and $\delta\in(0,1)$ one has $$\nu_\infty\left(B_\delta(F_\infty)\cap B_t(p_\infty)\right)=\lim_{i\rightarrow\infty}\mathcal{H}^{n+1}(B_\delta(F_i)\cap B_t(p_i))/\mathcal{H}^{n+1}\left(B_{1}(p_i)\right).$$ \end{lemma} \begin{proof} For each fixed $\tau\in(0,\min\{t,\delta\})$, from Lemma \ref{nuinftyKthj} in appendix II, there is a sequence of mutually disjoint balls $\{B_{\theta_j}(x_j)\}_{j=1}^\infty$ with $x_j\in \overline{B_{\delta-\tau}(F_\infty)}\cap \overline{B_{t-\tau}(p_\infty)}$ and $\theta_j<\tau$ such that $\overline{B_{\delta-\tau}(F_\infty)}\cap \overline{B_{t-\tau}(p_\infty)}\subset \bigcup_{1\le j\le k}B_{\theta_j+2\theta_k}(x_j)$ for all sufficiently large $k$, and \begin{equation}\aligned\label{430} \nu_\infty\left(\overline{B_{\delta-\tau}(F_\infty)}\cap \overline{B_{t-\tau}(p_\infty)}\right)\le\sum_{j=1}^{\infty}\nu_\infty\left(B_{\theta_j}(x_j)\right). \endaligned \end{equation} Note that $\Phi_i(F_i)$ converges to $F_\infty$ in the Hausdorff sense. For each $j\ge1$, there is a sequence of points $x_{i,j}\in B_{\delta'}(F_i)$ with $\lim_{i\rightarrow\infty}x_{i,j}=x_j$. From \eqref{nuinfty}, \begin{equation}\aligned \lim_{i\rightarrow\infty}\mathcal{H}^{n+1}\left(B_{\theta_j}(x_{i,j})\right)/\mathcal{H}^{n+1}\left(B_{1}(p_i)\right)=\nu_\infty\left(B_{\theta_j}(x_{j})\right). \endaligned \end{equation} Now we fix an integer $m>0$. By the selection of $B_{\theta_j}(x_j)$, we can require $B_{\theta_j}(x_{i,j})\cap B_{\theta_k}(x_{i,k})=\emptyset$ for $j\neq k$ and $j,k\le m$, provided $i$ is sufficiently large.
Hence, combining $B_{\theta_j}(x_{i,j})\subset B_\delta(F_i)\cap B_t(p_i)$, \eqref{430} implies \begin{equation}\aligned\label{wnthjn} \sum_{j=1}^{m}\nu_\infty\left(B_{\theta_j}(x_{j})\right)\le\mathcal{H}^{n+1}(B_\delta(F_i)\cap B_t(p_i))/\mathcal{H}^{n+1}\left(B_{1}(p_i)\right) \endaligned \end{equation} for all sufficiently large $i$. Letting first $i\rightarrow\infty$ and then $m\rightarrow\infty$ yields \begin{equation}\aligned\label{432} \sum_{j=1}^{\infty}\nu_\infty\left(B_{\theta_j}(x_{j})\right)\le\liminf_{i\rightarrow\infty}\mathcal{H}^{n+1}(B_\delta(F_i)\cap B_t(p_i))/\mathcal{H}^{n+1}\left(B_{1}(p_i)\right). \endaligned \end{equation} Combining \eqref{430}\eqref{432} we have \begin{equation}\aligned\label{436} \nu_\infty\left(\overline{B_{\delta-\tau}(F_\infty)}\cap \overline{B_{t-\tau}(p_\infty)}\right)\le\liminf_{i\rightarrow\infty}\mathcal{H}^{n+1}(B_\delta(F_i)\cap B_t(p_i))/\mathcal{H}^{n+1}\left(B_{1}(p_i)\right). \endaligned \end{equation} Letting $\tau\rightarrow0$ implies $$\nu_\infty\left(B_\delta(F_\infty)\cap B_t(p_\infty)\right)\le\liminf_{i\rightarrow\infty}\mathcal{H}^{n+1}(B_\delta(F_i)\cap B_t(p_i))/\mathcal{H}^{n+1}\left(B_{1}(p_i)\right).$$ Combining this with Lemma \ref{Cont*} in appendix II, we complete the proof. \end{proof} From now on, we further assume \begin{equation}\aligned\label{VOL*} \mathcal{H}^{n+1}(B_1(p_i))\ge v\qquad \mathrm{for\ each}\ i\ge1\ \mathrm{and\ some\ constant}\ v>0. \endaligned \end{equation} For each $i$, let $M_i$ be an area-minimizing hypersurface in $B_1(p_i)$ with $\p M_i\subset\p B_1(p_i)$. Suppose that $M_i$ converges to a closed set $M_\infty\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense.
For any $0<t<1$ and $0<\delta\le\min\{\f1{n\kappa},\f t2\}$, from \eqref{VolCOV}, Lemma \ref{LOWERM000} and Lemma \ref{UpMinfty} we have \begin{equation}\aligned\label{Hn1Bdede*} \mathcal{H}^{n+1}\left(B_{\delta}(M_\infty)\cap B_{t-\delta}(p_\infty)\right)=&\liminf_{i\rightarrow\infty}\mathcal{H}^{n+1}\left(B_\delta(M_i)\cap B_{t-\delta}(p_i)\right)\\ \le&\f{2\delta}{1-n\kappa\delta}\liminf_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap B_{t}(p_i)\right). \endaligned \end{equation} From the definition of Minkowski contents and \eqref{Hn1Bdede*}, for any $0<t'<t<1$ we immediately have \begin{equation}\aligned\label{nu*MinftyBtp} \mathcal{M}^*\left(M_\infty,B_{t'}(p_\infty)\right)\le\limsup_{\delta\rightarrow0}\f{\mathcal{H}^{n+1}\left(B_{\delta}(M_\infty)\cap B_{t-\delta}(p_\infty)\right)}{2\delta} \le\liminf_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap B_{t}(p_i)\right). \endaligned \end{equation} Under suitable conditions, the Minkowski contents of the limit $M_\infty$ inherit the minimizing property in the following sense. Let $S_i$ be a subset in $B_1(p_i)$ such that $S_i\setminus M_i$ is smoothly embedded, $S_i=M_i$ outside $B_t(p_i)$ and $\p \llbracket S_i\rrbracket=\p\llbracket M_i\rrbracket$. For each $i$, let $\mathcal{R}_{M_i}$ denote the regular part of $M_i$, and $\mathcal{R}_{S_i}=(S_i\setminus M_i)\cup(S_i\cap \mathcal{R}_{M_i})$. Clearly, $\mathcal{R}_{S_i}$ is two-sided and connected. Let $\mathbf{n}_{S_i}$ be the unit normal vector field to $\mathcal{R}_{S_i}$. For each $r\in(0,1-t)$, we define two open sets $B^\pm_{r}(S_i)$ by \begin{equation}\aligned\nonumber B^\pm_{r}(S_i)=B_r(S_i)\cap\{\mathrm{exp}_{x}(\pm s\mathbf{n}_{S_i}(x))|\ x\in \mathcal{R}_{S_i},\, 0<s<r\}.
\endaligned \end{equation} Clearly, $B^+_{r}(S_i)\cap B^-_{r}(S_i)\cap S_i=\emptyset$, and $$B_{r}(S_i)\cap B_{1-r}(p_i)=(B^+_{r}(S_i)\cup B^-_{r}(S_i)\cup S_i)\cap B_{1-r}(p_i).$$ \begin{proposition}\label{MinMSinfty} Suppose that there is a constant $\theta\in(0,t)$ such that $S_i,B^+_{\theta}(S_i),B^-_{\theta}(S_i)$ converge to closed sets $S_\infty,\Gamma^+_{\theta,\infty},\Gamma^-_{\theta,\infty}\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense, respectively. If $\Gamma^+_{\theta,\infty}\cap\Gamma^-_{\theta,\infty}\subset S_\infty$, then \begin{equation}\aligned\label{MinftytSinfty} \mathcal{M}^*\left(M_\infty,B_t(p_\infty)\right)\le\mathcal{M}^*\left(S_\infty,B_t(p_\infty)\right),\quad \mathcal{M}_*\left(M_\infty,B_t(p_\infty)\right)\le\mathcal{M}_*\left(S_\infty,B_t(p_\infty)\right). \endaligned \end{equation} \end{proposition} \begin{proof} Fix a constant $t'\in(t,1)$. For any $\tau\in(t',1)$ and $\delta\in(0,\min\{\f1{2n\kappa},\tau,1-\tau\})$, from \eqref{tArea} and Lemma \ref{upbdareaM} we have \begin{equation}\aligned\label{Hn+1BdeMitaupi} \mathcal{H}^{n+1}\left(B_\delta(M_i)\cap B_{\tau}(p_i)\right)\le\f{2\delta}{1-n\kappa\delta}\mathcal{H}^n\left(M_i\cap B_{\tau+\delta}(p_i)\right)\le \f{2\delta c_{n,\kappa}}{1-n\kappa\delta}. \endaligned \end{equation} Combining \eqref{Hn+1BdeMitaupi} with a pigeonhole argument, there is a constant $t_*\in[\f{1+3t'}4,\f{3+t'}4]$ such that \begin{equation}\aligned\label{410*****} \mathcal{H}^{n+1}\left(B_\delta(M_i)\cap B_{t_*+\sqrt{\delta}}(p_i)\setminus B_{t_*}(p_i)\right)\le\f12\delta^{4/3} \endaligned \end{equation} for all sufficiently small $0<\delta<\f1{2n\kappa}$.
Combining \eqref{410*****} and the co-area formula, there is a constant $\tilde{\delta}\in[\delta,\sqrt{\delta}]$ such that \begin{equation}\aligned \mathcal{H}^{n}\left(B_\delta(M_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right)\le&\f1{\sqrt{\delta}-\delta}\mathcal{H}^{n+1}\left(B_\delta(M_i)\cap B_{t_*+\sqrt{\delta}}(p_i)\setminus B_{t_*+\delta}(p_i)\right)\\ \le&\f{\delta^{4/3}}{2(\sqrt{\delta}-\delta)}\le\delta^{5/6}. \endaligned \end{equation} Note that $B_{\delta}(S_i)=B_{\delta}(M_i)$ outside $B_{t'}(p_i)$ for $0<\delta<t'-t$. Hence \begin{equation}\aligned\label{HnBdeSit*pi} \mathcal{H}^{n}\left(B_\delta(S_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right)=\mathcal{H}^{n}\left(B_\delta(M_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right)\le\delta^{5/6}. \endaligned \end{equation} By the definition of $B^\pm_r(S_i)$, $S_i\cap B_{t_*+\tilde{\delta}}(p_i)\subset \p\left(B^\pm_r(S_i)\cap B_{t_*+\tilde{\delta}}(p_i)\right)$ and $$\p\left(B^\pm_r(S_i)\cap B_{t_*+\tilde{\delta}}(p_i)\right)=\left(\p(B^\pm_r(S_i))\cap B_{t_*+\tilde{\delta}}(p_i)\right)\cup\left(B^\pm_r(S_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right).$$ From the minimizing property of $M_i$ and $S_i=M_i$ outside $B_t(p_i)$, we get \begin{equation*}\aligned &\mathcal{H}^n\left(M_i\cap B_{t_*+\tilde{\delta}}(p_i)\right)\le\mathcal{H}^n\left(\p(B^+_r(S_i))\cap B_{t_*+\tilde{\delta}}(p_i)\setminus S_i\right) +\mathcal{H}^n\left(B^+_r(S_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right),\\ &\mathcal{H}^n\left(M_i\cap B_{t_*+\tilde{\delta}}(p_i)\right)\le\mathcal{H}^n\left(\p(B^-_r(S_i))\cap B_{t_*+\tilde{\delta}}(p_i)\setminus S_i\right) +\mathcal{H}^n\left(B^-_r(S_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right). \endaligned \end{equation*} From $\Gamma^+_{\theta,\infty}\cap\Gamma^-_{\theta,\infty}\subset S_\infty$, there is an integer $i_0>0$ so that $\p(B_r(S_i))\cap B_{t_*+\tilde{\delta}}(p_i)=\left(\p(B^+_r(S_i))\cup\p(B^-_r(S_i))\right)\cap B_{t_*+\tilde{\delta}}(p_i)\setminus S_i$ for all $r\in[\delta^2,\delta]$ and all $i\ge i_0$.
Hence, from the above two inequalities we get \begin{equation}\aligned\label{2HMiBt*de} 2\mathcal{H}^n\left(M_i\cap B_{t_*+\tilde{\delta}}(p_i)\right)\le\mathcal{H}^n\left(\p(B_r(S_i))\cap B_{t_*+\tilde{\delta}}(p_i)\right) +\mathcal{H}^n\left(B_r(S_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right) \endaligned \end{equation} for all $r\in[\delta^2,\delta]$ and all $i\ge i_0$. With the co-area formula and \eqref{Hn+1BdeMitaupi}, \eqref{HnBdeSit*pi}, \eqref{2HMiBt*de}, for each $i\ge i_0$ we have \begin{equation}\aligned &\mathcal{H}^{n+1}\left(B_{\delta-\delta^2}(S_i)\cap B_{t_*+\tilde{\delta}}(p_i)\right)=\int_0^{\delta-\delta^2}\mathcal{H}^n\left(\p(B_\tau(S_i))\cap B_{t_*+\tilde{\delta}}(p_i)\right)d\tau\\ \ge&\int_{\delta^2}^{\delta-\delta^2}\left(2\mathcal{H}^n\left(M_i\cap B_{t_*+\tilde{\delta}}(p_i)\right)-\mathcal{H}^n\left(B_\tau(S_i)\cap\p B_{t_*+\tilde{\delta}}(p_i)\right)\right)d\tau\\ \ge&2(\delta-2\delta^2)\mathcal{H}^n\left(M_i\cap B_{t_*+\tilde{\delta}}(p_i)\right)-\delta^{\f{11}6}\\ \ge&(1-2\delta)(1-n\kappa\delta)\mathcal{H}^{n+1}\left(B_\delta(M_i)\cap B_{t_*}(p_i)\right)-\delta^{\f{11}6}. \endaligned \end{equation} Hence, combining \eqref{410*****}, we obtain \begin{equation}\aligned\label{Bde-de2Side43116} &\mathcal{H}^{n+1}\left(B_{\delta-\delta^2}(S_i)\cap B_{t_*}(p_i)\right)+\f12\delta^{4/3}\\ \ge&\mathcal{H}^{n+1}\left(B_{\delta-\delta^2}(S_i)\cap B_{t_*}(p_i)\right)+\mathcal{H}^{n+1}\left(B_\delta(S_i)\cap B_{t_*+\sqrt{\delta}}(p_i)\setminus B_{t_*}(p_i)\right)\\ \ge&\mathcal{H}^{n+1}\left(B_{\delta-\delta^2}(S_i)\cap B_{t_*+\sqrt{\delta}}(p_i)\right)\\ \ge&(1-2\delta)(1-n\kappa\delta)\mathcal{H}^{n+1}\left(B_\delta(M_i)\cap B_{t_*}(p_i)\right)-\delta^{\f{11}6}.
\endaligned \end{equation} From \eqref{VolCOV} and Lemma \ref{UpMinfty}, letting $i\to\infty$ in \eqref{Bde-de2Side43116} implies \begin{equation}\aligned (1-2\delta)(1-n\kappa\delta)\mathcal{H}^{n+1}\left(B_\delta(M_\infty)\cap B_{t_*}(p_\infty)\right)\le\mathcal{H}^{n+1}\left(B_\delta(S_\infty)\cap B_{t_*}(p_\infty)\right)+2\delta^{4/3}. \endaligned \end{equation} As $S_\infty=M_\infty$ outside $\overline{B_t(p_\infty)}$, \begin{equation}\aligned\label{deHn+1BdeMt'*} (1-2\delta)(1-n\kappa\delta)\mathcal{H}^{n+1}\left(B_\delta(M_\infty)\cap B_{t'}(p_\infty)\right)\le\mathcal{H}^{n+1}\left(B_\delta(S_\infty)\cap B_{t'}(p_\infty)\right)+2\delta^{4/3}. \endaligned \end{equation} Letting $t'\rightarrow t$ in \eqref{deHn+1BdeMt'*} implies \begin{equation}\aligned\label{deHn+1BdeMt'**} (1-2\delta)(1-n\kappa\delta)\mathcal{H}^{n+1}\left(B_\delta(M_\infty)\cap B_{t}(p_\infty)\right)\le\mathcal{H}^{n+1}\left(B_\delta(S_\infty)\cap B_{t}(p_\infty)\right)+2\delta^{4/3}. \endaligned \end{equation} Dividing both sides of \eqref{deHn+1BdeMt'**} by $2\delta$ and letting $\delta\rightarrow0$, we obtain \eqref{MinftytSinfty}. \end{proof} From Lemma \ref{LOWERM000} and Lemma \ref{upbdareaM}, we can get upper and positive lower bounds for the Hausdorff measure of $M_\infty$. \begin{lemma}\label{UPPLOWMinfty} There is a constant $c_{n,\kappa}^*>0$ depending only on $n,\kappa$ such that \begin{equation}\aligned\label{HnBrpinfUP} \mathcal{H}^n\left(M_\infty\cap B_r(p_\infty)\right)\le c_{n,\kappa}^*r^n\qquad \mathrm{for\ any}\ r\in(0,1]. \endaligned \end{equation} Moreover, if we further suppose $p_\infty\in M_\infty$, then there is a constant $\delta_{n,\kappa,v}>0$ depending only on $n,\kappa,v$ such that \begin{equation}\aligned\label{lowerbdMinftyr} \mathcal{H}^n\left(M_\infty\cap B_r(p_\infty)\right)\ge \delta_{n,\kappa,v}r^n&\qquad \mathrm{for\ any}\ r\in(0,1].
\endaligned \end{equation} \end{lemma} \begin{proof} For any $\epsilon\in(0,\f12)$ and $r\in(0,1-2\epsilon)$, let $\{B_{s_j}(x_j)\}_{j=1}^{N_\epsilon}$ be a covering of $M_\infty\cap \overline{B_r(p_\infty)}$ with all $s_j<\min\{\epsilon,(\kappa+1)^{-1}\}$ such that \begin{equation}\aligned\label{HnMinftyCOVER} \mathcal{H}^n\left(M_\infty\cap \overline{B_r(p_\infty)}\right)\le \omega_n\sum_{j=1}^{N_\epsilon}s_j^n+\epsilon. \endaligned \end{equation} We may assume $M_\infty\cap B_{s_j}(x_j)\neq\emptyset$ and pick $w_j\in M_\infty\cap B_{s_j}(x_j)$. Then $\{B_{2s_j}(w_j)\}_{j=1}^{N_\epsilon}$ is a covering of $M_\infty\cap\overline{B_r(p_\infty)}$. By the Vitali covering lemma applied to $\{B_{2s_j}(w_j)\}_{j=1}^{N_\epsilon}$, after passing to a subcollection (at the cost of a dimensional constant factor in \eqref{HnMinftyCOVER}) we may assume that $B_{\f25s_j}(w_j)$ and $B_{\f25s_{j'}}(w_{j'})$ are disjoint for all distinct $j,j'$. Let $w_{i,j}\in M_i\cap B_r(p_i)$ be a sequence of points converging as $i\rightarrow\infty$ to $w_j$. Then we can assume that $B_{\f25s_j}(w_{i,j})$ and $B_{\f25s_{j'}}(w_{i,j'})$ are disjoint for all distinct $j,j'\le N_\epsilon$ and for all sufficiently large $i$. Hence, from Lemma \ref{LOWERM000} and $s_j\le(\kappa+1)^{-1}$ (up to a scaling of $N_i$) \begin{equation}\aligned \mathcal{H}^n\left(M_i\cap B_{\f{2s_j}5}(w_{i,j})\right)\ge \delta_{n}s_j^n \endaligned \end{equation} for some constant $\delta_n>0$ depending only on $n$. With \eqref{UNBOUNDn} we have \begin{equation}\aligned \delta_n\sum_{j=1}^{N_\epsilon}s_j^n\le\sum_{j=1}^{N_\epsilon}\mathcal{H}^n\left(M_i\cap B_{\f{2s_j}5}(w_{i,j})\right)\le\mathcal{H}^n(M_i\cap B_{r+2\epsilon}(p_i))\le c_{n,\kappa}(r+2\epsilon)^n. \endaligned \end{equation} With \eqref{HnMinftyCOVER}, there is a constant $c_{n,\kappa}^*>0$ depending only on $n,\kappa$ such that \begin{equation}\aligned\label{upbdMinftyr} \mathcal{H}^n\left(M_\infty\cap \overline{B_r(p_\infty)}\right)\le c_{n,\kappa}^* r^n\qquad \mathrm{for\ each}\ 0<r<1, \endaligned \end{equation} which implies \eqref{HnBrpinfUP}.
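The covering lemma invoked in the proof above is the classical Vitali ($5r$-)covering lemma, which holds in any metric space and which we recall for the reader's convenience: given a family of balls $\{B_{r_\alpha}(y_\alpha)\}_{\alpha\in A}$ with $\sup_{\alpha\in A}r_\alpha<\infty$, there is a subfamily of mutually disjoint balls $\{B_{r_{\alpha_j}}(y_{\alpha_j})\}_{j}$ such that
\begin{equation*}\aligned
\bigcup_{\alpha\in A}B_{r_\alpha}(y_\alpha)\subset\bigcup_{j}B_{5r_{\alpha_j}}(y_{\alpha_j}).
\endaligned\end{equation*}
The enlargement of the radii by the factor $5$ only affects the dimensional constants in the estimates above.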
We further suppose $p_\infty\in M_\infty$. For any $0<\epsilon<r<1$, using that the spherical measure is equivalent to the Hausdorff measure up to a constant, there is a finite covering $\{B_{r_j}(z_j)\}_{j=1}^{N'_\epsilon}$ of $M_\infty\cap \overline{B_r(p_\infty)}$ with $B_{r_j}(z_j)\subset B_1(p_\infty)$ such that \begin{equation}\aligned\label{ddddd} \mathcal{H}^n\left(M_\infty\cap \overline{B_r(p_\infty)}\right)\ge c_n\sum_{j=1}^{N'_\epsilon}r_j^n-\epsilon, \endaligned \end{equation} where $c_n$ is a constant depending only on $n$. For each $j$, let $z_{i,j}\in N_i$ be a sequence of points with $B_{r_j}(z_{i,j})\subset B_1(p_i)$ and $z_{i,j}\rightarrow z_j$ as $i\rightarrow\infty$. Then $\{B_{r_j}(z_{i,j})\}_{j=1}^{N'_\epsilon}$ is a covering of $M_i\cap B_{r}(p_i)$ for all sufficiently large $i$. With \eqref{UNBOUNDn} and \eqref{ddddd}, we have \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap \overline{B_r(p_\infty)}\right)\ge\f{c_n}{c_{n,\kappa}}\sum_{j=1}^{N'_\epsilon}\mathcal{H}^n\left(M_i\cap B_{r_j}(z_{i,j})\right)-\epsilon \ge\f{c_n}{c_{n,\kappa}}\mathcal{H}^n\left(M_i\cap B_{r}(p_i)\right)-\epsilon. \endaligned \end{equation} Combining \eqref{tArea} (or \eqref{nnnArea}), there is a constant $\delta_{n,\kappa,v}>0$ depending only on $n,\kappa,v$ such that \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap \overline{B_r(p_\infty)}\right)\ge\delta_{n,\kappa,v}r^n-\epsilon. \endaligned \end{equation} We get \eqref{lowerbdMinftyr} by letting $\epsilon\rightarrow0$ in the above inequality. This completes the proof. \end{proof} Now let us study the limits of a sequence of area-minimizing hypersurfaces in a sequence of manifolds converging to a smooth geodesic ball. \begin{theorem}\label{Conv0} Let $N_i$ be a sequence of $(n+1)$-dimensional smooth Riemannian manifolds with $\mathrm{Ric}\ge-n\kappa^2$ on the metric ball $B_{1+\kappa'}(p_i)\subset N_i$ with $\kappa\ge0$, $\kappa'>0$.
Suppose that $B_1(p_i)$ converges to an $(n+1)$-dimensional smooth manifold $\overline{B_1(p)}$ in the Gromov-Hausdorff sense. Let $M_i$ be an area-minimizing hypersurface in $B_1(p_i)$ with $p_i\in M_i$ and $\p M_i\subset\p B_1(p_i)$, and suppose that $M_i$ converges to a closed set $M_\infty$ in $\overline{B_1(p)}$ in the induced Hausdorff sense. Then $M_\infty$ is area-minimizing in $B_1(p)$ and for any $t\in(0,1)$ \begin{equation}\aligned\label{HnMinfequMitpi} &\mathcal{H}^n\left(M_\infty\cap \overline{B_t(p)}\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^{n}\left(M_i\cap \overline{B_t(p_i)}\right)\\ \ge&\liminf_{i\rightarrow\infty}\mathcal{H}^{n}(M_i\cap B_t(p_i))\ge\mathcal{H}^n(M_\infty\cap B_t(p)). \endaligned \end{equation} \end{theorem} \begin{proof} Let $x\in M_\infty\cap B_1(p)$. There is a suitably small $0<r<\f12(1-d(p,x))$ such that $B_{r}(x)$ is diffeomorphic to an $(n+1)$-dimensional Euclidean ball $B_r(0)\subset\mathbb R^{n+1}$. Let $x_i\in B_1(p_i)$ with $x_i\rightarrow x$. From Theorem A.1.8 in \cite{CCo1}, there are open sets $U_i\subset B_r(x_i)$ such that $U_i$ is homeomorphic to $B_r(0)$, and $U_i\supset B_{(1-\epsilon_i)r}(x_i)$ for some sequence $\epsilon_i\to0$ as $i\to\infty$. Then there is an open set $E_i$ in $U_i$ with $\p E_i\cap U_i=M_i\cap U_i$. Up to a choice of the subsequence, we assume that $E_i$ converges to a closed set $E_\infty$ in $\overline{B_r(x)}\subset B_1(p)$ in the induced Hausdorff sense. From Lemma \ref{subset} in appendix II, it follows that $\p E_\infty\cap B_r(x)\subset M_\infty$. We claim \begin{equation}\aligned\label{CLAy} M_\infty\cap B_r(x)=\p E_\infty\cap B_r(x). \endaligned \end{equation} It remains to prove $y\in \p E_\infty$ for any $y\in M_\infty\cap B_r(x)$. Suppose this fails, i.e., there is a point $y\in M_\infty\cap B_r(x)\setminus \p E_\infty$. Then, without loss of generality, there is a positive constant $\delta_0\in(0,r-d(x,y))$ such that $B_{\delta_0}(y)\subset E_\infty$.
Let $y_{i}\in \p E_{i}\cap B_1(p_i)$ with $y_i\rightarrow y$. Then for any $\delta\in(0,\delta_0]$, $E_i\cap B_\delta(y_i)$ converges to $E_\infty\cap B_\delta(y)=B_\delta(y)$ in the induced Hausdorff sense. From Lemma \ref{nuinftyKthj}, for any constant $\delta'>0$ (to be fixed later) there is a sequence of mutually disjoint balls $\{B_{\theta_j}(z_j)\}_{j=1}^\infty$ with $z_j\in \overline{B_{(1-\delta)\delta}(y)}\setminus B_{\delta'}(M_\infty)$ and $\theta_j<\min\{\delta'/6,\delta^2/6\}$ such that $\overline{B_{(1-\delta)\delta}(y)}\setminus B_{\delta'}(M_\infty)\subset \bigcup_{1\le j\le k}B_{\theta_j+2\theta_k}(z_j)$ for all sufficiently large $k$, and \begin{equation}\aligned\label{(1-de)deyMde'} \mathcal{H}^{n+1}\left(\overline{B_{(1-\delta)\delta}(y)}\setminus B_{\delta'}(M_\infty)\right)\le\sum_{j=1}^{\infty}\mathcal{H}^{n+1}\left(B_{\theta_j}(z_j)\right). \endaligned \end{equation} Denote $E_i^\delta=E_i\cap B_\delta(y_i)$. For each $j$, there is a sequence of points $z_{j,i}\in N_i$ with $z_{j,i}\to z_j$. Since $E_i^\delta\to B_\delta(y)$ and $\p E_i\cap B_\delta(y_i)\to M_\infty\cap B_\delta(y)$, for each integer $N_*>0$ there is an integer $i_0>0$ such that $B_{\theta_j}(z_{j,i})\subset E_i^\delta$, $B_{\theta_{j}}(z_{j,i})\cap B_{\theta_{j'}}(z_{j',i})=\emptyset$ for each $1\le j\neq j'\le N_*$ and $i\ge i_0$. Hence, from \eqref{VolCOV} it follows that \begin{equation}\aligned \sum_{j=1}^{N_*}\mathcal{H}^{n+1}\left(B_{\theta_j}(z_j)\right)=\lim_{i\to\infty}\sum_{j=1}^{N_*}\mathcal{H}^{n+1}\left(B_{\theta_j}(z_{j,i})\right) \le\liminf_{i\to\infty}\mathcal{H}^{n+1}\left(E_i^\delta\right). \endaligned \end{equation} Combining \eqref{(1-de)deyMde'} we have \begin{equation}\aligned\label{(1-de)deyMde''} \mathcal{H}^{n+1}\left(\overline{B_{(1-\delta)\delta}(y)}\setminus B_{\delta'}(M_\infty)\right)\le\liminf_{i\to\infty}\mathcal{H}^{n+1}\left(E_i^\delta\right).
\endaligned \end{equation} Combining \eqref{nu*MinftyBtp} and Lemma \ref{lbd}, there is a constant $\delta'>0$ such that \begin{equation}\aligned\label{qqqqq1} \mathcal{H}^{n+1}\left(B_{\delta'}(M_\infty)\cap B_{\delta}(y)\right)\le \delta\mathcal{H}^{n+1}\left(E_i^\delta\right)\qquad \mathrm{for\ each}\ i\ge 1. \endaligned \end{equation} With \eqref{(1-de)deyMde''}\eqref{qqqqq1}, we deduce \begin{equation}\aligned \mathcal{H}^{n+1}(B_{(1-\delta)\delta}(y))\le&\mathcal{H}^{n+1}\left(\overline{B_{(1-\delta)\delta}(y)}\setminus B_{\delta'}(M_\infty)\right)+\mathcal{H}^{n+1}\left(B_{\delta'}(M_\infty)\cap B_{\delta}(y)\right)\\ \le& (1+\delta)\liminf_{i\to\infty}\mathcal{H}^{n+1}\left(E_i^\delta\right). \endaligned \end{equation} From \eqref{VolCOV} and Lemma \ref{lbd}, the above inequality is impossible for all sufficiently small $\delta>0$. Hence $y\in \p E_\infty$ and we have shown the claim \eqref{CLAy}. From Theorem 4.4 in \cite{Gi} and \eqref{upbdMinftyr}\eqref{CLAy}, $M_\infty$ is countably $n$-rectifiable in $B_1(p)$. Combining \eqref{MSHS} and \eqref{lowerbdMinftyr}, we deduce \begin{equation}\aligned\label{HnMinftyBtpde} \mathcal{H}^n\left(M_\infty\cap \overline{B_t(p)}\right)=\lim_{\delta\rightarrow0}\f1{2\delta}\mathcal{H}^{n+1}\left(B_\delta\left(M_\infty\cap \overline{B_t(p)}\right)\right) \endaligned \end{equation} for any $t\in(0,1)$. For the fixed $t\in(0,1)$, let $S$ be a hypersurface in $B_1(p)$ such that $S\setminus M_\infty$ is smoothly embedded, $S=M_\infty$ outside $B_t(p)$ and $\p \llbracket S\rrbracket=\p\llbracket M_\infty\rrbracket$. Let $\mathbf{n}_{S}$ be the unit normal vector field to the regular part $\mathcal{R}_S$ of $S$. Clearly, there is a constant $\theta>0$ such that \begin{equation}\aligned\nonumber \{\mathrm{exp}_{x}(s\mathbf{n}_S(x))|\ x\in \mathcal{R}_S,\, 0<s<\theta\}\cap \{\mathrm{exp}_{x}(-s\mathbf{n}_S(x))|\ x\in \mathcal{R}_S,\, 0<s<\theta\}=\emptyset.
\endaligned \end{equation} Moreover, there is a sequence of hypersurfaces $S_{i}\subset B_1(p_i)$ such that $S_{i}\setminus M_i$ is smoothly embedded, $S_{i}=M_i$ outside $B_t(p_i)$, $\p \llbracket S_{i}\rrbracket=\p\llbracket M_i\rrbracket$, and $S_{i}$ converges to $S$ in the induced Hausdorff sense. From Proposition \ref{MinMSinfty} and \eqref{HnMinftyBtpde}, we have \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap \overline{B_t(p)}\right)\le\mathcal{H}^{n}\left(S\cap \overline{B_t(p)}\right). \endaligned \end{equation} Namely, $M_\infty$ is area-minimizing in $B_t(p)$. As $t$ is arbitrary in $(0,1)$, we conclude that $M_\infty$ is area-minimizing in $B_1(p)$. From \eqref{Bde-de2Side43116} with $S_i=M_i$, we have \begin{equation}\aligned\label{Hn12121} \mathcal{H}^{n+1}(B_\delta(M_i)\cap B_{t+\psi_\delta}(p_i))+\psi'_\delta\ge2\delta\mathcal{H}^{n}\left(M_i\cap \overline{B_t(p_i)}\right) \endaligned \end{equation} for any $t\in(0,1)$ and all small $\delta>0$, where $\psi_\delta,\psi'_\delta$ are general functions depending only on $n,\delta$ with $\lim_{\delta\rightarrow0}\psi_\delta=0$ and $\lim_{\delta\rightarrow0}\psi'_\delta/\delta=0$. From Lemma \ref{Cont*} in appendix II, we have \begin{equation}\aligned\label{Hn12122} \mathcal{H}^{n+1}\left(B_{\delta}(M_\infty)\cap B_{t+\psi_\delta}(p)\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^{n+1}(B_\delta(M_i)\cap B_{t+\psi_\delta}(p_i)). \endaligned \end{equation} Combining \eqref{Hn12121}\eqref{Hn12122}, we get \begin{equation}\aligned\label{Hn12123} \liminf_{\delta\rightarrow0}\f1{2\delta}\mathcal{H}^{n+1}\left(B_{\delta}(M_\infty)\cap B_{t+\psi_\delta}(p)\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^{n}\left(M_i\cap \overline{B_t(p_i)}\right).
\endaligned \end{equation} Combining \eqref{nu*MinftyBtp}\eqref{HnMinftyBtpde} and \eqref{Hn12123}, for any $t'<t$ we have \begin{equation}\aligned\label{HnMinftyBtpMipi} &\mathcal{H}^n\left(M_\infty\cap \overline{B_{2t-t'}(p)}\right)\ge\liminf_{\delta\rightarrow0}\f1{2\delta}\mathcal{H}^{n+1}\left(B_{\delta}(M_\infty)\cap B_{2t-t'}(p)\right)\\ \ge&\limsup_{i\rightarrow\infty}\mathcal{H}^{n}\left(M_i\cap \overline{B_t(p_i)}\right) \ge\liminf_{i\rightarrow\infty}\mathcal{H}^{n}(M_i\cap B_t(p_i))\\ \ge&\mathcal{M}^*\left(M_\infty,B_{t'}(p)\right)\ge\mathcal{H}^n(M_\infty\cap B_{2t'-t}(p)). \endaligned \end{equation} Letting $t'\to t$ gives \eqref{HnMinfequMitpi}. This completes the proof. \end{proof} \textbf{Remark.} If we further assume that $B_1(p)$ is a Euclidean ball $B_1(0)$, then $M_\infty$ is an area-minimizing hypersurface in $B_1(0)$. Clearly, $\mathcal{H}^n(M_\infty\cap\p B_t(0))=0$ for each $0<t<1$. Then \eqref{HnMinfequMitpi} implies \begin{equation}\aligned\label{VOLCov} \mathcal{H}^n(M_\infty\cap B_t(0))=\lim_{i\rightarrow\infty}\mathcal{H}^{n}(M_i\cap B_t(p_i))=\lim_{i\rightarrow\infty}\mathcal{H}^{n}\left(M_i\cap \overline{B_t(p_i)}\right). \endaligned \end{equation} \begin{proposition}\label{ContiOmega} Let $N_i,B_1(p_i),M_i,B_1(p_\infty),M_\infty$ be as in Theorem \ref{Conv0}. Suppose that $B_1(p_\infty)$ is an $(n+1)$-dimensional Euclidean ball $B_1(0)$. Then for any open set $\Omega\subset\subset B_1(0)$ and any open $\Omega_i\subset B_1(p_i)$ with $\Omega_i\rightarrow\Omega$ in the induced Hausdorff sense, \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap \overline{\Omega}\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^{n}\left(M_i\cap \overline{\Omega_i}\right).
\endaligned \end{equation} \end{proposition} \begin{proof} By a contradiction argument, using the compactness of area-minimizing hypersurfaces in Euclidean space, for any $\epsilon>0$ there is a constant $\delta_\epsilon$ (depending only on $n,\epsilon$) such that \begin{equation}\aligned\label{1-epdeept} \mathcal{H}^n\left(M_\infty\cap B_t(q)\right)\ge(1-\epsilon)\mathcal{H}^n(M_\infty\cap B_{(1+\delta_\epsilon)t}(q)) \endaligned \end{equation} for any $B_{(1+\delta_\epsilon)t}(q)\subset B_1(0)$. From Lemma \ref{nuinftyKthj} in appendix II, there is a sequence of mutually disjoint balls $\{B_{\theta_j}(x_j)\}_{j=1}^\infty$ with $x_j\in M_\infty\cap \overline{\Omega}$ and $\theta_j<\inf_{x\in\p\Omega}(1-|x|)$ such that $M_\infty\cap \overline{\Omega}\subset \bigcup_{1\le j\le k}B_{\theta_j+2\theta_k}(x_j)$ for all sufficiently large $k$, $\theta_j\ge \theta_{j+1}$ for each $j\ge1$ and \begin{equation}\aligned\label{thjn2k**} \sum_{j=1}^{\infty}\theta_j^n =\lim_{k\rightarrow\infty}\sum_{j=1}^{k}(\theta_{j}+2\theta_k)^n. \endaligned \end{equation} From Lemma \ref{UPPLOWMinfty}, \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\right)\ge\sum_{j=1}^{\infty}\mathcal{H}^n\left(M_\infty\cap B_{\theta_j}(x_j)\right)\ge\delta_{n,\kappa,v}\sum_{j=1}^{\infty}\theta_j^n. \endaligned \end{equation} Hence there is an integer $j_0>1$ such that $\sum_{j=j_0}^{\infty}\theta_j^n<\epsilon$. Then with \eqref{thjn2k**} \begin{equation}\aligned \limsup_{k\rightarrow\infty}\sum_{j=j_0}^{k}(\theta_{j}+2\theta_k)^n\le \lim_{k\rightarrow\infty}\sum_{j=1}^{k}(\theta_{j}+2\theta_k)^n-\sum_{j=1}^{j_0-1}\theta_j^n =\sum_{j=1}^{\infty}\theta_j^n-\sum_{j=1}^{j_0-1}\theta_j^n=\sum_{j=j_0}^{\infty}\theta_j^n<\epsilon.
\endaligned \end{equation} With Lemma \ref{UPPLOWMinfty}, for all sufficiently large $k$ we have \begin{equation}\aligned &\mathcal{H}^n\left(M_\infty\cap B_\epsilon(\Omega)\right)\ge\sum_{j=1}^{j_0-1}\mathcal{H}^n\left(M_\infty\cap B_{\theta_j}(x_j)\right) +c_{n,\kappa}^*\sum_{j=j_0}^k(\theta_j+2\theta_k)^n-c_{n,\kappa}^*\epsilon\\ &\ge\sum_{j=1}^{j_0-1}\mathcal{H}^n\left(M_\infty\cap B_{\theta_j}(x_j)\right) +\sum_{j=j_0}^k\mathcal{H}^n\left(M_\infty\cap B_{\theta_j+2\theta_k}(x_j)\right)-c_{n,\kappa}^*\epsilon. \endaligned \end{equation} For all sufficiently large $k$, $(1+\delta_\epsilon)\theta_j>\theta_j+2\theta_k$ for each $j=1,\cdots,j_0-1$. Let $x_{j,i}\in B_1(p_i)$ with $x_{j,i}\to x_j$ as $i\to\infty$. With \eqref{1-epdeept}, we have \begin{equation}\aligned\label{MinftyovOm} \mathcal{H}^n\left(M_\infty\cap B_\epsilon(\Omega)\right)\ge(1-\epsilon)\limsup_{i\rightarrow\infty}\sum_{j=1}^k\mathcal{H}^n\left(M_i\cap B_{\theta_j+2\theta_k}(x_{j,i})\right)-c_{n,\kappa}^*\epsilon. \endaligned \end{equation} Since $M_\infty\cap \overline{\Omega}\subset \bigcup_{1\le j\le k}B_{\theta_j+2\theta_k}(x_j)$, for any open set $\Omega_i\subset B_1(p_i)$ with $\Omega_i\rightarrow\Omega$ in the induced Hausdorff sense we have \begin{equation}\aligned M_i\cap \overline{\Omega_i}\subset\bigcup_{j=1}^{k}B_{\theta_j+2\theta_k}(x_{j,i}) \endaligned \end{equation} for all sufficiently large $i$. From \eqref{MinftyovOm}, we have \begin{equation}\aligned\nonumber \mathcal{H}^n\left(M_\infty\cap B_\epsilon(\Omega)\right)\ge(1-\epsilon)\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{\Omega_i}\right)-c_{n,\kappa}^*\epsilon. \endaligned \end{equation} Letting $\epsilon\to0$ completes the proof. \end{proof} In the $(n+1)$-dimensional Euclidean ball $B_1(0)$, any minimal hypersurface $S$ through the origin satisfies $\mathcal{H}^n(S)\ge\omega_n$ by the classical monotonicity of $r^{-n}\mathcal{H}^n(S\cap B_r(0))$.
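For the reader's convenience, this follows by combining the monotonicity formula with the density bound at the origin: writing $\Theta(r)=\f{\mathcal{H}^n(S\cap B_r(0))}{\omega_n r^n}$, the function $\Theta$ is nondecreasing in $r$ and $\lim_{r\rightarrow0}\Theta(r)\ge1$ since $0\in S$, whence
\begin{equation*}\aligned
\mathcal{H}^n(S)\ge\mathcal{H}^n(S\cap B_1(0))=\omega_n\Theta(1)\ge\omega_n\lim_{r\rightarrow0}\Theta(r)\ge\omega_n.
\endaligned\end{equation*}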
Combining \eqref{tArea} and Theorem \ref{Conv0}, we immediately have the following result. \begin{corollary}\label{epdeHM} For any $\epsilon>0$ there is a constant $\delta_\epsilon>0$ depending only on $n,\epsilon$ such that if $B_1(p)$ is an $(n+1)$-dimensional smooth geodesic ball with Ricci curvature $\ge-\delta_\epsilon$, $\mathcal{H}^{n+1}\left(B_1(p)\right)\ge(1-\delta_\epsilon)\omega_{n+1},$ and $M$ is an area-minimizing hypersurface in $B_1(p)$ with $p\in M$, $\p M\subset\p B_1(p)$, then \begin{equation}\aligned \mathcal{H}^n(M)\ge(1-\epsilon)\omega_n,\ \ \mathcal{H}^{n+1}(B_t(M)\cap B_1(p))\ge2(1-\epsilon)t\omega_n \ \mathrm{for\ any}\ t\in(0,\delta_\epsilon]. \endaligned \end{equation} \end{corollary} Moreover, we have the volume estimate for area-minimizing hypersurfaces in a class of smooth manifolds as follows. \begin{proposition}\label{EnlocAM} For any $\epsilon\in(0,\f12]$ there is a constant $\delta_\epsilon>0$ depending only on $n,\epsilon$ such that if $B_1(p)$ is an $(n+1)$-dimensional smooth geodesic ball with Ricci curvature $\ge-\delta_\epsilon$, $\mathcal{H}^{n+1}\left(B_1(p)\right)\ge(1-\delta_\epsilon)\omega_{n+1},$ and $M$ is an area-minimizing hypersurface in $B_1(p)$ with $p\in M$, $\p M\subset\p B_1(p)$ and $\mathcal{H}^n(M)\le(1+\delta_\epsilon)\omega_n,$ then for any subset $U\subset B_{1-\epsilon}(p)$ with $diam\, U=1$ we have \begin{equation}\aligned\label{LR2211} \mathcal{H}^n\left(M\cap U\right)\le(1+\epsilon)\omega_n2^{-n}. 
\endaligned \end{equation} \end{proposition} \begin{proof} Assume that there are a constant $\epsilon_*\in(0,\f12]$, a sequence of $(n+1)$-dimensional smooth geodesic balls $B_1(p_i)$ with Ricci curvature $\ge-\f1i$ and $\mathcal{H}^{n+1}\left(B_1(p_i)\right)\ge(1-\f1i)\omega_{n+1}$, and a sequence of area-minimizing hypersurfaces $M_i$ in $B_1(p_i)$ with $p_i\in M_i$, $\p M_i\subset\p B_1(p_i)$ and $\mathcal{H}^n(M_i)\le(1+\f1i)\omega_n$ such that \begin{equation}\aligned\label{Mi||ep*} \mathcal{H}^n\left(M_i\cap U_i\right)>(1+\epsilon_*)\omega_n2^{-n} \endaligned \end{equation} for some sequence of sets $U_i\subset B_{1-\epsilon_*}(p_i)$ with diam$\, U_i=1$. We fix a constant $t\in(0,\epsilon_*)$. From Theorem \ref{Conv0}, without loss of generality, we can assume that $B_{1-t}(p_i)$ converges to a Euclidean ball $\overline{B_{1-t}(0)}$ in the Gromov-Hausdorff sense, $M_i$ converges to an area-minimizing hypersurface $M_{t,\infty}$ through $0$ in $B_{1-t}(0)$ in the induced Hausdorff sense, and $U_i$ converges to a closed set $U_{t,\infty}$ in $\overline{B_{1-\epsilon_*}(0)}$ in the induced Hausdorff sense with diam$\, U_{t,\infty}=1$. From Proposition \ref{ContiOmega} and \eqref{Mi||ep*}, we have \begin{equation}\aligned\label{Mi||ep**} \mathcal{H}^n\left(M_{t,\infty}\cap U_{t,\infty}\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap U_i\right)\ge(1+\epsilon_*)\omega_n2^{-n}. \endaligned \end{equation} From \eqref{HnMinfequMitpi} and $\mathcal{H}^n(M_i)\le(1+\f1i)\omega_n$, we get $\mathcal{H}^n(M_{t,\infty}\cap B_{1-t}(0))\le\omega_n$. From compactness of area-minimizing hypersurfaces, there is a sequence $t_i\to0$ such that $M_{t_i,\infty}$ converges to an area-minimizing hypersurface $M_\infty$ through $0$ in $B_1(0)$. Then from $\mathcal{H}^n(M_{t_i,\infty}\cap B_{1-t_i}(0))\le\omega_n$, we get $\mathcal{H}^n(M_{\infty}\cap B_1(0))\le\omega_n$, which implies flatness of $M_{\infty}$.
Without loss of generality, we assume that $U_{t_i,\infty}$ converges in the Hausdorff sense to a closed set $U_{\infty}$ in $\overline{B_{1-\epsilon_*}(0)}$ with diam$\, U_{\infty}=1$. Combining Proposition \ref{ContiOmega} and \eqref{Mi||ep**}, we get \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap U_{\infty}\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_{t_i,\infty}\cap U_{t_i,\infty}\right)\ge(1+\epsilon_*)\omega_n2^{-n}. \endaligned \end{equation} This contradicts the flatness of $M_{\infty}$, in view of the isodiametric inequality (see 2.5 in Chapter 1 of \cite{S} or \cite{FH1}\cite{HR}). We complete the proof. \end{proof} For any integer $n\ge7$, by Allard's regularity theorem there is a positive constant $\theta_n$ depending only on $n$ such that the densities of all the non-flat area-minimizing hypercones in $\mathbb R^{n+1}$ are no less than $1+\theta_n$. Namely, any area-minimizing non-flat hypercone $C$ in $\mathbb R^{n+1}$ satisfies \begin{equation}\aligned\label{DEFthn} \lim_{r\rightarrow\infty}\f1{\omega_n r^n}\mathcal{H}^n(C\cap B_r(0))\ge1+\theta_n. \endaligned \end{equation} For any integer $2\le n\le6$, let $\theta_n=\infty$. \begin{lemma}\label{almostmono} For any integer $n\ge2$ and any $0<\epsilon'<\epsilon<\theta_n$ there is a constant $\delta>0$ depending only on $n,\epsilon,\epsilon'$ such that if $B_1(p)$ is an $(n+1)$-dimensional smooth geodesic ball with Ricci curvature $\mathrm{Ric}\ge-\delta$ and $\mathcal{H}^{n+1}\left(B_1(p)\right)\ge(1-\delta)\omega_{n+1},$ $M$ is an area-minimizing hypersurface in $B_1(p)$ with $p\in M$, $\p M\subset\p B_1(p)$, $\mathcal{H}^n(M)\le(1+\epsilon')\omega_n$, then \begin{equation}\aligned\label{WRegM} \mathcal{H}^n(M\cap B_r(p))\le(1+\epsilon)\omega_nr^n\qquad \mathrm{for\ any}\ 0<r<1. \endaligned \end{equation} \end{lemma} \begin{proof} Let us prove \eqref{WRegM} by contradiction.
Assume that there are constants $0<\epsilon'<\epsilon<\theta_n$, a sequence $0<r_i<1$, a sequence of $(n+1)$-dimensional smooth geodesic balls $B_1(p_i)$ with Ricci curvature $\ge-\f1i$ and $\mathcal{H}^{n+1}\left(B_1(p_i)\right)\ge(1-\f1i)\omega_{n+1}$, and a sequence of area-minimizing hypersurfaces $M_i$ in $B_1(p_i)$ with $p_i\in M_i$ and $\p M_i\subset\p B_1(p_i)$ such that \begin{equation}\aligned\label{AssumeMi} \mathcal{H}^n(M_i)\le(1+\epsilon')\omega_n \endaligned \end{equation} and \begin{equation}\aligned\label{AssumeMiBripi} \mathcal{H}^n(M_i\cap B_{r_i}(p_i))>(1+\epsilon)\omega_nr_i^n. \endaligned \end{equation} Let us consider the density function $\Theta_i$ defined by $$\Theta_i(r)=\f{\mathcal{H}^n(M_i\cap B_r(p_i))}{\omega_nr^n}$$ for any $r\in(0,1]$. From \eqref{AssumeMi} and \eqref{AssumeMiBripi}, there is a sequence of numbers $t_i\in(r_i,1)$ such that $\Theta_i(t_i)=1+\epsilon$ and $\Theta_i(r)\le1+\epsilon$ for all $r\in(t_i,1)$. Moreover, $(1+\epsilon)\omega_nt_i^n\le(1+\epsilon')\omega_n$ implies \begin{equation}\aligned\label{tiepep'} t_i\le\left(\f{1+\epsilon'}{1+\epsilon}\right)^{1/n}. \endaligned \end{equation} Scale $B_1(p_i),M_i$ by the factor $\f{1}{t_i}$ w.r.t. $p_i$. Namely, we define $B_{1/t_i}(\xi_i)=\f{1}{t_i}B_1(p_i)$, $M_i^*=\f{1}{t_i}M_i\subset B_{1/t_i}(\xi_i)$, then $M_i^*$ is area-minimizing in $B_{1/t_i}(\xi_i)$. Up to a choice of the subsequence, $(B_{t_i^{-1/2}}(\xi_i),\xi_i)$ converges to a metric space $(N_\infty^*,\xi^*)$ in the pointed Gromov-Hausdorff sense. From \eqref{tiepep'}, $\mathrm{Ric}\ge-\f1i$ on $B_1(p_i)$ and $\mathcal{H}^{n+1}\left(B_1(p_i)\right)\ge(1-\f1i)\omega_{n+1}$, we deduce that $N_\infty^*$ contains a metric ball $B_{s_*}(\xi^*)$, which is a Euclidean ball with $s_*=\left(\f{1+\epsilon}{1+\epsilon'}\right)^{1/(2n)}$. With Theorem \ref{Conv0}, we can assume that $M_i^*\cap B_{s_*}(\xi_i)$ converges to an area-minimizing hypersurface $M_\infty^*$ in $B_{s_*}(\xi^*)$. 
From \eqref{HnMinfequMitpi} and the choice of $t_i$, we have \begin{equation}\aligned\label{Ri111} r^{-n}\mathcal{H}^n\left(M_\infty^*\cap B_{r}(\xi^*)\right)\le \mathcal{H}^n\left(M_\infty^*\cap B_{1}(\xi^*)\right)=(1+\epsilon)\omega_n \endaligned \end{equation} for all $r\in(1,s_*)$. The monotonicity of $r^{-n}\mathcal{H}^n\left(M_\infty^*\cap B_r(\xi^*)\right)$ together with \eqref{Ri111} implies that $M_\infty^*$ is a cone of density $1+\epsilon$. Since $0<\epsilon<\theta_n$, this contradicts the definition of $\theta_n$: a flat cone has density $1$, while a non-flat area-minimizing hypercone has density at least $1+\theta_n$. This completes the proof. \end{proof} Combining Proposition \ref{EnlocAM} and Lemma \ref{almostmono}, the local behavior of area-minimizing hypersurfaces in a class of manifolds can be controlled by large-scale conditions in the following sense. \begin{theorem}\label{IntReg} For any integer $n\ge2$, and any constant $\epsilon\in(0,\f12]$, there is a constant $\delta>0$ depending only on $n,\epsilon$ such that if $B_1(p)$ is an $(n+1)$-dimensional smooth geodesic ball with Ricci curvature $\ge-\delta$, $\mathcal{H}^{n+1}\left(B_1(p)\right)\ge(1-\delta)\omega_{n+1},$ and $M$ is an area-minimizing hypersurface in $B_1(p)$ with $p\in M$, $\p M\subset\p B_1(p)$ and $\mathcal{H}^n(B_1(p)\cap M)<(1+\delta)\omega_n$, then for any subset $U\subset B_{1-\epsilon}(p)$ we have \begin{equation}\aligned \mathcal{H}^n(M\cap U)\le(1+\epsilon)\omega_{n}2^{-n}(\mathrm{diam}\, U)^n. \endaligned \end{equation} In particular, $M$ is smooth in $B_{1-\epsilon}(p)$ provided we choose $\epsilon<\theta_n$. \end{theorem} Note that the singular set of every area-minimizing hypersurface in a manifold has codimension at least 7. Then combining Theorem \ref{Conv0}, Proposition \ref{EnlocAM} and Lemma \ref{almostmono}, we can prove the following result by contradiction.
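\textbf{Remark.} The codimension bound above is sharp; the following standard example is included only for illustration and is not used in the proofs below. The Simons cone
\begin{equation}\aligned
C_S=\left\{x\in\mathbb R^{8}\ \Big|\ x_1^2+x_2^2+x_3^2+x_4^2=x_5^2+x_6^2+x_7^2+x_8^2\right\}
\endaligned
\end{equation}
is a non-flat area-minimizing hypercone in $\mathbb R^{8}$ by the theorem of Bombieri-De Giorgi-Giusti, and it is singular precisely at its vertex, the origin. Hence a $7$-dimensional area-minimizing hypersurface may have a singular set of codimension exactly $7$, and by \eqref{DEFthn} the density of $C_S$ at infinity is no less than $1+\theta_7$.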
\begin{theorem}\label{PARTReg} For any integer $n\ge2$, and any constant $\epsilon\in(0,\f12]$, there is a constant $\delta>0$ depending only on $n,\epsilon$ such that if $B_1(p)$ is an $(n+1)$-dimensional smooth geodesic ball with Ricci curvature $\ge-\delta$, $\mathcal{H}^{n+1}\left(B_1(p)\right)\ge(1-\delta)\omega_{n+1},$ and $M$ is an area-minimizing hypersurface in $B_1(p)$ with $\p M\subset\p B_1(p)$, then there is a collection of balls $\{B_{s_i}(x_i)\}_{i=1}^{N_\epsilon}$ with $\sum_{i=1}^{N_\epsilon}s_i^{n-7+\epsilon}<\epsilon$ so that for any subset $U\subset B_{1-\epsilon}(p)\setminus\cup_{i=1}^{N_\epsilon}B_{s_i}(x_i)$ with $\mathrm{diam}\, U<\delta$ we have \begin{equation}\aligned \mathcal{H}^n(M\cap U)\le(1+\epsilon)\omega_{n}2^{-n}(\mathrm{diam}\, U)^n. \endaligned \end{equation} \end{theorem} \section{Continuity for the volume function of area-minimizing hypersurfaces} Let $N_i$ be a sequence of $(n+1)$-dimensional complete smooth manifolds with \eqref{Ric} and \eqref{Vol}. Up to passing to a subsequence, we assume that $\overline{B_1(p_i)}$ converges to a metric ball $\overline{B_1(p_\infty)}$ in the Gromov-Hausdorff sense. Let $M_i$ be an area-minimizing hypersurface in $B_1(p_i)\subset N_i$ with $\p M_i\subset\p B_1(p_i)$. Suppose that $M_i$ converges to a closed set $M_\infty\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense. \begin{lemma}\label{LowerMinfty} For any $\delta>0$ there is a constant $r_\delta\in(0,1)$ such that if $0<r\le r_\delta$ and $x\in M_\infty\cap B_{1-3r}(p_\infty)$ with $d_{GH}\left(B_{2r}(x),B_{2r}(0)\right)<r_\delta r,$ then for any sequence $M_i\ni x_i\rightarrow x$, there holds \begin{equation}\aligned\label{1-deBrMinf} (1-\delta)\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{(1+r_\delta)r}(x_i)}\right)\le\mathcal{H}^n\left(M_\infty\cap \overline{B_{r}(x)}\right).
\endaligned \end{equation} \end{lemma} \begin{proof} We only need to show that there is a constant $r_\delta'\in(0,1)$ such that if $0<r\le r_\delta'$ and $x\in M_\infty\cap B_{1-3r}(p_\infty)$ with $d_{GH}\left(B_{2r}(x),B_{2r}(0)\right)<r_\delta' r,$ then for any sequence $M_i\ni x_i\rightarrow x$ \begin{equation}\aligned\label{1-deBrMinf*} (1-\delta/2)\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{(1+s)r}(x_i)}\right)\le\mathcal{H}^n\left(M_\infty\cap \overline{B_{(1+s)r}(x)}\right)\quad \mathrm{for\ any}\ 0<s\le r_\delta'. \endaligned \end{equation} For sufficiently small $r_\delta''>0$ we may assume \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap \overline{B_{(1+r_\delta'')r}(x)}\right)\le\f{1-\delta/2}{1-\delta}\mathcal{H}^n\left(M_\infty\cap \overline{B_{r}(x)}\right). \endaligned \end{equation} We choose $r_\delta=\min\{r_\delta',r_\delta''\}$; then \eqref{1-deBrMinf} holds. Let us prove \eqref{1-deBrMinf*} by contradiction. We assume that there are a constant $\epsilon_0>0$, three sequences of positive constants $r_j,r_j^*\le r_j'\rightarrow0$, a sequence of points $z_j\in M_\infty\cap B_{1-3r_j}(p_\infty)$, and a sequence of points $M_i\ni z_{i,j}\rightarrow z_j$ such that $ d_{GH}\left(B_{2r_j}(z_j),B_{2r_j}(0)\right)<r_j'r_j $ and \begin{equation}\aligned\label{ep0CONTR} (1-\epsilon_0)\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{(1+r_j^*)r_j}(z_{i,j})}\right)>\mathcal{H}^n\left(M_\infty\cap \overline{B_{(1+r_j^*)r_j}(z_j)}\right). \endaligned \end{equation} Fix an arbitrary small constant $\epsilon>0$. For any $0<\tau<\epsilon$ and each $j$, let $\{U_{j,k}\}_{k=1}^{N_{\tau,\epsilon,j}}$ be a finite covering of $M_{\infty}\cap \overline{B_{(1+r_j^*)r_j}(z_j)}$ with $\mathrm{diam} U_{j,k}\le\tau r_j$ such that \begin{equation}\aligned\label{MBrjNtauepj**} \mathcal{H}^n\left(M_{\infty}\cap \overline{B_{(1+r_j^*)r_j}(z_j)}\right)> \f{\omega_n}{2^n}\sum_{k=1}^{N_{\tau,\epsilon,j}} (\mathrm{diam} U_{j,k})^n-\epsilon r_j^n.
\endaligned \end{equation} Without loss of generality, we assume that all the $U_{j,k}$ are open sets. There is a subsequence $i_j$ so that $d_{GH}\left(B_{2r_j}(z_{i_j,j}),B_{2r_j}(0)\right)<r_j'r_j$ and \begin{equation}\aligned\label{MijsipMeprj} \mathcal{H}^n\left(M_{i_j}\cap \overline{B_{(1+r_j^*)r_j}(z_{i_j,j})}\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{(1+r_j^*)r_j}(z_{i,j})}\right)-\epsilon r_j^n. \endaligned \end{equation} Up to passing to a subsequence, for each $j$ there is a collection of open sets $U_{j,k}^*\subset B_{3r_j/2}(z_{i_j,j})$ so that $M_{i_j}\cap \overline{B_{(1+r_j^*)r_j}(z_{i_j,j})}\subset\bigcup_{k=1}^{N_{\tau,\epsilon,j}}U_{j,k}^*$ and $\mathrm{diam} U_{j,k}^*=\mathrm{diam} U_{j,k}$. From Theorem \ref{PARTReg}, for each $j$ there are $\tau_\epsilon>0$ and a collection of balls $\{B_{s_{j,k}}(x_{j,k})\}_{k=1}^{N_\epsilon}\subset B_{3r_j/2}(z_{i_j,j})$ with $\sum_{k=1}^{N_\epsilon}s_{j,k}^n<\epsilon r_j^n$ so that for any subset $U\subset B_{3r_j/2}(z_{i_j,j})\setminus\cup_{k=1}^{N_\epsilon}B_{s_{j,k}}(x_{j,k})$ with $\mathrm{diam}\, U<\tau_\epsilon r_j$ we have \begin{equation}\aligned\label{1-epMijU} (1-\epsilon)\mathcal{H}^n(M_{i_j}\cap U)\le\omega_{n}2^{-n}(\mathrm{diam}\, U)^n. \endaligned \end{equation} Up to passing to a further subsequence of $i_j$, we can require that $\tau_\epsilon$ depend only on $n,\epsilon,r_j$. Hence we may assume $\tau<\tau_\epsilon$.
Let $F_{j,\epsilon}=\bigcup_{k=1}^{N_\epsilon} B_{s_{j,k}}(x_{j,k})$; then, combining Lemma \ref{upbdareaM}, we have \begin{equation}\aligned\label{HNtaurjEjep} &\mathcal{H}^{n}(M_{i_j}\cap B_{2\tau r_j}(F_{j,\epsilon}))\le \sum_{k=1}^{N_\epsilon}\mathcal{H}^{n}\left(M_{i_j}\cap B_{s_{j,k}+2\tau r_j}(x_{j,k})\right)\\ \le& c_{n,\kappa}\sum_{k=1}^{N_\epsilon}(s_{j,k}+2\tau r_j)^n\le c_{n,\kappa}\sum_{k=1}^{N_\epsilon}c_n(s_{j,k}^n+\tau^n r_j^n)\le c_{n,\kappa}c_n(\epsilon+N_\epsilon\tau^n) r_j^n, \endaligned \end{equation} where $c_n$ is a constant depending only on $n$. Denote $\mathcal{I}_{\tau,\epsilon,j}=\{k=1,\cdots,N_{\tau,\epsilon,j}|\, U_{j,k}^*\cap F_{j,\epsilon}=\emptyset\}$. Note $\mathrm{diam} U_{j,k}^*=\mathrm{diam} U_{j,k}\le\tau r_j$. Then $M_{i_j}\cap\overline{B_{(1+r_j^*)r_j}(z_{i_j,j})}\subset\bigcup_{k\in \mathcal{I}_{\tau,\epsilon,j}}U_{j,k}^*\cup B_{2\tau r_j}(F_{j,\epsilon})$. From \eqref{MBrjNtauepj**}\eqref{1-epMijU}\eqref{HNtaurjEjep}, \begin{equation*}\aligned &\mathcal{H}^n\left(M_{\infty}\cap \overline{B_{(1+r_j^*)r_j}(z_j)}\right)> \f{\omega_n}{2^n}\sum_{k\in \mathcal{I}_{\tau,\epsilon,j}} (\mathrm{diam} U_{j,k}^*)^n-\epsilon r_j^n\\ \ge&(1-\epsilon)\sum_{k\in \mathcal{I}_{\tau,\epsilon,j}} \mathcal{H}^n(M_{i_j}\cap U_{j,k}^*)-\epsilon r_j^n-c_{n,\kappa}c_n(\epsilon+N_\epsilon\tau^n) r_j^n+\mathcal{H}^{n}(M_{i_j}\cap B_{2\tau r_j}(F_{j,\epsilon}))\\ \ge&(1-\epsilon)\mathcal{H}^n\left(M_{i_j}\cap \overline{B_{(1+r_j^*)r_j}(z_{i_j,j})}\right)-\epsilon r_j^n-c_{n,\kappa}c_n(\epsilon+N_\epsilon\tau^n) r_j^n. \endaligned \end{equation*} Combining \eqref{MijsipMeprj}, we get \begin{equation}\aligned &\mathcal{H}^n\left(M_{\infty}\cap \overline{B_{(1+r_j^*)r_j}(z_j)}\right)\\ \ge&(1-\epsilon)\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{(1+r_j^*)r_j}(z_{i,j})}\right)-2\epsilon r_j^n-c_{n,\kappa}c_n(\epsilon+N_\epsilon\tau^n)r_j^n.
\endaligned \end{equation} Letting $\tau\to0$ first and then $\epsilon\to0$ implies \begin{equation}\aligned \mathcal{H}^n\left(M_{\infty}\cap \overline{B_{(1+r_j^*)r_j}(z_j)}\right)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{(1+r_j^*)r_j}(z_{i,j})}\right), \endaligned \end{equation} which contradicts \eqref{ep0CONTR}. This completes the proof. \end{proof} Now let us bound the volume of the limit $M_\infty$ from above in terms of the volumes of the hypersurfaces $M_i$ in the induced Hausdorff topology. Let $\mathcal{R}$ and $\mathcal{S}$ denote the regular set and the singular set of $B_1(p_\infty)$, respectively. For any $\epsilon>0$, let $\mathcal{R}_\epsilon$ and $\mathcal{S}_\epsilon$ denote the sets defined in \eqref{RSep}. Note that $\mathcal{S}_\epsilon\subset\mathcal{S}$, and $\mathcal{S}_\epsilon$ is closed in $B_1(p_\infty)$ with Hausdorff dimension $\le n-1$. For each $0<t<1$, there are finitely many balls $\{B_{r_{y_j}}(y_j)\}_{j=1}^{N_\epsilon}\subset B_1(p_\infty)$ with $\mathcal{S}_\epsilon\cap\overline{B_{t}(p_\infty)}\subset\bigcup_{j=1}^{N_\epsilon}B_{r_{y_j}}(y_j)$ so that \begin{equation}\aligned\label{EepUPB} r_{y_j}<\epsilon,\qquad \mathrm{and}\qquad \mathcal{H}^n_\epsilon(E_{\epsilon,t})\le\omega_n\sum_{j=1}^{N_\epsilon}r_{y_j}^n<\epsilon\quad \mathrm{with}\ E_{\epsilon,t}\triangleq\bigcup_{j=1}^{N_\epsilon}B_{r_{y_j}}(y_j). \endaligned \end{equation} Let $\Lambda_\epsilon$ be the function defined in \eqref{Laep}. Since $\Lambda_\epsilon$ is lower semicontinuous on $\mathcal{R}_\epsilon$, we have \begin{equation}\aligned\label{Laeptxinfty*} \Lambda_{\epsilon,t}^*\triangleq\inf\left\{\Lambda_\epsilon(x)|\, x\in M_\infty\cap\overline{B_{t}(p_\infty)}\setminus E_{\epsilon,t}\right\}>0. \endaligned \end{equation} \begin{lemma}\label{UpMinfty*} Let $\Omega$ be an open set in $B_t(p_\infty)$.
If $\Omega_i\subset B_1(p_i)$ are open with $\Omega_i\rightarrow\Omega$ in the induced Hausdorff sense, then for any $B_s(\Omega)\subset B_t(p_\infty)$ with $s>0$, we have $$\mathcal{H}^n\left(M_\infty\cap B_s(\Omega)\right)\le \min\left\{\liminf_{i\rightarrow\infty}\mathcal{H}^n(M_i\cap B_s(\Omega_i)),\mathcal{M}_*\left(M_\infty, B_s(\Omega)\right)\right\}.$$ \end{lemma} \begin{proof} Let $\epsilon\in(0,s)$. From Lemma \ref{UPPLOWMinfty} and Lemma \ref{nuinftyKthj} in Appendix II, there is a sequence of mutually disjoint balls $\{B_{\theta_j}(x_j)\}_{j=1}^\infty$ with $x_j\in M_\infty\cap \overline{B_{s-\epsilon}(\Omega)}\setminus E_{\epsilon,t}$ and $\theta_j<\min\{\Lambda_{\epsilon,t}^*,\epsilon/3\}$ such that \begin{equation}\aligned\label{MinftyBt'Eep} \mathcal{H}^n_\epsilon\left(M_\infty\cap \overline{B_{s-\epsilon}(\Omega)}\setminus E_{\epsilon,t}\right)\le\omega_n\sum_{j=1}^{\infty}\theta_j^n. \endaligned \end{equation} For each integer $j\ge 1$, there is a sequence of points $x_{i,j}\in M_i$ with $\lim_{i\rightarrow\infty}x_{i,j}=x_j$. By Corollary \ref{epdeHM} and the definition of $\Lambda_\epsilon$ in \eqref{Laep}, for any integer $k\ge1$ there are a positive function $\psi_\epsilon$ (independent of $k$) with $\lim_{\epsilon\rightarrow0}\psi_\epsilon=0$ and an integer $i_k$ (depending on $k$) such that \begin{equation}\aligned\label{430*} (1+\psi_\epsilon)\mathcal{H}^n\left(M_i\cap B_{\theta_j}(x_{i,j})\right)&\ge\omega_n\theta_j^{n},\\ (1+\psi_\epsilon)\mathcal{H}^{n+1}\left(B_\theta(M_i)\cap B_{\theta_j}(x_{i,j})\right)&\ge2\theta\omega_n\theta_j^{n}\quad \mathrm{for\ any}\ \theta\in(0,\psi_\epsilon) \endaligned \end{equation} for each $i\ge i_k$ and each $1\le j\le k$. By the selection of $B_{\theta_j}(x_j)$, we can require $B_{\theta_{j_1}}(x_{i,j_1})\cap B_{\theta_{j_2}}(x_{i,j_2})=\emptyset$ for all $j_1\neq j_2$ and all sufficiently large $i$.
Hence there is a constant $\delta_k>0$ (depending on $k$) such that, by \eqref{430*}, \begin{equation}\aligned\label{wnthjn*} \omega_n\sum_{j=1}^{k}\theta_j^{n}\le&(1+\psi_\epsilon)\mathcal{H}^n(M_i\cap B_s(\Omega_i)),\\ 2\theta\omega_n\sum_{j=1}^{k}\theta_j^{n}\le&(1+\psi_\epsilon)\mathcal{H}^{n+1}\left(B_\theta(M_i)\cap B_s(\Omega_i)\right) \endaligned \end{equation} for each $i\ge i_k$ and each $\theta\in(0,\min\{\psi_\epsilon,\delta_k\})$. With Lemma \ref{UpMinfty}, letting $i\rightarrow\infty$ and then $\theta\rightarrow0$ yields \begin{equation}\aligned \omega_n\sum_{j=1}^{k}\theta_j^{n}\le&(1+\psi_\epsilon)\liminf_{i\rightarrow\infty}\mathcal{H}^n(M_i\cap B_s(\Omega_i)),\\ \omega_n\sum_{j=1}^{k}\theta_j^{n}\le&(1+\psi_\epsilon)\mathcal{M}_*\left(M_\infty, B_s(\Omega)\right). \endaligned \end{equation} Letting $k\rightarrow\infty$ gives \begin{equation}\aligned\label{432*} \omega_n\sum_{j=1}^{\infty}\theta_j^{n}\le(1+\psi_\epsilon)\min\left\{\liminf_{i\rightarrow\infty}\mathcal{H}^n(M_i\cap B_s(\Omega_i)),\mathcal{M}_*\left(M_\infty, B_s(\Omega)\right)\right\}. \endaligned \end{equation} By the definition of $\mathcal{H}^n_\epsilon$ in \eqref{mathcalHnep}, \eqref{EepUPB} and \eqref{MinftyBt'Eep}, we have \begin{equation}\aligned\label{433*} \mathcal{H}^n_\epsilon\left(M_\infty\cap \overline{B_{s-\epsilon}(\Omega)}\right) \le\omega_n\sum_{j=1}^{N_\epsilon}r_{y_j}^n+\mathcal{H}^n_\epsilon\left(M_\infty\cap \overline{B_{s-\epsilon}(\Omega)}\setminus E_{\epsilon,t}\right) <\epsilon+\omega_n\sum_{j=1}^{\infty}\theta_j^n. \endaligned \end{equation} Combining \eqref{432*} and \eqref{433*}, we have \begin{equation}\aligned\label{436*} \mathcal{H}^n_\epsilon\left(M_\infty\cap \overline{B_{s-\epsilon}(\Omega)}\right)\le\epsilon+(1+\psi_\epsilon)\min\left\{\liminf_{i\rightarrow\infty}\mathcal{H}^n(M_i\cap B_s(\Omega_i)),\mathcal{M}_*\left(M_\infty, B_s(\Omega)\right)\right\}. \endaligned \end{equation} Note that the Hausdorff measure $\mathcal{H}^n$ is a Radon measure.
Letting $\epsilon\to0$ in \eqref{436*} completes the proof. \end{proof} From Lemma \ref{UpMinfty*}, we immediately have \begin{equation}\aligned\label{HnBtpinfMiMINF} \mathcal{H}^n\left(M_\infty\cap B_t(p_\infty)\right)\le \min\left\{\liminf_{i\rightarrow\infty}\mathcal{H}^n(M_i\cap B_t(p_i)),\mathcal{M}_*\left(M_\infty, B_{t}(p_\infty)\right)\right\} \endaligned \end{equation} for each $t\in(0,1)$. Using Lemma \ref{LowerMinfty}, we are able to bound the volumes of the hypersurfaces $M_i$ from above by the volume of the limit $M_\infty$ in the induced Hausdorff topology (see Corollary \ref{ContiOmega} for a special case). \begin{lemma}\label{LimMi} For any open set $\Omega\subset\subset B_1(p_\infty)$, if $\Omega_i\subset B_1(p_i)$ are open with $\Omega_i\rightarrow\Omega$ in the induced Hausdorff sense, then $$\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{\Omega_i}\right)\le\mathcal{H}^n\left(M_\infty\cap \overline{\Omega}\right).$$ \end{lemma} \begin{proof} Suppose $\overline{\Omega}\subset B_{t}(p_\infty)$ for some $t\in(0,1)$. Let $\epsilon<1-t$, $E_{\epsilon,t}$ be as defined in \eqref{EepUPB} and $\Lambda_{\epsilon,t}^*$ be as defined in \eqref{Laeptxinfty*}. From Lemma \ref{UPPLOWMinfty} and Lemma \ref{nuinftyKthj} in Appendix II, there is a sequence of mutually disjoint balls $\{B_{\theta_j}(x_j)\}_{j=1}^\infty$ with $x_j\in M_\infty\cap \overline{\Omega}\setminus E_{\epsilon,t}$ and $\theta_j<\min\{\Lambda_{\epsilon,t}^*,\epsilon/3\}$ such that $M_\infty\cap \overline{\Omega}\setminus E_{\epsilon,t}\subset \bigcup_{1\le j\le k}B_{\theta_j+2\theta_k}(x_j)$ for all sufficiently large $k$, and \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\right)\ge\sum_{j=1}^{\infty}\mathcal{H}^n\left(M_\infty\cap B_{\theta_j}(x_j)\right)\ge\delta_{n,\kappa,v}\sum_{j=1}^{\infty}\theta_j^n =\delta_{n,\kappa,v}\lim_{k\rightarrow\infty}\sum_{j=1}^{k}(\theta_{j}+2\theta_k)^n.
\endaligned \end{equation} Hence there is an integer $j_0>1$ such that $\sum_{j=j_0}^{\infty}\theta_j^n<\epsilon$. Then \begin{equation}\aligned \limsup_{k\rightarrow\infty}\sum_{j=j_0}^{k}(\theta_{j}+2\theta_k)^n\le \lim_{k\rightarrow\infty}\sum_{j=1}^{k}(\theta_{j}+2\theta_k)^n-\sum_{j=1}^{j_0-1}\theta_j^n =\sum_{j=1}^{\infty}\theta_j^n-\sum_{j=1}^{j_0-1}\theta_j^n=\sum_{j=j_0}^{\infty}\theta_j^n<\epsilon. \endaligned \end{equation} With Lemma \ref{UPPLOWMinfty}, for all sufficiently large $k$ we have \begin{equation}\aligned\label{HnMinfgecnk*ep} &\mathcal{H}^n\left(M_\infty\cap B_{\epsilon}(\Omega)\right)\ge\sum_{j=1}^{j_0-1}\mathcal{H}^n\left(M_\infty\cap B_{\theta_j}(x_j)\right) +c_{n,\kappa}^*\sum_{j=j_0}^k(\theta_j+2\theta_k)^n-c_{n,\kappa}^*\epsilon\\ &\ge\sum_{j=1}^{j_0-1}\mathcal{H}^n\left(M_\infty\cap B_{\theta_j}(x_j)\right) +\sum_{j=j_0}^k\mathcal{H}^n\left(M_\infty\cap B_{\theta_j+2\theta_k}(x_j)\right)-c_{n,\kappa}^*\epsilon. \endaligned \end{equation} From Lemma \ref{LowerMinfty}, for suitably small $\epsilon>0$ there is a constant $\xi_\epsilon>1$ such that for each $j$ and each sequence $M_i\ni x_{j,i}\rightarrow x_j$ as $i\rightarrow\infty$, we have \begin{equation}\aligned\label{abcd111} (1-\epsilon)\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{\xi_\epsilon\theta_j}(x_{j,i})}\right)\le\mathcal{H}^n\left(M_\infty\cap \overline{B_{\theta_j}(x_j)}\right). \endaligned \end{equation} For all sufficiently large $k$, $\xi_\epsilon\theta_j>\theta_j+2\theta_k$ for each $1\le j\le j_0-1$. With \eqref{HnMinfgecnk*ep}, we have \begin{equation}\aligned\label{M*tpinftyep} \mathcal{H}^n\left(M_\infty\cap B_{\epsilon}(\Omega)\right)\ge(1-\epsilon)\limsup_{i\rightarrow\infty}\sum_{j=1}^k\mathcal{H}^n\left(M_i\cap B_{\theta_j+2\theta_k}(x_{j,i})\right)-c_{n,\kappa}^*\epsilon. \endaligned \end{equation} Recall that $\mathcal{S}_\epsilon\cap\overline{B_{t}(p_\infty)}\subset E_{\epsilon,t}=\bigcup_{j=1}^{N_\epsilon}B_{r_{y_j}}(y_j)$.
Now we fix an integer $k$ sufficiently large so that \begin{equation}\aligned\label{M01ep} M_\infty\cap \overline{\Omega}\subset\bigcup_{j=1}^{k}B_{\theta_j+2\theta_k}(x_j)\cup\bigcup_{j=1}^{N_\epsilon}B_{r_{y_j}}(y_j). \endaligned \end{equation} Let $y_{j,i}\in M_i$ with $y_{j,i}\to y_j$ as $i\to\infty$. Then from \eqref{M01ep}, we have \begin{equation}\aligned\label{5.37} M_i\cap \overline{\Omega_i}\subset\bigcup_{j=1}^{k}B_{\theta_j+2\theta_k}(x_{j,i})\cup\bigcup_{j=1}^{N_\epsilon}B_{r_{y_j}}(y_{j,i}) \endaligned \end{equation} for all sufficiently large $i$. Combining Lemma \ref{UPPLOWMinfty} and \eqref{EepUPB}\eqref{M*tpinftyep}\eqref{5.37}, we have \begin{equation}\aligned\nonumber \mathcal{H}^n\left(M_\infty\cap B_{\epsilon}(\Omega)\right)\ge & (1-\epsilon)\limsup_{i\rightarrow\infty}\sum_{j=1}^k\mathcal{H}^n\left(M_i\cap B_{\theta_j+2\theta_k}(x_{j,i})\right)-c_{n,\kappa}^*\epsilon\\ +& \limsup_{i\rightarrow\infty}\sum_{j=1}^{N_\epsilon}\mathcal{H}^n\left(M_i\cap B_{r_{y_j}}(y_{j,i})\right)-c_{n,\kappa}^*\sum_{j=1}^{N_\epsilon}r_{y_j}^n\\ \ge&(1-\epsilon)\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{\Omega_i}\right)-c_{n,\kappa}^*\epsilon-c_{n,\kappa}^*\omega_n^{-1}\epsilon. \endaligned \end{equation} Letting $\epsilon\to0$ completes the proof. \end{proof} Combining \eqref{HnBtpinfMiMINF} and Lemma \ref{LimMi}, we immediately have the following result. \begin{theorem}\label{LimMiMib} For each integer $i\ge1$, let $M_i$ be an area-minimizing hypersurface in $B_1(p_i)$ with $\p M_i\subset\p B_1(p_i)$.
If $M_i$ converges to a closed set $M_\infty\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense, then for any $t\in(0,1)$ we have \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap B_t(p_\infty)\right)\le& \liminf_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap B_t(p_i)\right)\\ \le&\limsup_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap \overline{B_{t}(p_i)}\right)\le\mathcal{H}^n\left(M_\infty\cap \overline{B_{t}(p_\infty)}\right). \endaligned \end{equation} \end{theorem} In particular, if $\mathcal{H}^n\left(M_\infty\cap\p B_{t}(p_\infty)\right)=0$ for some $t\in(0,1)$, then the above theorem implies that the limit exists and \begin{equation}\aligned\label{EQUIVMEA} \mathcal{H}^n\left(M_\infty\cap B_t(p_\infty)\right)=\lim_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap B_t(p_i)\right). \endaligned \end{equation} From \eqref{Hn1Bdede*} and Lemma \ref{LimMi}, for any $\tau\in(0,1)$ we have \begin{equation*}\aligned \f{1-n\kappa\delta}{2\delta}\mathcal{H}^{n+1}\left(B_\delta(M_\infty)\cap B_{\tau}(p_\infty)\right)\le \liminf_{i\rightarrow\infty}\mathcal{H}^n\left(M_i\cap B_{\tau+\delta}(p_i)\right)\le\mathcal{H}^n\left(M_\infty\cap \overline{B_{\tau+\delta}(p_\infty)}\right), \endaligned \end{equation*} which implies \begin{equation}\aligned\label{M*Btau} \mathcal{M}^*\left(M_\infty, B_{\tau}(p_\infty)\right)\le\mathcal{H}^n\left(M_\infty\cap \overline{B_{\tau}(p_\infty)}\right). \endaligned \end{equation} Combining Lemma \ref{UpMinfty*} and \eqref{M*Btau}, we get the following result. \begin{corollary}\label{M**Btau} For each integer $i\ge1$, let $M_i$ be an area-minimizing hypersurface in $B_1(p_i)$ with $\p M_i\subset\p B_1(p_i)$.
If $M_i$ converges to a closed set $M_\infty\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense, then for any $\tau\in(0,1)$ \begin{equation*}\aligned \mathcal{H}^n\left(M_\infty\cap B_{\tau}(p_\infty)\right)\le\mathcal{M}_*\left(M_\infty, B_{\tau}(p_\infty)\right)\le\mathcal{M}^*\left(M_\infty, B_{\tau}(p_\infty)\right) \le\mathcal{H}^n\left(M_\infty\cap \overline{B_{\tau}(p_\infty)}\right). \endaligned \end{equation*} \end{corollary} \section{Singular sets related to limits of area-minimizing hypersurfaces} Let $C_*$ be a 2-dimensional metric cone with cross section $\Gamma$ and vertex at $o_*$, where $\Gamma$ is a round circle of radius $\le1$. Let $C=C_*\times\mathbb R^{n-1}$ with the standard product metric and $o=(o_*,0^{n-1})\in C$. Let $\mathcal{C}_{n+1}$ denote the set of all the cones $C$ defined above. Clearly, the Euclidean space $\mathbb R^{n+1}$ with the standard flat metric belongs to $\mathcal{C}_{n+1}$. Each cone $C\in \mathcal{C}_{n+1}$ carries the flat Euclidean metric on its regular part. Let us prove a monotonicity formula for limits of area-minimizing hypersurfaces in a metric cone $C\in\mathcal{C}_{n+1}$ as follows. \begin{lemma}\label{pM+Minfty} Let $Q_i$ be a sequence of $(n+1)$-dimensional complete Riemannian manifolds with Ricci curvature $\ge-(n-1)R_i^{-2}$ on $B_{R_i}(q_i)\subset Q_i$ for some sequence $R_i\rightarrow\infty$. Suppose that $(Q_i,q_i)$ converges to a cone $(C,o)$ in $\mathcal{C}_{n+1}$ in the pointed Gromov-Hausdorff sense. Let $\Sigma_i$ be a sequence of area-minimizing hypersurfaces in $B_1(q_i)\subset Q_i$ with $\p \Sigma_i\subset\p B_1(q_i)$, which converges in the induced Hausdorff sense to a closed set $\Sigma$ in $C$. Then for any $0<t'<t<1$, we have \begin{equation}\aligned\label{t1t2HnMpinfty} t^{-n}\mathcal{H}^n\left(\Sigma\cap B_{t}(o)\right)\ge (t')^{-n}\mathcal{H}^n\left(\Sigma\cap \overline{B_{t'}(o)}\right).
\endaligned \end{equation} \end{lemma} \begin{proof} Let $\mathcal{S}_C$ denote the singular set of $C$. From Theorem \ref{Conv0}, there is a multiplicity one rectifiable $n$-stationary varifold $V$ in $B_1(o)\setminus \mathcal{S}_C$ with $\mathrm{supp}\,V\cap B_1(o)=\Sigma\cap B_1(o)$ outside $\mathcal{S}_C$. Note that $B_1(o)\setminus \mathcal{S}_C$ carries the standard flat Euclidean metric. Denote $\Sigma^*=\Sigma\cap B_1(o)\setminus\mathcal{S}_C$. Let $\rho$ be the distance function from $o$ in $C$, and $\p_\rho$ denote the unit radial vector perpendicular to $\p B_\rho(o)$. For any smooth function $f$ with compact support in $C\setminus \mathcal{S}_C$ and any $0<t_1<t_2<1$, we have the following identity (see \cite{CM} or \cite{S} for instance) \begin{equation}\aligned\label{MVfSi} &t_2^{-n}\int_{\Sigma\cap B_{t_2}(o)}f-t_1^{-n}\int_{\Sigma\cap B_{t_1}(o)}f\\ =&\int_{\Sigma\cap B_{t_2}(o)\setminus B_{t_1}(o)}f|(\p_\rho)^N|^2\rho^{-n}+\int_{t_1}^{t_2}\tau^{-n-1}\int_{\Sigma\cap B_\tau(o)}\langle \nabla_\Sigma f,\p_\rho\rangle d\tau, \endaligned \end{equation} where $\nabla_\Sigma$ denotes the Levi-Civita connection of the regular part of $\Sigma^*$, $(\p_\rho)^N$ denotes the projection of $\p_\rho$ to the normal bundle of the regular part of $\Sigma^*$. Since $\mathcal{S}_C$ has Hausdorff dimension $\le n-1$, we have $\mathcal{H}^n\left(\Sigma\cap\mathcal{S}_C\right)=0$. Let $\rho_{\mathcal{S}_C}$ denote the distance function from $\mathcal{S}_C$. For any small $0<\epsilon<t_1$, let $\phi_\epsilon$ be a Lipschitz function on $C$ defined by $\phi_\epsilon=1$ on $\{\rho_{\mathcal{S}_C}\ge2\epsilon\rho\}$, $\phi_\epsilon=\rho_{\mathcal{S}_C}/(\epsilon\rho)-1$ on $\{\epsilon\rho\le\rho_{\mathcal{S}_C}<2\epsilon\rho\}$, and $\phi_\epsilon=0$ on $\{\rho_{\mathcal{S}_C}<\epsilon\rho\}$.
Then \begin{equation}\aligned \langle \nabla_C \phi_\epsilon,\p_\rho\rangle =0\qquad \mathrm{a.e.\ \ on} \ C\setminus \mathcal{S}_C, \endaligned \end{equation} where $\nabla_C$ denotes the Levi-Civita connection of $C\setminus \mathcal{S}_C$. Let $\eta_\epsilon$ be a Lipschitz function on $C$ defined by $\eta_\epsilon=1$ on $\{\rho\ge2\epsilon\}$, $\eta_\epsilon=\rho/\epsilon-1$ on $\{\epsilon\le\rho<2\epsilon\}$, and $\eta_\epsilon=0$ on $\{\rho<\epsilon\}$. Note that $\phi_\epsilon \eta_\epsilon$ is Lipschitz on $\Sigma^*$ for a.e. $\epsilon>0$. Then for $\tau\in[t_1,t_2]$ \begin{equation}\aligned &\left|\int_{\Sigma\cap B_\tau(o)}\langle \nabla_\Sigma(\phi_\epsilon \eta_\epsilon),\p_\rho\rangle \right|=\left|\int_{\Sigma\cap B_\tau(o)}\phi_\epsilon\langle \nabla_\Sigma\eta_\epsilon,\p_\rho\rangle \right|\\ \le&\int_{\Sigma\cap B_{2\epsilon}(o)\setminus B_\epsilon(o)}\f1{\epsilon}\le\f1{\epsilon}\mathcal{H}^n(\Sigma\cap B_{2\epsilon}(o)). \endaligned \end{equation} With Lemma \ref{UPPLOWMinfty}, we get \begin{equation}\aligned \lim_{\epsilon\rightarrow0}\int_{\Sigma\cap B_\tau(o)}\langle \nabla_\Sigma(\phi_\epsilon \eta_\epsilon),\p_\rho\rangle=0. \endaligned \end{equation} Choosing the function $f$ in \eqref{MVfSi} to approximate $\phi_\epsilon\eta_\epsilon$ and then letting $\epsilon\to0$ implies \begin{equation}\aligned\label{MontSi} &t_2^{-n}\mathcal{H}^n(\Sigma\cap B_{t_2}(o))-t_1^{-n}\mathcal{H}^n(\Sigma\cap B_{t_1}(o))\\ =&\int_{\Sigma^*\cap B_{t_2}(o)\setminus B_{t_1}(o)}|(\p_\rho)^N|^2\rho^{-n}. \endaligned \end{equation} For any $0<t'<t<1$, taking $t_2=t$ and letting $t_1\to t'$ with $t_1>t'$ completes the proof. \end{proof} Let $N_i$ be a sequence of $(n+1)$-dimensional complete smooth manifolds with \eqref{Ric} and \eqref{Vol}. Up to passing to a subsequence, we assume that $\overline{B_1(p_i)}$ converges to a metric ball $\overline{B_1(p_\infty)}$ in the Gromov-Hausdorff sense.
Let $M_i$ be an area-minimizing hypersurface in $B_1(p_i)\subset N_i$ with $\p M_i\subset\p B_1(p_i)$. Suppose that $M_i$ converges to a closed set $M_\infty\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense. Let $\mathcal{S}^{n-2}$ be the subset of $B_1(p_\infty)$ defined in \eqref{Sk}. Using Lemma \ref{pM+Minfty}, we have the following cone property. \begin{theorem}\label{Covcone} For $y_\infty\in M_\infty\cap B_1(p_\infty)\setminus\mathcal{S}^{n-2}$, let $s_j$ be a sequence with $s_j\to0^+$ as $j\to\infty$ so that $\f1{s_j}(B_1(p_\infty),y_\infty)$ converges to a metric cone $(C,o)$ in the pointed Gromov-Hausdorff sense with $C\in\mathcal{C}_{n+1}$, and $\f1{s_j}(M_\infty,y_\infty)$ converges in the induced Hausdorff sense to $(M^*,o)$ for some closed set $M^*\subset C$. Then for any sequence $\rho_k\rightarrow0^+$, there is a subsequence $\rho_{k'}$ of $\rho_k$ such that $\f1{\rho_{k'}}(M^*,o)$ converges in the induced Hausdorff sense to a metric cone $(C^*,o)$ with $C^*\subset C$. Moreover, there is a sequence $r_j\rightarrow0^+$ such that $\f1{r_j}(B_1(p_\infty),y_\infty)$ converges to $(C,o)$ in the pointed Gromov-Hausdorff sense, and $\f1{r_j}(M_\infty,y_\infty)$ converges in the induced Hausdorff sense to $(C^*,o)$. \end{theorem} \begin{proof} Given $t>1$, by a diagonal argument (up to passing to subsequences) we may assume $2ts_j\rho_k\le 1-d(p_\infty,y_\infty)$ and \begin{equation}\aligned\label{GHrjykl000} d_{GH}\left(B_{ts_j\rho_{k}}(y_{\infty}),B_{ts_j\rho_{k}}(o)\right)<s_j\rho_{k}/k \endaligned \end{equation} for each integer $j\ge k>0$. Here, $B_{r}(o)$ is the ball in $C$ centered at its vertex $o$ with radius $r$. Let $\Phi_{k,j}:\,B_{ts_j\rho_k}(y_{\infty})\to B_{ts_j\rho_k}(o)\subset C$ denote a $3s_j\rho_{k}/k$-Hausdorff approximation.
Up to passing to a subsequence of $j$, we can assume \begin{equation}\aligned\label{eq510add000} d_{H}\left(s_j^{-1}\Phi_{k,j}(M_{\infty}\cap B_{ts_j\rho_{k}}(y_{\infty})),M^*\cap B_{t\rho_{k}}(o)\right)<\rho_{k}/k \endaligned \end{equation} for any $j\ge k$. There is a sequence of points $y_i\in M_i$ so that $y_i\to y_\infty$ as $i\to\infty$. By a diagonal argument (up to passing to subsequences) we can assume \begin{equation}\aligned\label{GHrjykl} d_{GH}\left(B_{ts_j\rho_k}(y_{i}),B_{ts_j\rho_k}(y_\infty)\right)<s_j\rho_k/k, \endaligned \end{equation} and \begin{equation}\aligned\label{eq510add} d_{H}\left(\Phi_{k,j,i}(M_{i}\cap B_{ts_j\rho_k}(y_{i})),M_\infty\cap B_{ts_j\rho_k}(y_\infty)\right)<s_j\rho_k/k \endaligned \end{equation} for any $i\ge j\ge k>0$, where $\Phi_{k,j,i}$ is a $3s_j\rho_k/k$-Hausdorff approximation from $B_{ts_j\rho_k}(y_{i})$ to $B_{ts_j\rho_k}(y_{\infty})\subset B_1(p_\infty)$. Up to passing to a subsequence of $\rho_k$, we assume that $\f1{\rho_{k}}(M^*,o)$ converges in the induced Hausdorff sense to $(C^*,o)$ for some closed subset $C^*\subset C$ with $o\in C^*$. Using the monotonicity formula \eqref{t1t2HnMpinfty}, we get \begin{equation}\aligned\label{limrkrto0+} \lim_{k\to\infty}\rho_{k}^{-n}\mathcal{H}^n\left(M^*\cap B_{\rho_{k}}(o)\right)=\lim_{r\to0^+}r^{-n}\mathcal{H}^n\left(M^*\cap B_{r}(o)\right). \endaligned \end{equation} From \eqref{GHrjykl000} and \eqref{GHrjykl}, we obtain \begin{equation}\aligned\label{dGHrkskt0001} \lim_{i\to\infty}d_{GH}\left(s_i^{-1}\rho_k^{-1}B_{ts_i\rho_k}(y_{i}),B_{t}(o)\right)=\lim_{k\to\infty}d_{GH}\left(s_k^{-1}\rho_k^{-1}B_{ts_k\rho_k}(y_{k}),B_{t}(o)\right)=0.
\endaligned \end{equation} From \eqref{eq510add000}\eqref{eq510add} and $\f1{\rho_{k}}(M^*,o)\to(C^*,o)$, we obtain \begin{equation}\aligned\label{dHrkskt0001} &\lim_{i\to\infty}d_{H}\left(s_i^{-1}\rho_k^{-1}\Phi_{k,i}(\Phi_{k,i,i}(M_i\cap B_{ts_i\rho_{k}}(y_i))),\rho_k^{-1}M^*\cap B_{t}(o)\right)\\ =&\lim_{k\to\infty}d_{H}\left(s_k^{-1}\rho_k^{-1}\Phi_{k,k}(\Phi_{k,k,k}(M_k\cap B_{ts_k\rho_{k}}(y_k))),C^*\cap B_{t}(o)\right)=0. \endaligned \end{equation} By Lemma \ref{UPPLOWMinfty} and Theorem \ref{Conv0}, there is a multiplicity one rectifiable $n$-stationary varifold $V$ in the regular part of $C\cap B_t(o)$ with $\mathrm{spt}\,V= C^*\cap B_t(o)$ such that $s_i^{-1}\rho_i^{-1}(M_{i}\cap B_{ts_i\rho_{i}}(y_i))$ converges in the induced Hausdorff sense to $ C^*\cap B_t(o)$. From Theorem \ref{LimMiMib} and \eqref{dGHrkskt0001}\eqref{dHrkskt0001}, given $0<t_1<t_2<t$ for each $k$ we have \begin{equation}\aligned \limsup_{i\to\infty}s_i^{-n}\mathcal{H}^n(M_{i}\cap\overline{B_{t_2s_{i}\rho_k}(y_i)})\le\mathcal{H}^n( M^*\cap\overline{B_{t_2\rho_k}(o)}) \endaligned \end{equation} and \begin{equation}\aligned \mathcal{H}^n( M^*\cap B_{t_1\rho_k}(o))\le\liminf_{i\to\infty}s_i^{-n}\mathcal{H}^n(M_{i}\cap B_{t_1s_{i}\rho_k}(y_i)).
\endaligned \end{equation} With \eqref{limrkrto0+}, by taking a diagonal subsequence (and passing to further subsequences if necessary), we have \begin{equation}\aligned\label{cpdejMkjadd} t_2^{-n}\liminf_{k\to\infty}s_k^{-n}\rho_k^{-n}\mathcal{H}^n\left(M_{k}\cap\overline{B_{t_2s_{k}\rho_k}(p_k)}\right)\le t_1^{-n} \limsup_{k\to\infty}s_k^{-n}\rho_k^{-n}\mathcal{H}^n(M_{k}\cap B_{t_1s_{k}\rho_k}(p_k)), \endaligned \end{equation} and still have \begin{equation}\aligned\label{dGHrkskt0002} &\lim_{k\to\infty}d_{GH}\left(s_k^{-1}\rho_k^{-1}B_{ts_k\rho_k}(y_{k}),B_{t}(o)\right)\\ =&\lim_{k\to\infty}d_{H}\left(s_k^{-1}\rho_k^{-1}\Phi_{k,k}(\Phi_{k,k,k}(M_k\cap B_{ts_k\rho_{k}}(y_k))),C^*\cap B_{t}(o)\right)=0 \endaligned \end{equation} from \eqref{dGHrkskt0001}\eqref{dHrkskt0001}. Combining Theorem \ref{LimMiMib}, \eqref{cpdejMkjadd} and \eqref{dGHrkskt0002}, we conclude that $$t_2^{-n}\mathcal{H}^n(C^*\cap B_{t_2}(o))\le t_1^{-n}\mathcal{H}^n\left(C^*\cap \overline{B_{t_1}(o)}\right).$$ From Lemma \ref{pM+Minfty}, $\tau^{-n}\mathcal{H}^n(C^*\cap B_{\tau}(o))$ is a constant on $(0,t)$. From the arbitrariness of $t$ and the uniqueness of solutions of the minimal hypersurface equation, we conclude that $C^*$ is a metric cone in $C$. From \eqref{GHrjykl000} and \eqref{eq510add000}, $\f1{s_j\rho_j}B_{ts_j\rho_j}(y_\infty)$ converges to $B_t(o)$ in the Gromov-Hausdorff sense, and $\f1{s_j\rho_j}(M_\infty\cap B_{ts_j\rho_j}(y_\infty))$ converges to $C^*\cap B_t(o)$ in the induced Hausdorff sense. There is a subsequence $\tau_j$ of $s_j\rho_j$ so that $\f1{\tau_j}B_{t^2\tau_j}(y_\infty)$ converges to $B_{t^2}(o)$ in the Gromov-Hausdorff sense, and $\f1{\tau_j}(M_\infty\cap B_{t^2\tau_j}(y_\infty))$ converges to $C^*\cap B_{t^2}(o)$ in the induced Hausdorff sense. Then we extract the subsequence of $\tau_j$ for the case $t^3$ (namely, $B_{t^3}(o)$ and $C^*\cap B_{t^3}(o)$), and continue this process inductively for $t^l$ as $l\to\infty$.
By taking the diagonal subsequence, there is a subsequence $r_j\to0^+$ as $j\to\infty$ so that $\f1{r_j}(B_1(p_\infty),y_\infty)$ converges to $(C,o)$ in the pointed Gromov-Hausdorff sense, and $\f1{r_j}(M_\infty,y_\infty)$ converges in the induced Hausdorff sense to $(C^*,o)$. This completes the proof. \end{proof} \textbf{Remark.} In general, I do not know whether $M^*$ is a metric cone in $C$ in Theorem \ref{Covcone}. I would like to thank Gioacchino Antonelli and Daniele Semola for pointing out this problem, which I overlooked in the last version. Using Lemma \ref{almostmono}, we have the uniqueness of the limit for the density function of $M_\infty$ in a special case, which plays a key role in studying the regularity of $M_\infty$ in $B_1(p_\infty)$. \begin{lemma}\label{4EQUIV} For any $x\in M_\infty\cap \mathcal{R}$, if $\liminf_{r\rightarrow0}r^{-n}\mathcal{H}^n(M_\infty\cap B_{r}(x))=\omega_n$, then $\lim_{r\rightarrow0}r^{-n}\mathcal{H}^n(M_\infty\cap B_{r}(x))=\omega_n$, and for any sequence $s_i\rightarrow0^+$ there is a subsequence $s_{i_j}$ so that $s_{i_j}^{-1}(M_\infty,x)$ converges to $(\mathbb R^n,0)$ in $\mathbb R^{n+1}$ in the induced Hausdorff sense. \end{lemma} \begin{proof} From the assumption, there is a positive sequence $r_j\rightarrow0$ so that $\lim_{j\rightarrow\infty}r_j^{-n}\mathcal{H}^n(M_\infty\cap B_{r_j}(x))=\omega_n$. Let $\theta_n$ be the constant defined in \eqref{DEFthn} such that \begin{equation}\aligned\label{Cep*} \mathcal{H}^n(C\cap B_{1}(0))\ge(1+\theta_n)\omega_n \endaligned \end{equation} for each non-flat area-minimizing hypercone $C$ in $\mathbb R^{n+1}$ with the vertex at the origin. Let $x_i\in M_i$ with $x_i\to x$ as $i\to\infty$. Fix a small constant $\epsilon\in(0,\delta_n)$ with $(1-\epsilon^2)^{-n-2}\le1+\epsilon/2$.
Since $M_i$ converges to $M_\infty\subset \overline{B_1(p_\infty)}$ in the induced Hausdorff sense, from Theorem \ref{LimMiMib} we have \begin{equation}\aligned \mathcal{H}^n(M_i\cap B_{(1-\epsilon^2)r_j}(x_i))<(1-\epsilon^2)^{-1}\mathcal{H}^n(M_\infty\cap B_{r_j}(x)) \endaligned \end{equation} for all sufficiently large $i$. Hence, there is an integer $j_\epsilon$ so that for each integer $j\ge j_\epsilon$ \begin{equation}\aligned \mathcal{H}^n(M_i\cap B_{(1-\epsilon^2)r_j}(x_i))<(1-\epsilon^2)^{-2}\omega_nr_j^n\le(1+\epsilon/2)\omega_n\left((1-\epsilon^2)r_j\right)^n \endaligned \end{equation} for all sufficiently large $i$ (depending on $j$). From Lemma \ref{almostmono}, (up to a choice of $j_\epsilon$) for each $s\in(0,(1-\epsilon^2)r_j]$ with $j\ge j_\epsilon$ we have \begin{equation}\aligned\label{MiBsxiep<} \mathcal{H}^n(M_i\cap B_{s}(x_i))<(1+\epsilon)\omega_ns^n \endaligned \end{equation} for all sufficiently large $i$ (depending on $j$). From \eqref{MiBsxiep<} and Theorem \ref{LimMiMib} again, we have \begin{equation}\aligned s^{-n}\mathcal{H}^n(M_\infty\cap B_{s}(x))\le s^{-n}\liminf_{i\to\infty}\mathcal{H}^n(M_i\cap B_{s}(x_i))\le(1+\epsilon)\omega_n. \endaligned \end{equation} Letting $s\to0$ and then $\epsilon\to0$ implies $$\lim_{s\rightarrow0}s^{-n}\mathcal{H}^n(M_\infty\cap B_{s}(x))\le\omega_n.$$ Combining Theorem \ref{Conv0} and Theorem \ref{LimMiMib}, for any sequence $s_i\rightarrow0^+$ there is a subsequence $s_{i_j}$ so that $s_{i_j}^{-1}(M_\infty,x)$ converges to $(\mathbb R^n,0)$ in $\mathbb R^{n+1}$ in the induced Hausdorff sense. This completes the proof. \end{proof} Inspired by Allard's regularity theorem and the work on the singular sets by Cheeger-Colding \cite{CCo1}, we define some approximate sets related to the singular sets of area-minimizing hypersurfaces in metric spaces as follows.
Let $\mathcal{S}_{M_\infty,\epsilon,r}$ denote the subset of the regular set $\mathcal{R}\subset B_1(p_\infty)$ consisting of all points $y$ satisfying $$\mathcal{H}^n\left(M_\infty\cap B_{s}(y)\right)\ge(1+\epsilon)\omega_n s^n\qquad \mathrm{for\ some}\ s\in(0,r].$$ Note that $M_\infty\subset\overline{B_1(p_\infty)}$. Let $$\mathcal{S}_{M_\infty,\epsilon}=\bigcap_{0<r\le1}\mathcal{S}_{M_\infty,\epsilon,r}, \qquad\mathcal{S}_{M_\infty}=\bigcup_{\epsilon>0}\mathcal{S}_{M_\infty,\epsilon}=\bigcup_{\epsilon>0}\bigcap_{0<r\le1}\mathcal{S}_{M_\infty,\epsilon,r}.$$ From Lemma \ref{4EQUIV}, every tangent cone of $M_\infty$ at each point in $\mathcal{S}_{M_\infty}$ is not a hyperplane. Let $\theta_n>0$ be the constant defined in \eqref{DEFthn}. From Lemma \ref{4EQUIV} again, for any $x\in\mathcal{S}_{M_\infty,\epsilon}$ with $\epsilon<\theta_n$ there is a constant $r_x>0$ such that \begin{equation}\aligned \mathcal{H}^n\left(M_\infty\cap B_{s}(x)\right)\ge(1+2\epsilon/3)\omega_n s^n\qquad \mathrm{for\ any}\ s\in(0,r_x]. \endaligned \end{equation} Note that $\mathcal{S}_{M_\infty,\epsilon}$ may not be closed, as $\mathcal{R}$ may not be closed in $B_1(p_\infty)$. However, from Theorem \ref{IntReg} and Theorem \ref{LimMiMib}, there is a constant $r_x^*>0$ such that \begin{equation}\aligned\label{MinfBsy1ep2} \mathcal{H}^n\left(M_\infty\cap B_{s}(y)\right)\ge(1+\epsilon/2)\omega_n s^n\qquad \mathrm{for\ any}\ s\in(0,r_x^*], \ y\in B_{r_x^*}(x)\cap\mathcal{S}_{M_\infty,\epsilon}. \endaligned \end{equation} We call $\mathcal{S}_{M_\infty}$ \emph{the singular set} of $M_\infty$ in $\mathcal{R}$. Denote $\mathcal{R}_{M_\infty,\epsilon,r}=M_\infty\cap\mathcal{R}\setminus\mathcal{S}_{M_\infty,\epsilon,r}$.
If $y\in\mathcal{R}_{M_\infty,\epsilon,r}$, then there is a number $s\in(0,r]$ so that $$\mathcal{H}^n\left(M_\infty\cap B_{s}(y)\right)<(1+\epsilon)\omega_n s^n.$$ Set $$\mathcal{R}_{M_\infty}=M_\infty\cap\mathcal{R}\setminus\mathcal{S}_{M_\infty}=\bigcap_{\epsilon>0}\bigcup_{0<r\le1}\mathcal{R}_{M_\infty,\epsilon,r}.$$ From Lemma \ref{4EQUIV}, for each $x\in \mathcal{R}_{M_\infty}$, any tangent cone of $M_\infty$ at $x$ is a hyperplane in $\mathbb R^{n+1}$. Now let us prove Theorem \ref{dimestn-7n-2}, which is divided into the following two lemmas. \begin{lemma}\label{codim7} $\mathcal{S}_{M_\infty}$ has Hausdorff dimension $\le n-7$ for $n\ge7$, and it is empty for $n<7$. \end{lemma} \begin{proof} For the case $n\ge 7$, we only need to show $\mathrm{dim}\mathcal{S}_{M_\infty,\epsilon}\le n-7$ for each $\epsilon>0$. Suppose there exists a constant $\beta>n-7$ (not necessarily an integer) such that $\mathcal{H}^\beta\left(\mathcal{S}_{M_\infty,\epsilon}\right)>0$. Then $\mathcal{H}^\beta_\infty\left(\mathcal{S}_{M_\infty,\epsilon}\right)=\lim_{t\rightarrow\infty}\mathcal{H}^\beta_t\left(\mathcal{S}_{M_\infty,\epsilon}\right)>0$ from Lemma 11.2 in \cite{Gi}. By the argument of Proposition 11.3 in \cite{Gi}, there is a point $x_0\in \mathcal{S}_{M_\infty,\epsilon}$ and a sequence $r_j\rightarrow0$ such that \begin{equation}\aligned \mathcal{H}^\beta_\infty\left(\mathcal{S}_{M_\infty,\epsilon}\cap B_{r_j}(x_0)\right)>2^{-\beta-1}\omega_\beta r_j^\beta. \endaligned \end{equation} Denote $(N_j^*,x_j)=\f1{r_j}(B_1(p_\infty),x_0)$, and $M^j_\infty=\f1{r_j}M_\infty$. Then for the ball $B_{1}(x_j)\subset N_j^*$ we have \begin{equation}\aligned \mathcal{H}^\beta_\infty\left(\mathcal{S}_{M^j_\infty,\epsilon}\cap B_{1}(x_j)\right)>2^{-\beta-1}\omega_\beta.
\endaligned \end{equation} Without loss of generality, by Theorem \ref{Conv0} we assume that $(M^j_\infty,x_j)$ converges as $j\rightarrow\infty$ in the induced Hausdorff sense to $(M^*_\infty,0)$ in $\mathbb R^{n+1}$, where $M^*_\infty$ is an area-minimizing hypersurface through the origin in $\mathbb R^{n+1}$. If $y_j\in \mathcal{S}_{M^j_\infty,\epsilon}\cap B_{1}(x_j)$ and $y_j\rightarrow y_*\in M^*_\infty$, then from \eqref{MinfBsy1ep2} and the definition of $M^j_\infty$ one has $$\mathcal{H}^n\left(M^j_\infty\cap B_{s}(y_j)\right)\ge(1+\epsilon/2)\omega_n s^n$$ for any $0<s\le1$ and all sufficiently large $j$. Combining Theorem \ref{Conv0} and \eqref{EQUIVMEA}, taking the limit in the above inequality implies $$\mathcal{H}^n\left(M^*_\infty\cap B_{s}(y_*)\right)\ge(1+\epsilon/2)\omega_n s^n.$$ Hence, we conclude that $y_*$ is a singular point of $M^*_\infty$. From the proof of Lemma 11.5 in \cite{Gi}, we have \begin{equation}\aligned \mathcal{H}^\beta_\infty\left(\mathcal{S}_{M^*_\infty,\epsilon}\cap B_{1}(0)\right)>2^{-\beta-1}\omega_\beta. \endaligned \end{equation} Now we can use Theorem 11.8 in \cite{Gi} to get a contradiction. Hence, $\mathcal{S}_{M_\infty}$ has Hausdorff dimension $\le n-7$ for $n\ge7$. For $n<7$, we can use the above argument to show that $\mathcal{S}_{M_\infty}$ is empty. This completes the proof. \end{proof} \begin{lemma}\label{codim2} $\mathcal{S}\cap M_\infty$ has Hausdorff dimension $\le n-2$. \end{lemma} \begin{proof} Let us prove it by contradiction. Assume there is a constant $\delta\in(0,1]$ such that $\mathcal{H}^{n-2+\delta}(\mathcal{S}\cap M_\infty)\ge\delta$. From the definition of $\mathcal{S}^k$ in \eqref{Sk} and dim$(\mathcal{S}^k)\le k$ for each nonnegative integer $k\le n-1$, the set $\Lambda_{M_\infty}\triangleq M_\infty\cap\mathcal{S}^{n-1}$ satisfies $\mathcal{H}^{n-2+\delta}(\Lambda_{M_\infty})\ge\delta$.
From Proposition 11.3 in \cite{Gi}, for $\mathcal{H}^{n-2+\delta}$-almost every point $y\in\Lambda_{M_\infty}$, there is a sequence $r_i\rightarrow0$ such that \begin{equation}\aligned\label{ri2-n-deLaMinfty} \lim_{i\rightarrow\infty}r_i^{2-n-\delta}\mathcal{H}^{n-2+\delta}_\infty(\Lambda_{M_\infty}\cap B_{r_i}(y))>0. \endaligned \end{equation} By Gromov's compactness theorem, we can assume that $\f1{r_i}(B_1(p_\infty),y)$ converges to a metric cone $(Y_y,o_y)$ in the pointed Gromov-Hausdorff sense, $\f1{r_i}(M_\infty,y)$ converges to $(M_{y,\infty},o_y)$ with closed $M_{y,\infty}$ in $Y_y$, and $\f1{r_i}\left(\Lambda_{M_\infty},y\right)$ converges to $(\Lambda_{y,M_\infty},o_y)$ with closed $\Lambda_{y,M_\infty}\subset M_{y,\infty}$ in the induced Hausdorff sense. We write $Y_y=\mathbb R^{n-1}\times CS^1_r$ for some metric cone $CS^1_r$ with cross section $S^1_r$, where $S^1_r$ is the round circle of radius $r\in(0,1)$. By the definition of $\Lambda_{M_\infty}$, $\Lambda_{y,M_\infty}$ is contained in the singular set $\mathbb R^{n-1}\times\{o_r\}$ of $Y_y$, where $o_r$ is the vertex of $CS^1_r$. From \eqref{ri2-n-deLaMinfty} and the proof of Lemma 11.5 in \cite{Gi}, we get \begin{equation}\aligned\label{Hn-2deyMinfrr} \mathcal{H}^{n-2+\delta}_\infty(\Lambda_{y,M_\infty}\cap B_{1}(0^{n-1},o_r))>0. \endaligned \end{equation} From Proposition 11.3 in \cite{Gi} again, there are a point $z\in\Lambda_{y,M_\infty}\setminus\{o_y\}\subset\mathbb R^{n-1}\times\{o_r\}$ and a sequence $\tau_i\rightarrow0$ such that \begin{equation}\aligned\label{abssss} \lim_{i\rightarrow\infty}\tau_i^{2-n-\delta}\mathcal{H}^{n-2+\delta}_\infty(\Lambda_{y,M_\infty}\cap B_{\tau_i}(z))>0.
\endaligned \end{equation} Up to a choice of subsequence, we assume that $\f1{\tau_i}(M_{y,\infty},z)$ converges to a metric cone $(M_{z,y,\infty},z)$ in $\mathbb R^{n-1}\times CS^1_r$ from Theorem \ref{Covcone}, and $\f1{\tau_i}(\Lambda_{y,M_\infty},z)$ converges to a closed set $(\Lambda_{z,y,\infty},z)$ in $\mathbb R^{n-1}\times\{o_r\}\subset\mathbb R^{n-1}\times CS^1_r$. Moreover, $M_{z,y,\infty}$ and $\Lambda_{z,y,\infty}$ both split off the same line (through $z$) isometrically. From \eqref{Hn-2deyMinfrr} and the proof of Lemma 11.5 in \cite{Gi}, we get \begin{equation}\aligned \mathcal{H}^{n-2+\delta}_\infty(\Lambda_{z,y,\infty}\cap B_{1}(z))>0. \endaligned \end{equation} By a dimension reduction argument, there is a metric cone $C_y\subset\mathbb R^{n-1}\times CS^1_r$ with $\mathbb R^{n-1}\times \{o_r\}\subset C_y$, and $C_y$ splits off a factor $\mathbb R^{n-1}$ isometrically. Moreover, there are a sequence of $(n+1)$-dimensional complete Riemannian manifolds $Q_i$ with Ricci curvature $\ge-(n-1)R_i^{-2}$ on $B_{R_i}(y_i)\subset Q_i$ for some sequence $R_i\rightarrow\infty$ such that $(Q_i,y_i)$ converges to the cone $(Y_y,o_y)$ in the pointed Gromov-Hausdorff sense, and a sequence of area-minimizing hypersurfaces $\Sigma_i$ in $B_{R_i}(y_i)$ with $\p \Sigma_i\subset\p B_{R_i}(y_i)$ such that $(\Sigma_i,y_i)$ converges in the induced Hausdorff sense to $(C_y,o_y)$. In particular, $\p C_y=\emptyset$. There is a 1-dimensional cone $C_y'\subset CS^1_r$ with $\p C_y'=\emptyset$ such that $C_{y}=\mathbb R^{n-1}\times C_y'$. Let $\Omega$ be a domain (connected open set) in $CS^1_r$ with boundary in $C_y'$. Then $\p\Omega\cap\p B_1(o_r)$ consists of two points, denoted by $\alpha^+,\alpha^-$. There is a minimizing geodesic $\gamma$ in $\overline{\Omega}$ connecting $\alpha^+,\alpha^-$, and there is a constant $\theta_r>0$ so that the length of $\gamma$ satisfies $\mathcal{H}^1(\gamma)\le2-\theta_r$.
Let $U$ denote the bounded domain in $\Omega\subset CS^1_r$ enclosed by $C_y'$ and $\gamma$. For any $\epsilon\in(0,1)$, let $\epsilon\gamma$ denote a minimizing geodesic connecting $\epsilon\alpha^+,\epsilon\alpha^-$, and $\epsilon U$ denote the bounded domain in $CS^1_r$ enclosed by $C_y'$ and $\epsilon\gamma$. For any $t>0$, we define a domain $\Omega_{t,\epsilon}$ in $Y_y\setminus(\mathbb R^{n-1}\times \{o_r\})$ by $$\Omega_{t,\epsilon}=\{(\xi,x)\in \mathbb R^{n-1}\times \Omega|\, \epsilon<d(x,o_r)<1,|\xi|<t\}.$$ From \eqref{MSHS} and the proof of Proposition \ref{MinMSinfty}, $C_y$ is an area-minimizing hypersurface in $\overline{\Omega_{t,\epsilon}}$ for any $t>0,\epsilon\in(0,1)$. Let $S_*=\{(\xi,x)\in \mathbb R^{n-1}\times CS^1_r|\, x\in\gamma\cup\epsilon\gamma,|\xi|<t\}$, and $S^*=\{(\xi,x)\in \mathbb R^{n-1}\times CS^1_r|\, x\in \overline{U}\setminus\epsilon U,|\xi|=t\}$. Set $S=S_*\cup S^*\subset\overline{\Omega_{t,\epsilon}}$; then \begin{equation}\aligned \p S=&\left\{(\xi,x)\in \mathbb R^{n-1}\times \overline{\Omega}|\, \epsilon\le d(x,o_r)\le1,|\xi|=t\right\}\\ &\cup\left\{(\xi,x)\in \mathbb R^{n-1}\times \overline{\Omega}|\, d(x,o_r)=\epsilon\ or\ 1,|\xi|\le t\right\}=C_y\cap\p\Omega_{t,\epsilon}. \endaligned \end{equation} Since $C_y$ is area-minimizing in $\overline{\Omega_{t,\epsilon}}$, we get \begin{equation}\aligned 2(1-\epsilon)\omega_{n-1}t^{n-1}=&\mathcal{H}^{n}\left(C_y\cap \overline{\Omega_{t,\epsilon}}\right)\le\mathcal{H}^{n}(S)=\mathcal{H}^{n}(S_*)+\mathcal{H}^{n}(S^*)\\ \le&(1+\epsilon)(2-\theta_r)\omega_{n-1}t^{n-1}+\pi(n-1)\omega_{n-1}t^{n-2}. \endaligned \end{equation} The above inequality is impossible for suitably large $t>0$ and suitably small $\epsilon>0$. This completes the proof. \end{proof} \section{Appendix I} Compared with Schoen-Yau's argument on the Laplacian of distance functions from fixed points (see Proposition 1.1 in \cite{SY2}), the inequality \eqref{Hxtge} holds in the distribution sense as follows.
\begin{lemma}\label{DerMge*} Let $N$ be an $(n+1)$-dimensional complete Riemannian manifold with Ricci curvature $\ge-n\delta^2_N$ on $B_R(p)$. If $V$ is an $n$-rectifiable stationary varifold in $B_{R}(p)$ with $M=\mathrm{spt} V\cap B_R(p)$, then \begin{equation}\aligned\label{DerMge} \Delta_N\rho_M\le n\delta_N\tanh\left(\delta_N\rho_M\right)\qquad on\ B_{R-t}(p)\cap B_t(M)\setminus M \endaligned \end{equation} for all $0<t<R$ in the distribution sense. \end{lemma} \begin{proof} For $\mathcal{H}^n$-a.e. $x\in \mathrm{spt}V\cap B_R(p)$, the tangent cone of $V$ at $x$ is a hyperplane in $\mathbb R^{n+1}$ with multiplicity $\theta_x>0$. Let $\theta$ denote the multiplicity function of $V$, then $\theta(x)=\theta_x$ for $\mathcal{H}^n$-a.e. $x$. Let $\mu_V$ denote the Radon measure associated with $V$ defined by $\mu_V=\mathcal{H}^n\llcorner\theta$, then $\lim_{r\rightarrow0}r^{-n}\mu_V(B_r(x))=\theta_x\omega_n$. Note that the density of stationary varifolds in Euclidean space is upper semicontinuous (see \cite{S}). Similarly, we can let $\theta$ be the density of $V$, and $\theta$ is upper semicontinuous on $\mathrm{spt}V\cap B_R(p)$. By Allard's regularity theorem, $M$ is smooth in a neighborhood of $x$. From the constancy theorem (see \cite{S}), $M$ is a minimal hypersurface in a neighborhood of $x$. Hence, the singular set $\mathcal{S}_M$ of $M$ is closed in $B_R(p)$. For any $i\in \mathbb{N}^+$, there is a covering $\{B_{r_{i,j}}(x_{i,j})\}_{j=1}^{l_i}$ of $\mathcal{S}_M$ with $\lim_{i\rightarrow\infty}l_i=\infty$ such that $\omega_n\sum_{j=1}^{l_i} r_{i,j}^n<2^{-i}$. Hence there is a sequence of smooth embedded hypersurfaces $M_i$ converging to $M$ in the Hausdorff sense with $M_i=M$ outside $\bigcup_{j=1}^{l_i} B_{r_{i,j}}(x_{i,j})$. Let $\rho_{M_i}$ be the distance function from $M_i$. Let $K$ be a closed set in $B_{R-t}(p)\cap B_t(M)\setminus M$ for some fixed $t\in(0,R)$.
We claim \begin{equation}\label{equivMiM} \rho_{M_i}=\rho_{M}\qquad \mathrm{on} \ \ K \end{equation} for all sufficiently large $i$ depending on $n,t,R,K$. Let us prove \eqref{equivMiM}. For any $x\in K$, there is a point $y_x\in M$ such that $\rho_M(x)=d(x,y_x)$. Then any tangent cone of $M$ at $y_x$ is on one side of $\mathbb R^n$ in $\mathbb R^{n+1}$. By the maximum principle for (singular) minimal hypersurfaces, the tangent cone of $M$ at $y_x$ is $\mathbb R^n$, which implies $y_x\in \mathcal{R}_{M}= M\setminus \mathcal{S}_M$. Then $y_x\notin \bigcup_{j=1}^{l_i} B_{r_{i,j}}(x_{i,j})$ for all sufficiently large $i$ depending on $n,t,R,K$, which implies $y_x\in M_i$ and $d(x,y)>\rho_M(x)$ for any $y\in\bigcup_{j=1}^{l_i} B_{r_{i,j}}(x_{i,j})$. Hence it follows that $\rho_{M_i}(x)=d(x,y_x)=\rho_M(x)$, and this confirms the claim \eqref{equivMiM}. Let $\mathcal{C}_{M}$ denote the cut locus of $\rho_{M}$, the set of all points which are joined to $M$ by at least two minimizing geodesics. From the above argument and \eqref{Hxtge}, we immediately have \begin{equation}\label{DeMideN} -\Delta\rho_{M}\ge-n\delta_N\tanh\left(\delta_N\rho_{M}\right)\qquad \mathrm{on}\ B_{R-t}(p)\cap B_t(M)\setminus(M\cup\mathcal{C}_{M}) \end{equation} for all $0<t<R$. Let $\mathcal{C}_{M_i}$ denote the cut locus of $\rho_{M_i}$ for each $i$. For any closed set $K\subset B_{R-t}(p)\cap B_t(M)\setminus M$, $\mathcal{C}_{M_i}\cap K$ is a closed set of Hausdorff dimension $\le n$ and $\mathcal{C}_{M_i}\cap K$ is smooth up to a zero set of Hausdorff dimension $\le n$ (see \cite{MM} for instance). From \eqref{equivMiM}, $\mathcal{C}_{M}\cap K=\mathcal{C}_{M_i}\cap K$ for all sufficiently large $i$. Hence, $\mathcal{C}_{M}$ is closed in $B_{R-t}(p)\cap B_t(M)\setminus M$ of Hausdorff dimension $\le n$ and $\mathcal{C}_{M}$ is smooth up to a zero set of Hausdorff dimension $\le n$.
Let $\mathcal{E}_s\triangleq B_s(\mathcal{C}_{M})$ denote the $s$-neighborhood of $\mathcal{C}_{M}$ in $N$, then (as the Hausdorff measure is Borel-regular) \begin{equation}\aligned\label{VOlKt0} \lim_{s\rightarrow0}\mathcal{H}^{n+1}(\mathcal{E}_s)=0. \endaligned \end{equation} From the co-area formula and Theorem 4.4 in \cite{Gi}, $\p\mathcal{E}_s$ is countably $n$-rectifiable for almost all $s>0$. For any point $y\in\p\mathcal{E}_s$, there is a unique normalized geodesic $\gamma_y\subset N$ connecting $\gamma_y(0)\in M$ and $\gamma_y(\rho_M(y)+s)\in \mathcal{C}_{M}$ with $y=\gamma_y(\rho_M(y))$. In particular, (compared with the argument of the proof of Proposition 1.1 in \cite{SY2}) \begin{equation}\aligned\label{nuijge0} \langle \nu_{s},\nabla\rho_{M}\rangle\ge0 \qquad \mathcal{H}^n-a.e.\ \ \mathrm{on}\ \p \mathcal{E}_s, \endaligned \end{equation} where $\nu_s$ is the inner unit normal vector $\mathcal{H}^n$-a.e. to $\p \mathcal{E}_s$. Let $U$ be an open set in $B_{R-t}(p)\cap B_t(M)\setminus M$. Let $\phi$ be a nonnegative Lipschitz function on $U$ with $\phi=0$ on $\p U$, then with \eqref{VOlKt0} \begin{equation}\aligned \int_U\left\langle\nabla\rho_{M},\nabla\phi\right\rangle=\lim_{s\rightarrow0}\int_{U\setminus \mathcal{E}_s}\left\langle\nabla\rho_{M},\nabla\phi\right\rangle. \endaligned \end{equation} With \eqref{DeMideN} and \eqref{nuijge0}, integrating by parts implies \begin{equation}\aligned \int_{U\setminus \mathcal{E}_s}\left\langle\nabla\rho_{M},\nabla\phi\right\rangle=&\int_{U\cap\p \mathcal{E}_s}\phi\langle \nu_s,\nabla\rho_{M}\rangle-\int_{U\setminus \mathcal{E}_s}\phi\Delta\rho_{M}\\ \ge&-n\delta_N\int_{U\setminus \mathcal{E}_s}\phi\tanh\left(\delta_N\rho_M\right). \endaligned \end{equation} Letting $s\rightarrow0$ in the above inequality yields \begin{equation}\aligned \int_U\left\langle\nabla\rho_{M},\nabla\phi\right\rangle\ge-n\delta_N\int_U\phi\tanh\left(\delta_N\rho_{M}\right). \endaligned \end{equation} This completes the proof.
\end{proof} In general, the inequality \eqref{DerMge} cannot hold true in $B_{R-t}(p)\cap B_t(M)$. For example, let $N$ be the standard Euclidean space $\mathbb R^{n+1}$, and let $M=\mathbb R^n\times\{0\}\subset\mathbb R^{n+1}$; then $\rho_M(x_1,\cdots,x_{n+1})=|x_{n+1}|$. Clearly, $|x_{n+1}|$ is not a superharmonic function on $\mathbb R^{n+1}$ in the distribution sense; in fact, $\Delta|x_{n+1}|=2\mathcal{H}^n\llcorner\left(\mathbb R^n\times\{0\}\right)\ge0$ as a distribution. \section{Appendix II} Let $N_i$ be a sequence of $(n+1)$-dimensional smooth Riemannian manifolds with $\mathrm{Ric}\ge-n\kappa^2$ on the metric ball $B_{1+\kappa'}(p_i)\subset N_i$ for constants $\kappa\ge0$, $\kappa'>0$. Up to a choice of subsequence, we assume that $\overline{B_1(p_i)}$ converges to a metric ball $\overline{B_1(p_\infty)}$ in the Gromov-Hausdorff sense. Namely, there is a sequence of $\epsilon_i$-Hausdorff approximations $\Phi_i:\, B_1(p_i)\rightarrow B_1(p_\infty)$ for some sequence $\epsilon_i\rightarrow0$. Let $\nu_\infty$ denote the renormalized limit measure on $B_1(p_\infty)$ obtained from the renormalized measures as \eqref{nuinfty}. For any set $K$ in $\overline{B_1(p_\infty)}$, let $B_\delta(K)$ be the $\delta$-neighborhood of $K$ in $\overline{B_1(p_\infty)}$ defined by $\{y\in \overline{B_1(p_\infty)}|\ d(y,K)<\delta\}$. Here, $d$ denotes the distance function on $\overline{B_1(p_\infty)}$. Let us introduce several useful results as follows. For the sake of completeness and self-containedness, we give the proofs here. \begin{lemma}\label{nuinftyKthj} Let $K$ be a closed subset of $B_1(p_\infty)$ with $\inf_{x\in K}d(x,\p B_1(p_\infty))=\epsilon_0>0$. For each $\epsilon\in(0,\epsilon_0/3]$, there is a sequence of mutually disjoint balls $\{B_{\theta_j}(x_j)\}_{j=1}^\infty$ with $x_j\in K$ and $\theta_j<\epsilon$ such that $K\subset \bigcup_{1\le j\le k}B_{\theta_j+2\theta_k}(x_j)$ for all sufficiently large $k$, and \begin{equation}\aligned\label{nuinfKUP} \nu_\infty(K)\le\sum_{j=1}^{\infty}\nu_\infty\left(B_{\theta_j}(x_j)\right).
\endaligned \end{equation} Moreover, if $\mathcal{H}^m(K)<\infty$ and $\inf\{r^{-m}\mathcal{H}^m(K\cap B_r(x))|\, B_r(x)\subset B_1(p_\infty)\}>0$ for some integer $0<m\le n+1$, then \begin{equation}\aligned\label{Hkepkjinfthjk} \mathcal{H}^m_\epsilon(K)\le\omega_m\sum_{j=1}^{\infty}\theta_j^m. \endaligned \end{equation} \end{lemma} \begin{proof} For each $\epsilon>0$, let $\mathcal{U}_\epsilon$ be the collection of balls defined by $$\mathcal{U}_\epsilon=\left\{B_r(x)\subset B_1(p_\infty)\big|\ x\in K,\ 0<r\le\epsilon\right\}.$$ Now we adopt the idea in the proof of Vitali's covering lemma. First, we take $B_{\theta_1}(x_1)\in\mathcal{U}_\epsilon$ such that $\theta_1$ is the largest radius of balls belonging to $\mathcal{U}_\epsilon$. Suppose that $B_{\theta_1}(x_1),B_{\theta_2}(x_2),\cdots$, $B_{\theta_{k-1}}(x_{k-1})$ have already been chosen. Then we select $B_{\theta_k}(x_k)\in\mathcal{U}_\epsilon$ such that $\theta_k$ is the largest radius of balls belonging to $\mathcal{U}_\epsilon$ with $B_{\theta_k}(x_k)\cap B_{\theta_j}(x_j)=\emptyset$ for each $1\le j\le k-1$. Hence we can obtain an infinite sequence of mutually disjoint balls $\{B_{\theta_j}(x_j)\}_{j\ge1}$. From the choice of $\theta_j$, $\lim_{j\rightarrow\infty}\theta_j=0$ and $\theta_j\ge \theta_{j+1}$ for each $j\ge1$. Then there is an integer $N_\theta>0$ such that $\theta_{k}<\epsilon$ for all $k\ge N_\theta$. For a point $x\in K\setminus\bigcup_{j=1}^{k}B_{\theta_j}(x_j)$ with $k\ge N_\theta$, $\bigcup_{j=1}^{k}\overline{B_{\theta_j}(x_j)}\cap \overline{B_{\theta_k}(x)}\neq\emptyset$ according to the choice of $\{B_{\theta_j}(x_j)\}_{j=1}^k$, which implies $x\in\bigcup_{j=1}^{k}B_{\theta_j+2\theta_k}(x_j)$. Hence, \begin{equation}\aligned\label{detauFinfty} K\subset\bigcup_{j=1}^{k}B_{\theta_j+2\theta_k}(x_j).
\endaligned \end{equation} Combining \eqref{detauFinfty} and the Bishop-Gromov comparison, there is a constant $\beta_{n,\kappa}>0$ depending only on $n,\kappa$ such that \begin{equation}\aligned\label{nuinf*} \nu_\infty(K)\le&\sum_{j=1}^{k}\nu_\infty\left(B_{\theta_{j}+2\theta_k}(x_j)\right)\le\sum_{j=1}^{k}\f{V^{n+1}_\kappa(\theta_j+2\theta_k)}{V^{n+1}_\kappa(\theta_j)}\nu_\infty\left(B_{\theta_j}(x_j)\right)\\ <&\sum_{j=1}^{k}\left(1+\beta_{n,\kappa}\f{\theta_k}{\theta_j}\right)\nu_\infty\left(B_{\theta_j}(x_j)\right), \endaligned \end{equation} where $V^{n+1}_\kappa(r)$ denotes the volume of a geodesic ball with radius $r$ in an $(n+1)$-dimensional space form with constant sectional curvature $-\kappa^2$. For any $\delta>0$, from $\nu_\infty(B_1(p_\infty))=1$ there is an integer $N_\theta'\ge N_\theta$ so that $$\sum_{j=N_\theta'}^\infty \nu_\infty\left(B_{\theta_j}(x_j)\right)<\delta.$$ Combining this with $\theta_j\ge \theta_{j+1}$, we have \begin{equation}\aligned\label{knuinf000*} &\sum_{j=1}^{k}\f{\theta_k}{\theta_j}\nu_\infty\left(B_{\theta_j}(x_j)\right) \le\sum_{j=1}^{N_\theta'-1}\f{\theta_k}{\theta_j}\nu_\infty\left(B_{\theta_j}(x_j)\right)+\sum_{j=N_\theta'}^k\f{\theta_k}{\theta_j}\nu_\infty\left(B_{\theta_j}(x_j)\right)\\ \le&\f{\theta_k}{\theta_{N_\theta'}}\sum_{j=1}^{N_\theta'-1}\nu_\infty\left(B_{\theta_j}(x_j)\right)+\sum_{j=N_\theta'}^k\nu_\infty\left(B_{\theta_j}(x_j)\right) \le\f{\theta_k}{\theta_{N_\theta'}}+\delta \endaligned \end{equation} for each $k\ge N_\theta'$. With $\lim_{j\rightarrow\infty}\theta_j=0$, it is easy to see \begin{equation}\aligned\label{knuinf000} \lim_{k\rightarrow\infty}\sum_{j=1}^{k}\f{\theta_k}{\theta_j}\nu_\infty\left(B_{\theta_j}(x_j)\right)=0.
\endaligned \end{equation} Combining \eqref{nuinf*}\eqref{knuinf000} we have \begin{equation}\aligned \nu_\infty(K)\le\lim_{k\rightarrow\infty}\sum_{j=1}^{k}\left(1+\beta_{n,\kappa}\f{\theta_k}{\theta_j}\right)\nu_\infty\left(B_{\theta_j}(x_j)\right) =\sum_{j=1}^{\infty}\nu_\infty\left(B_{\theta_j}(x_j)\right). \endaligned \end{equation} Now we assume $\mathcal{H}^m(K)<\infty$ for some integer $0<m\le n+1$, and $\delta_K\triangleq\inf\{r^{-m}\mathcal{H}^m(K\cap B_r(x))|\, B_r(x)\subset B_1(p_\infty)\}>0$. Then from \eqref{knuinf000*}\eqref{knuinf000}, we have \begin{equation}\aligned\label{ktoinfthjk=0} \lim_{k\rightarrow\infty}\sum_{j=1}^{k}\theta_k\theta_j^{m-1}\le\delta_K^{-1}\lim_{k\rightarrow\infty}\sum_{j=1}^{k}\f{\theta_k}{\theta_j}\mathcal{H}^m\left(B_{\theta_j}(x_j)\right)=0. \endaligned \end{equation} There is a constant $\beta_m$ depending on $m$ such that \begin{equation}\aligned\label{ktoinfthjk=0*} \mathcal{H}^m_\epsilon(K)\le&\omega_m\sum_{j=1}^{k}(\theta_{j}+2\theta_k)^m\le\omega_m\sum_{j=1}^{k}\left(1+\beta_m\f{\theta_k}{\theta_j}\right)\theta_j^m. \endaligned \end{equation} With \eqref{ktoinfthjk=0}\eqref{ktoinfthjk=0*}, we get \eqref{Hkepkjinfthjk}. \end{proof} \begin{lemma}\label{Cont*} Let $F_i$ be a subset in $B_1(p_i)\subset N_i$ such that $\Phi_i(F_i)$ converges in the Hausdorff sense to a closed set $F_\infty$ in $B_1(p_\infty)$, then $$\nu_\infty(F_\infty)\ge\limsup_{i\rightarrow\infty}\mathcal{H}^{n+1}(F_i)/\mathcal{H}^{n+1}(B_1(p_i)).$$ \end{lemma} \begin{proof} Note that $\nu_\infty$ is an outer measure defined as in (1.9) in \cite{CCo1}. For any $\epsilon>0$, by the finite covering lemma there exist a constant $\delta>0$, and a covering $\{B_{r_j}(x_{j})\}_{j=1}^{k_\delta}$ of $F_\infty$ with finite $k_\delta\in\mathbb N$ and $r_j\in(0,\delta)$ such that \begin{equation}\aligned\label{10.1} \nu_\infty(F_\infty)\ge\sum_{j=1}^{k_\delta}\nu_\infty(B_{r_j}(x_j))-\epsilon.
\endaligned \end{equation} Without loss of generality, we assume that $\bigcup_{j=1}^{k_\delta}B_{r_{j}}(x_{j})$ contains $B_\epsilon(F_\infty)$ after choosing a sufficiently small $\epsilon>0$, where $B_\epsilon(F_\infty)$ is the $\epsilon$-neighborhood of $F_\infty$. Since $B_1(p_i)$ converges to $B_1(p_\infty)$ in the Gromov-Hausdorff sense, there is a sequence $x_{i,j}\in B_1(p_i)$ with $x_{i,j}\rightarrow x_j$. Combining this with $\Phi_i(F_i)\rightarrow F_\infty$ in the Hausdorff sense, we get $F_i\subset \bigcup_{j=1}^{k_\delta}B_{r_j}(x_{i,j})$. With \begin{equation}\aligned \lim_{i\rightarrow\infty}\mathcal{H}^{n+1}(B_{r_j}(x_{i,j}))/\mathcal{H}^{n+1}(B_1(p_i))=\nu_\infty(B_{r_j}(x_j)) \endaligned \end{equation} for each $j$, we get \begin{equation}\aligned\label{10.2} \mathcal{H}^{n+1}(F_i)/\mathcal{H}^{n+1}(B_1(p_i))\le\sum_{j=1}^{k_\delta}\mathcal{H}^{n+1}(B_{r_j}(x_{i,j}))/\mathcal{H}^{n+1}(B_1(p_i))\le\epsilon+\sum_{j=1}^{k_\delta}\nu_\infty(B_{r_j}(x_j)) \endaligned \end{equation} for all sufficiently large $i$. Combining \eqref{10.1}\eqref{10.2} we have $$\nu_\infty(F_\infty)+2\epsilon\ge\limsup_{i\rightarrow\infty}\mathcal{H}^{n+1}(F_i)/\mathcal{H}^{n+1}(B_1(p_i)).$$ Letting $\epsilon\rightarrow0$ completes the proof. \end{proof} \begin{lemma}\label{subset} Let $F_i$ be a subset in $B_1(p_i)\subset N_i$ such that $\Phi_i(F_i)$ and $\Phi_i(\p F_i)$ converge in the Hausdorff sense to closed sets $F_\infty$ and $T_\infty$ in $B_1(p_\infty)$, respectively. Then $\p F_\infty\subset T_\infty$. \end{lemma} \begin{proof} For any $\epsilon>0$, there is an $i_0=i_0(\epsilon)>0$ such that $\Phi_i(\overline{F_i})\subset B_\epsilon(F_\infty)$ for all $i\ge i_0$. For any $y\in \p F_\infty$, if there is no sequence in $\p F_i$ converging to $y$, then there are a constant $\delta>0$ and a sequence $y_i\in F_i$ converging to $y$ such that the distance to $\p F_i$ at $y_i$ satisfies $d(y_i,\p F_i)\ge\delta$, which implies $B_\delta(y_i)\subset F_i$.
Hence, we get $B_\delta(y)\subset F_\infty$, which contradicts $y\in \p F_\infty$. So there is a sequence $z_i\in\p F_i$ so that $z_i\rightarrow y$, which implies $y\in\lim_{i\rightarrow\infty}\p F_i=T_\infty$. This completes the proof. \end{proof} \end{document}
Gregory Leal Calculating the Arc Length of a Curve In this lesson, we'll use the concept of the definite integral to calculate the arc length of a curve. Let's say that the curve \(P_1P_n\) in Figure 1 is any arbitrary curve. Let's subdivide this curve into an \(n\) number of smaller arcs \(P_1P_2,P_2P_3,…,P_{n-1}P_n\) as illustrated in Figure 1. By drawing a straight line from \(P_i\) to \(P_{i+1}\) for each arc \(P_iP_{i+1}\) (where \(i=1,…,n\)), we can draw an \(n\) number of chords which will roughly be equal to the length of each arc \(P_iP_{i+1}\) if \(n\) is very large. If we use the notation \(L_i(P_iP_{i+1})\) to represent the length of each chord drawn in Figure 1, then by summing the lengths of the chords, \(P_1P_2,P_2P_3,…,P_{n-1}P_n\), we can obtain a rough estimate of the total arc length of the curve \(P_1P_n\). Thus, $$\text{Arc length of }P_1P_n≈L_1(P_1P_2)+…+L_n(P_{n-1}P_n)$$ or, equivalently, $$\text{Arc length of }P_1P_n≈\sum_{i=1}^nL_i(P_iP_{i+1}).\tag{1}$$ Figure 1: The curve \(P_1P_n\) split up into an \(n\) number of chords \(P_iP_{i+1}\) where \(i=1,...,n\). By taking the sum of the lengths of each chord (represented by \(\sum_{i=1}^nL(P_iP_{i+1})\)) and then taking the limit as \(n→∞\), we obtain the exact arc length \(s\) of the entire curve. Let's briefly review the concept of a limit. If I write the limit, $$\lim_{z→k}\text{ ('something')}=?,$$ then the thing that the limit is equal to is whatever the "something" gets closer and closer to equaling as \(z\) approaches \(k\). Now, as \(n→∞\) (which is to say, as the number of subdivisions in the arc \(P_1P_n\) approaches infinity), what does the sum \(\sum_{i=1}^nL_i(P_iP_{i+1})\) get closer and closer to equaling? Answering this question is very important since whatever \(\sum_{i=1}^nL_i(P_iP_{i+1})\) gets closer and closer to equaling as \(n→∞\) must be the thing that the limit \(\lim_{n→∞}\sum_{i=1}^nL_i(P_iP_{i+1})\) is equal to.
Well, as the number of subdivisions becomes greater and greater, the length of each chord, \(L_i(P_iP_{i+1})\), will get closer and closer to equaling the length of each arc \(P_iP_{i+1}\). Thus, as the number of subdivisions keeps increasing and as \(n→∞\), the sum \(\sum_{i=1}^nL_i(P_iP_{i+1})\) will get closer and closer to equaling the exact arc length of the curve \(P_1P_n\). Thus, $$\lim_{n→∞}\sum_{i=1}^nL_i(P_iP_{i+1})=\text{Arc length of curve }P_1P_n.$$ Since it is typical to denote the arc length of a curve with the letter \(s\), we'll rewrite the above equation as $$s=\lim_{n→∞}\sum_{i=1}^nL_i(P_iP_{i+1}).\tag{2}$$ Equation (2) represents, conceptually, how the arc length can be obtained by taking the infinite sum of the lengths of infinitesimally small chords. But since a definite integral involves taking the limit of a sum of the form $$S_n=\sum_{i=1}^ng(x_i)Δx,\tag{3}$$ we must find a way to represent Equation (1) in the same form as Equation (3). Then, after that, by taking the limit of the sum as \(n→∞\) as we did in Equation (2), we'll obtain a definite integral of the form $$\int_a^bg(x)dx.$$ Figure 2: A close up view of the \(i^{th}\) chord subdividing the curve \(P_1P_n\). Since the chord \(P_iP_{i+1}\) forms a right triangle, by using the Pythagorean theorem we can express \(L(P_iP_{i+1})\) as \(\sqrt{(Δx_i)^2+(Δy_i)^2}\). Let's try to represent the quantity \(L_i(P_iP_{i+1})\) in the same form as \(g(x_i)Δx_i\). In Figure 2, I have drawn a "zoomed in" image of the \(i^{th}\) chord \(P_iP_{i+1}\). As you can see from Figure 2, the typical chord \(P_iP_{i+1}\) can be expressed in terms of \(x\) and \(y\) as $$L_i(P_iP_{i+1})=\sqrt{(Δx_i)^2+(Δy_i)^2}.\tag{4}$$ Equation (4) brought us one step closer to expressing \(L_i(P_iP_{i+1})\) in the form \(g(x_i)Δx_i\); the only problem is the \(Δy_i\) term in Equation (4). Let's try to represent the radical in Equation (4) entirely in terms of \(x\) by doing some algebraic manipulations.
If we multiply the right-hand side of Equation (4) by \(\frac{\sqrt{(Δx_i)^2}}{\sqrt{(Δx_i)^2}}\) (\(=1\)), then we can simplify the radical in Equation (4) to $$\frac{\sqrt{(Δx_i)^2}}{\sqrt{(Δx_i)^2}}\sqrt{(Δx_i)^2+(Δy_i)^2}=\sqrt{\frac{1}{(Δx_i)^2}\biggl((Δx_i)^2+(Δy_i)^2\biggr)}Δx_i$$ $$=\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i}\biggr)^2}Δx_i $$ Thus, $$L_i(P_iP_{i+1})=\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i}\biggr)^2}Δx_i.\tag{5}$$ What is nice about Equation (5) is that it can be expressed entirely in terms of \(x\) since the slope, \(Δy_i(x)/Δx_i\), of the chord \(P_iP_{i+1}\) can be represented entirely in terms of \(x\). Substituting Equation (5) into (2), we have $$s=\lim_{n→∞}\sum_{i=1}^n\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i}\biggr)^2}Δx_i.\tag{6}$$ The sum in Equation (6) is precisely the same form as Equation (3). Thus, the limit in Equation (6) must be equal to the definite integral $$\lim_{n→∞}\sum_{i=1}^n\sqrt{1+\biggl(\frac{Δy_i(x)}{Δx_i}\biggr)^2}Δx_i=\int_a^b\sqrt{1+(f'(x))^2}dx.\tag{7}$$ (As the number \(n\) of subdivisions of the arc \(P_1P_n\) approaches infinity, the length of the typical chord, \(L_i(P_iP_{i+1})\), becomes infinitesimally small. This means that the other side lengths \(Δx_i\) and \(Δy_i\) in the right triangle in Figure 2 must also become infinitesimally small. In other words, as \(n→∞\), \(Δx_i→dx\) and \(Δy_i→dy\). Thus, \(Δy_i/Δx_i→dy/dx\) and the slope becomes the derivative \(f'(x)\). I thought I would take the time to explain this since it's a common source of confusion as to where the \(f'(x)\) in Equation (7) comes from.) The definite integral in Equation (7) is used to calculate the arc length \(s\) of any arbitrary curve. Provided that you know what the function \(f'(x)\) is, then you'll be able to determine the arc length so long as the definite integral in Equation (7) is solvable.
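As a quick numerical sanity check on Equations (2) and (7) (this code is my own illustration and not part of the original lesson; the function names are made up), we can approximate the arc length of \(y=x^2\) on \([0,1]\) both by summing chord lengths and by applying the midpoint rule to the integral, then compare both against the known closed form \(\sqrt{5}/2+\sinh^{-1}(2)/4\):

```python
import math

def chord_sum(f, a, b, n):
    """Equation (2): sum the lengths of n chords joining successive points on the curve."""
    h = (b - a) / n
    return sum(
        math.hypot(h, f(a + (i + 1) * h) - f(a + i * h))
        for i in range(n)
    )

def arc_length_integral(df, a, b, n=100_000):
    """Equation (7): midpoint-rule estimate of the integral of sqrt(1 + f'(x)^2) over [a, b]."""
    h = (b - a) / n
    return h * sum(
        math.sqrt(1.0 + df(a + (i + 0.5) * h) ** 2)
        for i in range(n)
    )

# Example curve: f(x) = x^2 on [0, 1], with f'(x) = 2x.
# The exact arc length is sqrt(5)/2 + asinh(2)/4, roughly 1.4789.
chords = chord_sum(lambda x: x * x, 0.0, 1.0, 100_000)
integral = arc_length_integral(lambda x: 2.0 * x, 0.0, 1.0)
```

Both estimates agree with the exact value to many decimal places, which is a handy way to catch algebra mistakes before trusting a hand-derived arc length.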
If the definite integral in Equation (7) cannot be solved in closed form (and most such integrals cannot be), then you'll have to resort to numerical methods (ideally using a computer) to estimate its value. Source: https://www.gregschool.org/gregschoollessons/2017/10/6/calculating-the-arc-length-of-a-curve-hk54b-l6tsz
Antimatter is matter that is composed of the antiparticles of those that constitute normal matter. In 1929-31, Paul Dirac put forward a theory that for each type of particle, there is an antiparticle for which each additive quantum number has the negative of the value it has for the normal matter particle. The sign reversal applies only to quantum numbers (properties) which are additive, such as charge, but not to mass, for example. So, the antiparticle of the normal electron is called the positron, as it has a positive charge, but the same mass as the electron. An atom of antihydrogen, for instance, is composed of a negatively-charged antiproton being orbited by a positively-charged positron. Paul Dirac's theory has been experimentally verified and today a wide range of antiparticles have been detected. This is one of the few examples of a fundamental particle being predicted in theory and later discovered by experiment. If a particle/antiparticle pair comes in contact with each other, the two annihilate and produce a burst of energy, which may manifest itself in the form of other particles and antiparticles or electromagnetic radiation. In these reactions, rest mass is not conserved, although (as in any other reaction), mass-energy is conserved. Scientists in 1995 succeeded in producing anti-atoms of hydrogen, and also antideuteron nuclei, made out of an antiproton and an antineutron, but no anti-atom more complex than antideuteron has been created yet. In principle, sufficiently large quantities of antimatter could produce anti-nuclei of other elements, which would have exactly the same properties as their positive-matter counterparts. However, such a "periodic table of anti-elements" is thought to be, at best, highly unlikely, as the quantities of antimatter required would be, quite literally, astronomical. 
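Dirac's rule (negate each additive quantum number, keep non-additive properties such as mass) can be captured in a toy sketch. This is purely illustrative; the field names and the `antiparticle` helper are my own, not from any physics library:

```python
# Additive quantum numbers flip sign under particle -> antiparticle;
# non-additive properties such as mass are unchanged.
ADDITIVE = ("charge", "baryon_number", "lepton_number")

def antiparticle(p):
    """Return the antiparticle of a particle described as a dict of properties."""
    return {k: (-v if k in ADDITIVE else v) for k, v in p.items()}

electron = {"mass_MeV": 0.511, "charge": -1, "baryon_number": 0, "lepton_number": 1}
positron = antiparticle(electron)  # positive charge, same mass as the electron
```

Applying the map twice returns the original particle, mirroring the fact that the antiparticle of an antiparticle is the original particle.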
Antiparticles are created elsewhere in the universe where there are high-energy particle collisions, such as in the center of our galaxy, but none have been detected that are residual from the Big Bang, as most normal matter is [1] (http://science.nasa.gov/headlines/y2000/ast29may_1m.htm). The unequal distribution between matter and antimatter in the universe has long been a mystery. The solution likely lies in the violation of CP-symmetry by the laws of nature [2] (http://news.bbc.co.uk/2/hi/science/nature/2159498.stm). Positrons and antiprotons can individually be stored in a device called a Penning trap, which uses a combination of magnetic and electric fields to hold charged particles in a vacuum. Two international collaborations (ATRAP and ATHENA) used these devices to produce thousands of slowly moving antihydrogen atoms in 2002. It is the goal of these collaborations to probe the energy level structure of antihydrogen to compare it with that of hydrogen as a test of the CPT theorem. One way to do this is to confine the anti-atoms in an inhomogeneous magnetic field (one cannot use electric fields since the anti-atom is neutral) and interrogate them with lasers. If the anti-atoms have too much kinetic energy they will be able to escape the magnetic trap, and it is therefore essential that the anti-atoms are produced with as little energy as possible. This is the key difference between the antihydrogen that ATRAP and ATHENA produced, which was made at very low temperatures, and the antihydrogen produced in 1995, which was moving at a speed close to the speed of light. Antimatter/matter reactions have practical applications in medical imaging; see Positron emission tomography (PET). In some kinds of beta decay, a nuclide loses surplus positive charge by emitting a positron (in the same event, a proton becomes a neutron, and neutrinos are also given off).
Nuclides with surplus positive charge are easily made in a cyclotron and are widely generated for medical use. Notation Physicists need a notation to distinguish particles from antiparticles. One way is to denote an antiparticle by adding a bar (or macron) over the symbol for the particle. For example, the proton and antiproton are denoted as p and p̄, respectively. Another convention is to distinguish particles by their electric charge. Thus, the electron and positron are denoted simply as e− and e+. Adding a bar over the e+ symbol would be redundant and is not done. Antimatter as fuel In antimatter-matter collisions, the entire rest mass of the particles is converted to energy. The energy released per unit mass is about 10 orders of magnitude greater than the energy that can be liberated today using chemical reactions, and about 2 orders of magnitude greater than that from nuclear fission or fusion. The reaction of 1 kg of antimatter with 1 kg of matter would produce 1.8×10¹⁷ J of energy (by the equation E=mc²). In contrast, burning a kilogram of gasoline produces 4.2×10⁷ J, and nuclear fusion of a kilogram of hydrogen would produce 2.6×10¹⁵ J. Not all of that energy can be utilized by any realistic technology, because as much as 50% of the energy produced in reactions between nucleons and antinucleons is carried away by neutrinos, so, for all intents and purposes, it can be considered lost. [3] (http://gltrs.grc.nasa.gov/reports/1996/TM-107030.pdf) The scarcity of antimatter means that it is not readily available to be used as fuel, although it could be used in antimatter-catalyzed nuclear pulse propulsion. Generating a single antiproton is immensely difficult and requires particle accelerators and vast amounts of energy—millions of times more than is released after it is annihilated with ordinary matter, due to inefficiencies in the process.
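The 1.8×10¹⁷ J figure quoted above follows directly from E = mc²; here is the arithmetic written out (illustrative only):

```python
# E = m c^2 for the 2 kg annihilated in the example (1 kg matter + 1 kg antimatter).
c = 299_792_458.0    # speed of light in m/s (exact SI value)
m = 2.0              # total mass converted to energy, in kg
energy = m * c ** 2  # energy released, in joules (about 1.8e17 J)
```

Dividing this figure by the 4.2×10⁷ J released by burning a kilogram of gasoline makes the orders-of-magnitude gap mentioned above concrete.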
Known methods of producing antimatter from energy also produce an equal amount of normal matter, so the theoretical limit is that half of the input energy is converted to antimatter. Counterbalancing this, when antimatter annihilates with ordinary matter, energy equivalent to twice the mass of the antimatter is liberated—so energy storage in the form of antimatter could (in theory) be 100% efficient. Antimatter production is currently very limited, but has been growing at a nearly geometric rate since the discovery of the first antiproton in 1955 [4] (http://ffden-2.phys.uaf.edu/212_fall2003.web.dir/tyler_freeman/history.htm). The current antimatter production rate is between 1 and 10 nanograms per year, and this is expected to increase dramatically with new facilities at CERN and Fermilab. With current technology, it is considered possible to attain antimatter for $25 billion per gram (roughly 1,000 times more costly than current space shuttle propellants) by optimizing the collision and collection parameters, given current electricity generation costs. Antimatter production costs, in mass production, are almost linearly tied to electricity costs, so economical pure-antimatter thrust applications are unlikely to come online without the advent of such technologies as deuterium-deuterium fusion power. Since the energy density is vastly higher than that of these other fuels, the thrust-to-weight equation used in antimatter rocketry and spacecraft design would be very different. In fact, the energy in a few grams of antimatter is enough to transport an unmanned spacecraft to Mars in about a month; the Mars Global Surveyor took eleven months to reach Mars. It is hoped that antimatter could be used as fuel for interplanetary travel or possibly interstellar travel, but it is also feared that if humanity ever gains the capability to do so, antimatter weapons could be constructed.
Antimatter in popular culture A famous fictional example of antimatter in action is in the science fiction franchise Star Trek, where it is a common energy source for starships. Antimatter engines also appear in various books of the Dragonriders of Pern series by Anne McCaffrey. Dan Brown explores the use of antimatter as a weapon in his novel Angels and Demons, where terrorists threaten to destroy the Vatican with an antimatter bomb stolen from CERN. In The Night's Dawn Trilogy by Peter F. Hamilton, antimatter is characterized as the most dangerous substance imaginable and outlawed across the Galaxy. See also: Stückelberg-Feynman interpretation. Retrieved from "https://academickids.com:443/encyclopedia/index.php/Antimatter"
Computational Complexity: A Conceptual Perspective Oded Goldreich This book is rooted in the thesis that complexity theory is extremely rich in conceptual content, and that this content should be explicitly communicated in expositions and courses on the subject. It focuses on several sub-areas of complexity theory, starting from the intuitive questions addressed by each sub-area. The exposition discusses the fundamental importance of these questions, the choices made in the actual formulation of these questions and notions, the approaches that underlie the answers, and the ideas that are embedded in these answers. Published in the US in May 2008 (see the publisher's page for this volume). Below is the book's tentative preface and Organization. Drafts (and additional related texts) are available from HERE. See also the list of corrections and additions (to the print version). Last updated: June 2008. Preface (tentative, Jan. 2007) The strive for efficiency is ancient and universal, as time and other resources are always in shortage. Thus, the question of which tasks can be performed efficiently is central to the human experience. A key step towards the systematic study of the aforementioned question is a rigorous definition of the notion of a task and of procedures for solving tasks. These definitions were provided by computability theory, which emerged in the 1930's. This theory focuses on computational tasks, and considers automated procedures (i.e., computing devices and algorithms) that may solve such tasks. In focusing attention on computational tasks and algorithms, computability theory has set the stage for the study of the computational resources (like time) that are required by such algorithms. When this study focuses on the resources that are necessary for any algorithm that solves a particular task (or a task of a particular type), the study becomes part of the theory of Computational Complexity (also known as Complexity Theory).
Complexity Theory is a central field of the theoretical foundations of Computer Science. It is concerned with the study of the intrinsic complexity of computational tasks. That is, a typical Complexity-theoretic study looks at the computational resources required to solve a computational task (or a class of such tasks), rather than at a specific algorithm or an algorithmic schema. Actually, research in Complexity Theory tends to start with and focus on the computational resources themselves, and addresses the effect of limiting these resources on the class of tasks that can be solved. Thus, Computational Complexity is the study of what can be achieved within limited time (and/or other limited natural computational resources). The (half-century) history of Complexity Theory has witnessed two main research efforts (or directions). The first direction is aimed towards actually establishing concrete lower bounds on the complexity of computational problems, via an analysis of the evolution of the process of computation. Thus, in a sense, the heart of this direction is a ``low-level'' analysis of computation. Most research in circuit complexity and in proof complexity falls within this category. In contrast, a second research effort is aimed at exploring the connections among computational problems and notions, without being able to provide absolute statements regarding the individual problems or notions. This effort may be viewed as a ``high-level'' study of computation. The theory of NP-completeness as well as the studies of approximation, probabilistic proof systems, pseudorandomness and cryptography all fall within this category. The current book focuses on the latter effort (or direction). We list several reasons for our decision to focus on the ``high-level'' direction.
The first is the great conceptual significance of the known results; that is, many known results (as well as open problems) in this direction have an extremely appealing conceptual message, which can be appreciated also by non-experts. Furthermore, these conceptual aspects may be explained without entering excessive technical detail. Consequently, the ``high-level'' direction is more suitable for an exposition in a book of the current nature. Finally, there is a subjective reason: the ``high-level'' direction is within our own expertise, while this cannot be said about the ``low-level'' direction. The last paragraph brings us to a discussion of the nature of the current book, which is captured by the subtitle (i.e., ``a conceptual perspective''). Our main thesis is that complexity theory is extremely rich in conceptual content, and that this content should be explicitly communicated in expositions and courses on the subject. The desire to provide a corresponding textbook is indeed the motivation for writing the current book and its main governing principle. This book offers a conceptual perspective on complexity theory, and the presentation is designed to highlight this perspective. It is intended to serve as an introduction to the field, and can be used either as a textbook or for self-study. Indeed, the book's primary target audience consists of students that wish to learn complexity theory and educators that intend to teach a course on complexity theory. The book is also intended to promote interest in complexity theory and make it accessible to general readers with adequate background (which is mainly being comfortable with abstract discussions, definitions and proofs). We expect most readers to have a basic knowledge of algorithms, or at least be fairly comfortable with the notion of an algorithm. The book focuses on several sub-areas of complexity theory (see the following organization and chapter summaries).
In each case, the exposition starts from the intuitive questions addressed by the sub-area, as embodied in the concepts that it studies. The exposition discusses the fundamental importance of these questions, the choices made in the actual formulation of these questions and notions, the approaches that underlie the answers, and the ideas that are embedded in these answers. Our view is that these (``non-technical'') aspects are the core of the field, and the presentation attempts to reflect this view. We note that being guided by the conceptual contents of the material leads, in some cases, to technical simplifications. Indeed, for many of the results presented in this book, the presentation of the proof is different (and arguably easier to understand) than the standard presentations. Table of contents (tentative, May 2007) Preface, Organization and Chapter Summaries, Acknowledgments Introduction and Preliminaries 1.1.1 A brief overview of Complexity Theory; 1.1.2 Characteristics of Complexity Theory; 1.1.3 Contents of this book; 1.1.4 Approach and style of this book; 1.1.5 Standard notations and other conventions. 1.2 Computational Tasks and Models 1.2.1 Representation; 1.2.2 Computational Tasks; 1.2.3 Uniform Models (Algorithms) [Turing machines, Uncomputable functions, Universal algorithms, Time and space complexity, Oracle machines, Restricted models]; 1.2.4 Non-uniform Models (Circuits and Advice) [Boolean Circuits, Machines that take advice, Restricted models]; 1.2.5 Complexity Classes P, NP and NP-Completeness 2.1 The P versus NP Question 2.1.1 The search version - finding versus checking; 2.1.2 The decision version - proving versus verifying; 2.1.3 Equivalence of the two formulations; 2.1.4 Two technical comments regarding NP; 2.1.5 The traditional definition of NP; 2.1.6 In support of P different from NP; 2.1.7 Philosophical meditations.
2.2 Polynomial-time Reductions 2.2.1 The general notion of a reduction; 2.2.2 Reducing optimization problems to search problems; 2.2.3 Self-reducibility of search problems; 2.2.4 Digest and general perspective. 2.3 NP-Completeness 2.3.1 Definitions; 2.3.2 The existence of NP-complete problems; 2.3.3 Some natural NP-complete problems [Circuit and formula satisfiability - CSAT and SAT, Combinatorics and graph theory]; 2.3.4 NP sets that are neither in P nor NP-complete; 2.3.5 Reflections on complete problems. 2.4 Three relatively advanced topics 2.4.1 Promise Problems; 2.4.2 Optimal search algorithms for NP; 2.4.3 The class coNP and its intersection with NP. Variations on P and NP 3.1 Non-uniform polynomial-time (P/poly) 3.1.1 Boolean Circuits; 3.1.2 Machines that take advice. 3.2 The Polynomial-time Hierarchy (PH) 3.2.1 Alternation of quantifiers; 3.2.2 Non-deterministic oracle machines; 3.2.3 The P/poly-versus-NP Question and PH. More Resources, More Power? 4.1 Non-uniform complexity hierarchies 4.2 Time Hierarchies and Gaps 4.2.1 Time Hierarchies [The Time Hierarchy Theorem, Impossibility of speed-up for universal computation, Hierarchy theorem for non-deterministic time]; 4.2.2 Time Gaps and Speed-Up. 4.3 Space Hierarchies and Gaps Space Complexity 5.1 General preliminaries and issues 5.1.1 Important conventions; 5.1.2 On the minimal amount of useful computation space; 5.1.3 Time versus Space [Two composition lemmas, An obvious bound, Subtleties regarding space-bounded reductions, Search vs Decision, Complexity hierarchies and gaps, Simultaneous time-space complexity]; 5.1.4 Circuit Evaluation. 5.2 Logarithmic Space 5.2.1 The class L; 5.2.2 Log-Space Reductions; 5.2.3 Log-Space uniformity and stronger notions; 5.2.4 Undirected Connectivity. 5.3 Non-Deterministic Space Complexity 5.3.1 Two models; 5.3.2 NL and directed connectivity [Completeness and beyond, Relating NSPACE to DSPACE, Complementation or NL=coNL]; 5.3.3 A retrospective discussion.
5.4 PSPACE and Games Randomness and Counting 6.1 Probabilistic Polynomial-Time 6.1.1 Basic modeling issues; 6.1.2 Two-sided error (BPP); 6.1.3 One-sided error (RP and coRP); 6.1.4 Zero-sided error (ZPP); 6.1.5 Randomized Log-Space [Definitional issues, the accidental tourist sees it all]. 6.2 Counting 6.2.1 Exact Counting [including completeness]; 6.2.2 Approximate Counting [for count-DNF and count-P]; 6.2.3 Searching for unique solutions; 6.2.4 Uniform generation of solutions [Relation to approximate counting, Direct uniform generation]. The Bright Side of Hardness 7.1 One-Way Functions 7.1.1 The concept of one-way functions; 7.1.2 Amplification of Weak One-Way Functions; 7.1.3 Hard-Core Predicates; 7.1.4 Reflections on Hardness Amplification. 7.2 Hard Predicates in E 7.2.1 Amplification wrt polynomial-size circuits [From worst-case hardness to mild average-case hardness, Yao's XOR Lemma, List decoding and hardness amplification]; 7.2.2 Amplification wrt exponential-size circuits [Hard regions, Hardness amplification via hard regions]. Pseudorandom Generators 8.1 The General Paradigm 8.2 General-Purpose Pseudorandom Generators 8.2.1 The basic definition; 8.2.2 The archetypical application; 8.2.3 Computational Indistinguishability; 8.2.4 Amplifying the stretch function; 8.2.5 Constructions; 8.2.6 Non-uniformly strong pseudorandom generators; 8.2.7 Stronger notions and conceptual reflections. 8.3 Derandomization of time-complexity classes 8.3.1 Defining canonical derandomizers; 8.3.2 Constructing canonical derandomizers; 8.3.3 Technical variations and conceptual reflections. 8.4 Space Pseudorandom Generators 8.4.1 Definitional issues; 8.4.2 Two constructions. 8.5 Special Purpose Generators 8.5.1 Pairwise-Independence Generators; 8.5.2 Small-Bias Generators; 8.5.3 Random Walks on Expanders. 
Probabilistic Proof Systems 9.1 Interactive Proof Systems 9.1.1 Motivation and Perspective; 9.1.2 Definition; 9.1.3 The Power of Interactive Proofs; 9.1.4 Variants and finer structure [Arthur-Merlin games a.k.a public-coin proof systems, Interactive proof systems with two-sided error, A hierarchy of interactive proof systems, Something completely different]; 9.1.5 On computationally bounded provers [How powerful should the prover be? Computational-soundness]. 9.2 Zero-Knowledge Proof Systems 9.2.1 Definitional Issues; 9.2.2 The Power of Zero-Knowledge; 9.2.3 Proofs of Knowledge - a parenthetical subsection. 9.3 Probabilistically Checkable Proof Systems 9.3.1 Definition; 9.3.2 The Power of Probabilistically Checkable Proofs [Proving that PCP[poly,O(1)] contains NP, Overview of the first proof of the PCP Theorem, Overview of the second proof of the PCP Theorem]; 9.3.3 PCP and Approximation; 9.3.4 More on PCP itself - an overview. Relaxing the Requirements 10.1 Approximation 10.1.1 Search or Optimization; 10.1.2 Decision or Property Testing. 10.2 Average Case Complexity 10.2.1 The basic theory [Definitional issues, Complete problems, Probabilistic versions]; 10.2.2 Ramifications [Search versus Decision, Simple versus sampleable distributions]. Apdx A - Glossary of Complexity Classes A.1 Preliminaries; A.2 Algorithm-based classes [Time complexity classes, Space complexity]; A.3 Circuit-based classes. Apdx B - On the Quest for Lower Bounds B.1 Preliminaries; B.2 Boolean Circuit Complexity [Basic Results and Questions, Monotone Circuits, Bounded-Depth Circuits, Formula Size]; B.3 Arithmetic Circuits [Univariate Polynomials, Multivariate Polynomials]; B.4 Proof Complexity [Logical, Algebraic, and Geometric Proof Systems]. 
Apdx C - On the Foundations of Modern Cryptography C.1 Introduction and Preliminaries; C.2 Computational Difficulty [One-Way Functions and Trapdoor Permutations]; C.3 Pseudorandomness [including Pseudorandom Functions]; C.4 Zero-Knowledge [The Simulation Paradigm, The Actual Definition, A General Result and a Generic Application, Definitional Variations and Related Notions]; C.5 Encryption Schemes [Definitions, Constructions, Beyond Eavesdropping Security]; C.6 Signatures and Message Authentication [Definitions, Constructions]; C.7 General Cryptographic Protocols [The Definitional Approach and Some Models, Some Known Results, Construction Paradigms and Two Simple Protocols, Concluding Remarks]. Apdx D - Probabilistic Preliminaries and Advanced Topics in Randomization D.1 Probabilistic preliminaries [Notational Conventions, Three Inequalities]; D.2 Hashing [Definitions, Constructions, The Leftover Hash Lemma]; D.3 Sampling [including Hitters]; D.4 Randomness Extractors [The Main Definition, Extractors as averaging samplers, Extractors as randomness-efficient error-reductions, Other perspectives, Constructions]. Apdx E - Explicit Constructions E.1 Error Correcting Codes [A mildly explicit version of the existential proof, The Hadamard Code, The Reed-Solomon Code, The Reed-Muller Code, Binary codes of constant relative distance and constant rate, Two additional computational problems, A list decoding bound]; E.2 Expander Graphs [Two Mathematical Definitions, Two levels of explicitness, Two properties, The Margulis-Gabber-Galil Expander, The Iterated Zig-Zag Construction].
Apdx F - Some Omitted Proofs F.1 Proving that PH reduces to count-P; F.2 Proving that IP[O(f)] is contained in AM[O(f)] and in AM[f] [Emulating general interactive proofs by AM-games, Linear speed-up for AM] Apdx G - Some Computational Problems G.1 Graphs G.2 Formulae G.3 Algebra G.4 Number Theory List of Corrections and Additions The following list does not include errors that readers are likely to overcome by themselves. Partial lists of such errors appear in HERE and HERE. In retrospect, it would have been good to warn in Sec 1.1.5 (page 16) that in mathematical equations "poly" usually stands for a fixed (but unspecified) polynomial that may change without warning; for example, the text sometimes uses $poly(n^2) = poly(n)$ etc. (This is most relevant to highly technical parts such as Sec 7.2.) An additional comment regarding the abstract RAM model (pages 25-26): The current presentation is over-simplified and aimed at the question of computability, while ignoring the question of complexity. To properly account for the complexity of computation in the model, one has to either restrict the size/length of the registers or charge each operation according to the length of the integer actually stored and manipulated. In the context of polynomial-time computation, one typically postulates that all registers and memory cells can hold integers of logarithmic length (i.e., logarithmic in terms of the input length). In such a case, operations like multiplication place the least significant part of the result in one register and its most significant part in another. [Rainer Plaga] Typo on page 26, Footnote 12: "Turning machine" should be "Turing machine".
[Ondrej Lengal] The proof of Thm 1.4 (on page 27) leaves some gaps, including (1) proving that the set of reals and the set of reals in [0,1] have the same cardinality, (2) proving that the set of all functions (from strings to strings) and the set of Boolean functions have the same cardinality, and (3) dealing with the problem of non-uniqueness of binary representation of some (countably many) reals. It would have been better to assert as the main result that the set of Boolean functions and the power set of the integers have the same cardinality. [Michael Forbes] Also, in the theorem statement aleph should be replaced by ${\aleph_0}$. [Jerry Grossman] The current proof of Thm 1.5 is flawed, since it is not necessarily the case that the output of $M''$ on input $\ang{M''}$ equals its output on input $\ang{M'}$. This can be fixed by changing $M''$ such that, in Step 1, it discards $x$ and uses $\ang{M'}$ instead. That is, Steps 2-4 should refer to an execution of $M'$ on input $\ang{M'}$. [Hamoon Mousavi] On the last line of page 32, the phrase "has time complexity at least T" should be replaced by "cannot have time complexity lower than T". [Jerry Grossman] Typo in the paragraph regarding universal machines on page 34: The machine $U'$ halts in $poly(|X|+t)$ steps (and not in $poly(|X|)$ steps, as stated). [Igor Carboni Oliveira] Errata regarding Def 1.11 (on page 35): While this simplified definition suffices in the context of mere computability, it is not good enough for the context of resource-bounded computation (e.g., time complexity). The problem is that the way the definition is formulated allows the machine to invoke the oracle on the answer provided by a previous call at unit time (rather than in time that is related to writing the new query). An easy fix is to use two oracle tapes, one on which queries are written, and the other on which the oracle answers are obtained. (This is exactly the convention used in the context of space complexity.)
An alternative solution is to use the convention that the query (i.e., the actual contents of the tape when invoking the oracle) is the contents of the cells up to the location of the machine's head on the single tape, and that the following configuration (which corresponds to the oracle-spoke state) has the machine's head located on the leftmost cell. [Yosef Pogrow]

Regarding Section 2.2.1.2: Karp-reductions are often referred to as (polynomial-time) many-to-one reductions. In fact, such references can be found also in later chapters of this book. [Michal Koucky]

Typo on line 9 of page 66: The reference to the ``advanced comment'' is meant to refer to Section 2.2.3.2. Similarly, the last sentence in Section 2.2.3.1 should just refer to Section 2.2.3.2.

Minor error in the advanced comment on page 70: Not every search problem is Karp-reducible to $R_H$; this holds only for the class of search problems that refer to relations for which the membership problem is decidable. Also, $R_H$ is neither solvable nor is there an algorithm deciding membership in $R_H$.

Typo in Def. 2.30: the set $P$ mentioned at the end should be $S_R$.

Clarification for reducibility among promise problems (top of p. 90): Although the running time of algorithms solving promise problems was defined (in Def 2.29) with respect to inputs that satisfy the promise, whenever the time bound is "time constructible" (per Def 4.2), we can guarantee that this time bound holds for all inputs. This convention is important when proving results such as Exer 2.35. [Noam Cohen-Avidan, Romi Lifshitz, Eugene Pekel, Aviv Ruimi, and Susheel Shankar]

Typo on line 11 of page 96: a closing parenthesis is missing in the math expression.

Addition to the Chapter Notes (of Ch. 2, page 98): The definitions of the search problem classes PC and PF coincide with the standard definitions known as FNP and FP, although the notation of the latter classes, in which `F' stands for function, is misleading.
As stated throughout the chapter, I am not happy with the term NP, since it refers to Definition 2.7, which I consider a conceptually wrong way of introducing this fundamental class. However, I felt that changing this extremely popular term is not feasible. In contrast, the term FNP is not as popular as NP, and there is hope to replace it.

Comment regarding Exer. 2.13 (Part 2): In the guideline, self-reduction should be downwards self-reduction (the one guaranteed for $S$). The guideline falls short of addressing all issues. It is not clear how to emulate the execution of the reduction on a given input, since the reduction may be adaptive. One solution is to emulate all answers as `false' (and note that the first query that is a yes-instance is not affected). This requires defining $R$ such that $x_{i+1}$ is in the set of queries made by the oracle machine on input $x_i$ when all prior queries are answered with `false' (rather than being answered according to $S$). The guideline assumes that there exists a constant $c$ such that the reduction makes queries on any input that is longer than $c$. The solution is to add a dummy query that is the shortest yes-instance (resp., no-instance), whenever the reduction halts accepting (resp., rejecting) without making any query. In general, note that (constructible) complexity bounds on reductions can be enforced regardless of the correctness of the answers. [Dima Kogan]

Comment regarding Exer. 2.18: Another alternative proof of Theorem 2.16 just relies on Theorem 2.10 (which implies that R reduces to NP).

Comment regarding Exer. 2.23: This exercise is meant to prove that the said problem is NP-hard. Proving that it is in NP is beyond the scope of this text; it requires showing that if a solution exists then a relatively short solution exists as well. Btw, there is a typo in the guideline: The inequality $x+(1-y) \geq 1$ simplifies to $x-y \geq 0$.
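The `all answers are false' emulation described in the comment on Exer. 2.13 above can be illustrated on a toy downwards self-reduction (the set of all-`a' strings and the query function below are hypothetical stand-ins, not taken from the book):

```python
def downward_queries(x):
    """A toy downwards self-reduction for S = {'a'*n : n >= 1}: on an input
    of length > 1, query the one-shorter prefix; shorter inputs are decided
    directly, without queries."""
    return [x[:-1]] if len(x) > 1 else []


def chain(x):
    """The sequence x_1, x_2, ... where x_{i+1} is the first query made on
    input x_i when all prior queries are answered with 'false' (rather than
    according to S); this is the relation R discussed above."""
    out = [x]
    while True:
        queries = downward_queries(out[-1])
        if not queries:
            return out
        out.append(queries[0])
```

Note that the chain is well defined even for an adaptive reduction, since the answers fed to it are fixed in advance.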
A propos Exer 2.28 [suggested by Fabio Massacci]: A natural notion, which arises from viewing complete problems as encoding all problems in a class, is the notion of a ``universal (efficient) verification procedure''. Following Def. 2.5, an NP-proof system is called universal if verification in any other NP-proof system can be reduced to it in the sense captured by the functions $f$ and $h$ of Exer 2.28. That is, verifying that $x\in S$ via the NP-witness $y$ is ``reduced'' to verifying that $f(x)$ is in the universal set via the NP-witness $h(x,y)$ (for the universal verifier). We mention that this notion (of a universal NP-proof system) and the fact that the standard NP-proof system for 3SAT is universal underlie the proofs of several ("furthermore") results in Chapter 9 (e.g., see Thm 9.11 and the paragraph following Thm 9.16).

Errata regarding Exer 2.29: The case of 3-Colorability is far from being an adequate exercise; see recent work by R.R. Barbanchon, On Unique Graph 3-Colorability and Parsimonious Reductions in the Plane (TCS, Vol. 319, 2004, pages 455-482). [Michael Forbes]

Errata and comment regarding Exer 2.31: In item 1 (as throughout), the functions are length-increasing, not length-preserving. More importantly, it seems that a more intuitive exposition is possible. Consider a bipartite graph between two copies of $\{0,1\}^*$, called the $S$-world and the $T$-world, and draw an edge from the string $x$ in the $S$-world (resp., $T$-world) to $f(x)$ in the $T$-world (resp., $g(x)$ in the $S$-world). Considering all these directed edges (and using the length-increasing condition), we get an infinite collection of infinite paths that cover all vertices in both worlds. We seek a perfect matching that is consistent with the directed edges, since such a matching will yield the desired isomorphism (provided we can efficiently determine the mate of each vertex).
The matching is obtained by detecting the endpoint of the infinite path on which a vertex resides; once the endpoint is detected, a perfect matching of the vertices on this path is uniquely defined.

Comment regarding Exer 2.36: Note that Part 2 actually shows that the search problem of SAT reduces to xSAT; hence, PC reduces to the promise problem class NP intersection coNP.

An additional exercise for Chap 2: Show that for all natural NPC problems, finding an alternative/second solution is NPC.

Typo on page 111 (Def. 3.5): replace "these exists" by "there exists".

On page 112, line 3 (just after Thm. 3.6), the function $t$ should be restricted to be at least linear. [Michal Koucky]

Typo on page 116 (line -10): replace "In contract" by "In contrast".

Minor clarification re the proof of Thm 3.12 (page 120): In representing $S\in\Pi_2$ via a set $S'\in\NP$ we use an analogue of Prop 3.9 (rather than using Prop 3.9 proper).

A technical gap regarding Part 1 of Exer 3.11: As is evident from the guideline, this part refers to the flexible formulation of P/poly as outlined in Exercise 3.1. [Dylan McKay and Psi Vesely]

Errata regarding Exer 3.12: Mahaney's result, by which there are no sparse NP-complete sets (assuming that P is different from NP), is wrongly attributed to [81]. [Michael Forbes]

Addendum regarding Exer 3.12: A nice proof of Mahaney's aforementioned result can be found in Cai's lecture notes for Introduction to Complexity Theory (see Sec. 4 in Lecture 11).

Addendum regarding Thm 4.1 (p. 128): The claim may be strengthened to assert that for every $\ell:\N\to\N$ such that $\ell(n)\leq 2^n$, it holds that $P/\ell$ strictly contains $P/(\ell-1)$. See proof and discussion.

Typo in Corollary 4.4: "time-computable" should read "time-constructible". [Michael Forbes]

Typo in Theorem 4.7: The function $g$ should be increasing, or rather satisfy $g(t) \geq t$ for all $t$.
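The endpoint-detection matching described in the comment on Exer 2.31 above can be sketched as follows; the two length-increasing maps (appending 'a' and appending 'b') are toy stand-ins for the reductions $f$ and $g$:

```python
def f(x):      # length-increasing injection from the S-world to the T-world
    return x + 'a'


def g(y):      # length-increasing injection from the T-world to the S-world
    return y + 'b'


def f_inv(s):  # unique f-preimage of a T-world vertex, or None
    return s[:-1] if s.endswith('a') else None


def g_inv(s):  # unique g-preimage of an S-world vertex, or None
    return s[:-1] if s.endswith('b') else None


def find_origin(x, world):
    """Walk the unique backward path from vertex (world, x) to its start;
    the length-increasing condition bounds the walk by len(x) steps."""
    while True:
        prev = g_inv(x) if world == 'S' else f_inv(x)
        if prev is None:
            return world
        world, x = ('T' if world == 'S' else 'S'), prev


def mate(x, world):
    """Match x forward (along f or g) iff its path starts in x's own world,
    and backward otherwise; this yields a consistent perfect matching."""
    if find_origin(x, world) == world:
        return f(x) if world == 'S' else g(x)
    return g_inv(x) if world == 'S' else f_inv(x)
```

For instance, the empty string in the S-world has no g-preimage, so its path starts there and it is matched forward to f('') = 'a' in the T-world, whose own mate is again the empty string.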
[Augusto Modanese] In the guideline to Exer 4.1, the upper bound should be $2^s \cdot {s\choose2}^s$, where the second factor counts the number of pairs of gates feeding internal gates. Note that in internal gates we choose a gate type, whereas in the leaves we choose whether to negate the variable. [Group of Fall 2021]

Clarification regarding Item 5 of the summary on page 145: The machine scans its output tape in one direction. [Michal Koucky]

Addendum to the guideline of Exer. 4.13: the emulation should check that the emulated machine does not enter an infinite execution.

Typo on page 156 (3rd line from bottom): intra-cycle should be inter-cycle. [Orr Paradise]

Minor clarification regarding the 1st paragraph of page 157 (A couple of technicalities): One had better guarantee that not only G1 but rather every connected component in the graph G1 is non-bipartite. Likewise, note that the fixed graph G (used for ZigZag at the bottom of the page) is non-bipartite too. This is crucial for establishing the fact that the ZigZag product maintains the number of connected components (rather than only prevents them from decreasing). This fact (as well as the preservation of non-bipartiteness) can be deduced from the behavior of the eigenvalue bounds, but a more direct proof consists of two steps:

In each cloud that replaces a vertex, adjacent vertices (in the small expander used in this cloud) are connected in the resulting (ZigZag) graph. Furthermore, they are connected by a path of length two. Hint: Consider the edge i-j in the cloud of vertex v, and suppose that the j-th vertex of cloud v is connected to the k-th vertex of cloud u. Recalling that each vertex in the cloud has a self-loop, consider the two zig-zag edges that replace the 3-paths (v,i)-(v,j)-(u,k)-(u,k') and (u,k')-(u,k)-(v,j)-(v,j).

Each path (resp., cycle) in the original big graph corresponds to a (not necessarily simple) path (resp., cycle) in the ZigZag graph.
Furthermore, the parity of the length of the path (resp., cycle) is preserved. Using the corresponding inter-cloud edges, observe that their endpoints can be connected by even-length paths. [Tal Skverer]

Minor correction for page 162, Lemma 5.10: The space bound should have an additional term of $O(1)+\log t$ for the general overhead (of the emulation) and for maintaining the index of the recursion level (so that the emulator knows when it reaches the lowest level of the recursion, which is important since at this point it should refer the generated queries to its own oracle).

Typo on page 167 (last sentence of Sec. 5.3.2.2): In the parenthetical comment, logarithmic depth should be replaced by log-square depth; that is, the alternative outlined in the parenthetical sentence should read ``by noting that any log-space reduction can be computed by a uniform family of bounded fan-in circuits of log-square depth.'' [Michal Koucky]

Correction for page 192 (Fact 2 and Construction 6.4): Fact 2 is correct only when the quadratic residue is in the multiplicative group mod N. Hence, (Step 2 in) Construction 6.4 should be augmented so that this condition is checked (e.g., by computing the GCD of $r$ and $N$). [Luming Zhang]

Regarding Chapters 6-10: As noted by Guillermo Morales-Luna, the notions of probability ensembles (i.e., sequences of distributions indexed by strings) and noticeable probabilities (i.e., a probability that decreases slower than the reciprocal of some positive polynomial in the relevant parameter) are not defined at their first usage. Indeed, it might have been a good idea to include a glossary of technical terms such as these as another appendix. Such an appendix may also contain other definitions (e.g., of negligible probability) and notations such as $\{0,1\}^n$, $U_N$, $M^f(x)$.
Regarding Section 6.1.3.2: in retrospect, it seems better to present the ideas in terms of MA2 vs MA1, where MA denotes a generic randomized version of NP, and MA2 (resp., MA1) is the two-sided error (resp., one-sided error) version of it. Specifically, $S\in MA2$ (resp., $S\in MA1$) if there exists a probabilistic polynomial-time machine $V$ such that 1. If $x\in S$, then there exists $y$ of $\poly(|x|)$ length such that $\Prob[V(x,y)=1] \geq 2/3$ (resp., equals 1); 2. If $x\not\in S$, then for every $y$ it holds that $\Prob[V(x,y)=1] \leq 1/3$ (resp., $\leq 1/2$). (Note that, as for BPP and RP, error reduction holds here too.) Obviously, both BPP and MA1 are contained in MA2; and recall that it is not known whether BPP is in NP. The proof to be presented shows that MA2 is contained in MA1; that is, we can eliminate the error in the completeness condition.

On the 4th line of page 201, the fact that BPL is contained in P does require a proof, so it was inadequate to label it as evident (i.e., at the same level as RL contained in NL contained in P). A possible proof consists of an algorithm that proceeds in iterations such that in the $i$-th iteration the algorithm computes the probability of reaching each of the possible configurations in $i$ steps, where these probabilities are computed based on the probabilities of reaching each of the configurations in $i-1$ steps. Recall that the definition of BPL guarantees that the number of configurations as well as the number of steps is polynomial in the input length. [Gilad Ben-Uziahu]

On page 221: As noted by a group of students at Weizmann, the definition of strongly parsimonious is slightly confusing, since $x$ is used in a different sense a line before.
A better text may read: Specifically, we refer to search problems $R\in\PC$ such that $R'(x;y')\eqdef\{y'':(x,y'y'')\in R\}$ is {\em strongly parsimoniously reducible to $R$}, where a {\sf strongly parsimonious reduction of $R'$ to $R$} is a parsimonious reduction $g$ that is coupled with an efficiently computable function $h$ such that $h(x',\cdot)$ is a 1-1 mapping of $R(g(x'))$ to $R'(x')$ (i.e., $(g,h)$ is a Levin-reduction of $R'$ to $R$ such that $h$ is a 1-1 mapping of solutions).

Clarification for Exercise 6.2 (page 230): In all items, we assume that the reductions only make queries of length that is polynomially related to the length of their input. This guarantees that error probabilities that are negligible in terms of the length of the queries are also negligible in terms of the length of the input. This comment is crucial only for search problems, where error reduction is not necessarily possible. [Ben Berger and Yevgeny Levanzov]

Additional exercise for randomized log-space [suggested by Yam Eitan]: We say that a probabilistic machine always halts if there exists no infinite sequence of internal coin tosses for which the machine does not halt. Show that a log-space probabilistic machine is admissible (i.e., always halts in polynomial time) if and only if it always halts. Guideline: Show that both conditions are equivalent to requiring that the digraph of all possible configurations has no directed cycle.

Typo in Exercise 6.39, Part 1 (page 240): In the definition of $R'$, it should be $h(y)=0^i$ (not $h(x)=0^i$). [Noa Loewenthal]

Typo on page 245: In Item 1 of Prop 7.2, NP should be PC.

Pages 261-263: See the comment regarding Sec 1.1.5 (page 16). Indeed, here "poly" usually stands for a fixed (but unspecified) polynomial that may change without warning; for example, the text sometimes uses $poly(n^2) = poly(n)$ etc. [Dima Kogan]

Typo on page 264.
In the paragraph introducing the simplified notations, one should set $Z=Z_{n-\ell}$ (rather than $Z=Y_{n-\ell}$ as stated).

Another typo on page 264: The inequality in the line above Eq. (7.11) should be with uppercase Z instead of lowercase; that is, it should be $Pr[C(Y,Z)=F(Y,Z)] > \rho(n) = \rho_1(\ell)\cdot \rho_2(n-\ell)+\epsilon$ (rather than $Pr[C(Y,z)=F(Y,z)] > \rho(n) = \rho_1(\ell)\cdot \rho_2(n-\ell)+\epsilon$). [Oree Leibowitz]

Addition to Exer. 7.24: An alternative and less direct solution is showing that the existence of non-uniformly hard one-way functions implies that PC contains search problems that have no polynomial-size circuits (a.e.), which in turn implies that NP (and also E) is not contained in P/poly (a.e.). Note that the a.e. (almost everywhere) clause requires using slightly more care in the reductions.

In Construction 8.17 (page 310), we mean $i_1<\cdots$.

Typo on page 317 (footnote 32): replace "automata consider here" by "automata considered here".

Better formulation for the 6th line following Def 8.20 (on page 318): replace $s(k)-k$ by $k-s(k)$; it reads more naturally this way.

Clarification for Thm 8.21: While the theorem talks about any (space-constructible) function $s$, it is meaningful only for sub-linear functions, since otherwise $\ell(k)<1$. The focus is actually on $s(k)=\sqrt k$ (as stated in the theorem's title). [Laliv Tauber]

On page 320, line 9, one had better replace "$n \eqdef \Theta(s(k))$" by "$n \eqdef 100s(k)$" (or so).

Page 322, lines 21-22 (sketch of proof of Thm 8.22): The two inputs of the extractor are presented here in reverse order. See the comment below regarding Apdx D.4.

Typo on page 322, line 27 (just before the itemized list): replace "similarly for $G_2$ with respect to (s_1,\eps_1) and $\ell_2$" by "similarly for $G_2$ with respect to (s_2,\eps_2) and $\ell_2$".

Typo on page 327, line 4: the set of conditions has $j\in\{0,1,...,t-1\}$ (rather than $j\in\{1,...,t-1,t\}$).
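The iterative computation suggested in the comment on page 201 above (for placing BPL in P) can be sketched as follows; the four-configuration machine in the example is a hypothetical toy, and exact rational arithmetic stands in for the fixed-precision arithmetic one would analyze formally:

```python
from fractions import Fraction


def acceptance_probability(num_configs, step, start, accepting, t):
    """Probability of being in an accepting configuration after t steps,
    computed by forward iteration: the probabilities for i steps are
    derived from those for i-1 steps.  Both num_configs and t are
    polynomial in the input length by the definition of BPL."""
    prob = [Fraction(0)] * num_configs
    prob[start] = Fraction(1)
    for _ in range(t):
        nxt = [Fraction(0)] * num_configs
        for c, p in enumerate(prob):
            if p:
                nxt[step(c, 0)] += p / 2  # coin toss 0
                nxt[step(c, 1)] += p / 2  # coin toss 1
        prob = nxt
    return sum(prob[c] for c in accepting)


# Toy machine: configuration 3 accepts, configuration 2 rejects (both absorbing).
TRANS = {(0, 0): 1, (0, 1): 2, (1, 0): 3, (1, 1): 3,
         (2, 0): 2, (2, 1): 2, (3, 0): 3, (3, 1): 3}


def step(c, b):
    return TRANS[(c, b)]
```

Each iteration takes time polynomial in the number of configurations, so the whole procedure runs in polynomial time.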
Typo on page 332, 3rd line of Sec 8.5.2.3: replace "characteristic" by "cardinality".

Addendum to Exer. 8.18: Actually, size $O(2^k \cdot k)$ suffices; the circuit may just hold the $(k+1)$-bit long prefixes of the strings in the range of $G$.

Typo on line 22 of page 355: replace "input common input" by "common input".

Typo on page 361 (just after Eq (9.3)): replace "module" by "modulo".

Typo on page 364 (header of Def 9.6): replace "proof" by "proofs".

Better formulation for the 2nd sentence of Sec 9.1.5: replace "conflicting" by "opposite".

Minor error on page 387 (line -7): It does not follow that NP is contained in $PCP(q,O(1))$, where $q$ is a quadratic polynomial, but rather that the set of satisfiable Quadratic Equations is in $PCP(q,O(1))$.

Typo on page 391 (line 7): The time complexity is polynomial in the size of $C_\omega$ (and not in the size of $C_\omega^{-1}$).

Typo on page 400 (last paragraph): add q as a superscript to gapGSAT. Also replace "can be reduce" by "can be reduced".

Two typos on page 403 (last paragraph): In the 4th line following Thm 9.23, replace the 2nd "is" by "are". In the 3rd line from the bottom of the page, replace "PCPs" by "PCP".

In Exer 9.4, the current guideline presumes that each variable appears in a constant number of clauses. The general statement can be proved by arithmetizing the Boolean formula using $n'=n/\log n$ variables and looking at the sum taken over $[n]^{n'}$ rather than over $\{0,1\}^n$. This requires introducing polynomials that map $[n]$ to a sequence of $\log n$ values in $\{0,1\}$.

In Exer 9.24, the current guideline does not yield a linear-size instance, but rather an instance with a linear number of vertices/variables. To obtain a linear-size instance, either reduce from "restricted 3SAT" in which each variable appears in O(1) clauses (and one can show a linear-time reduction from 3SAT to this restricted form) or use consistency conditions only on a spanning tree of the set of clauses that share a variable.
[Pointed out by Nitzan Uziely] Typo on page 431 (5th line of Step 2): replace "is that it very sensitive" by "is that it is very sensitive".

Typo on page 434 (5th line): replace "practical significant" by "practical significance". [Michal Koucky]

Copyeditor error on page 452: The heading "Approximations" should be unnumbered.

Copyeditor error on page 471: The definition should be numbered B.1.

Appendix D.4 - Randomness Extractors (pages 537-544): The two inputs of the extractor are presented here in reverse order. That is, usually the first input is the $n$-bit long source and the second input is the $d$-bit long seed, whereas in the text the order is reversed. (This reversal was unconscious. I guess the non-standard order strikes me as more natural, but the difference is not significant enough as to justify deviation from the standard.)

Typo on page 556 (5th line in the paragraph on amplification): deceased should be decreased. [Orr Paradise]

Addendum to the last paragraph of Section E.2.1.2 (page 557): I forgot to credit Noga Alon for this suggestion, but currently I do not see a simple way to prove $c/2$-expansion. See a current memo, which only establishes $c/12$-expansion (using a more modular argument than the one envisioned above).

Addendum to Lemma E.8 (page 558): In the special case of $B=V-A$, we can use $\sqrt{|A|\cdot|B|} \leq N/2$, and so the number of edges going out of $A$ is at least $((1-\rho(A))\cdot d - \lambda/2)\cdot|A|$. It follows that for $|A| \leq N/2$, it holds that $|\Gamma(A)-A| \geq (1-(\lambda/d))\cdot|A|/2$.
\begin{definition}[Definition:Continuous Mapping (Topology)/Set] Let $T_1 = \left({S_1, \tau_1}\right)$ and $T_2 = \left({S_2, \tau_2}\right)$ be topological spaces. Let $f: S_1 \to S_2$ be a mapping from $S_1$ to $S_2$. Let $S$ be a subset of $S_1$. The mapping $f$ is '''continuous on $S$''' {{iff}} $f$ is continuous at every point $x \in S$. \end{definition}
Fractional Order Direct Torque Control of Permanent Magnet Synchronous Machine

Hachelfi Walid* | Rahem Djamel | Meddour Sami | Djouambi Abd Elbaki

Electrical Engineering and Automatic Laboratory, University of Larbi Ben M'Hidi, Oum El-Bouaghi 04000, Algeria

Corresponding Author Email: [email protected]

https://doi.org/10.18280/ejee.210505

This paper designs a fractional order PID direct torque control strategy for the permanent magnet synchronous machine (PMSM) based on fractional calculus. The fractional order controller that controls the speed of the machine was synthesized with reference to Bode's ideal transfer function. In the controller, the fractional order integrator was approximated by Charef's method. The fractional order PID control was compared with classical PID control, showing that the former has better accuracy and robustness. Finally, MATLAB/Simulink simulations proved the advantages of our control strategy under oscillating load torque or magnetic field.

Keywords: direct torque control (DTC), permanent magnet synchronous machine (PMSM), fractional order PID controller, classical PID controller, Bode's ideal transfer function, comparison

1. Introduction

In recent decades, many scientific applications have used fractional calculus and fractional order control for industrial control systems. Research efforts in this domain have increased rapidly due to technological advances and high population density [1-4]. Therefore, many scientific applications, such as mechatronics [5], biology [6], photovoltaics [6], automatic voltage regulators [4], robotics and renewable energy systems [7], have been the subject of much research in developed and developing countries. Among these is the fractional control of electrical machines [8-10]. The main aim of machine control and dynamic process control is to enhance the performance and robustness of the controlled process.
Therefore, it is necessary to concentrate on the investigation of other control strategies that include fractional calculus and fractional order controllers. Direct torque control (DTC) is one of the most widely used control strategies in electrical machine drives; it has been applied to different machine types and investigated in many works [11-14]. Unfortunately, this approach has drawbacks, such as undesired torque and speed ripples, due to some internal computational defects in the control action, including the switching frequency and the voltage vector selection [15-18]. However, fractional model control has appeared as an attractive and powerful control method in electrical machine drives [8-10], since it can be combined with several approaches. Permanent magnet synchronous machine (PMSM) drives play a vitally important role in high-performance motion control applications [18-21]. Direct torque control or fractional order control is used in the design of PMSM drives to achieve the best performance [22, 23]. Unfortunately, variations of several electromechanical parameters are an issue in the industrial machine control domain [24-26]. For this problem, several studies have been reported [22, 23, 27] to improve the performance and robustness of this type of machine drive. This paper presents a direct torque fractional order control design for the PMSM; in this approach, a fractional order PIλDγ controller [5, 28, 29] is synthesized using Bode's ideal transfer function as a reference model [30-32]. The proposed PMSM speed control technique is compared to the conventional PID controller. Simulation results of the proposed method on a PMSM are presented to validate the effectiveness of the fractional order direct torque control method. This paper is organized as follows: Section 2 presents the modelling of the PMSM. Section 3 presents the DTC strategy and the two-level three-phase voltage source inverter (VSI).
Section 4 demonstrates Bode's ideal transfer function and the controller design. In Section 5, some application examples of the proposed control strategy are shown. Finally, concluding remarks are given in Section 6.

2. Permanent Magnet Synchronous Machine Model (PMSM)

The stator voltage equations of the PMSM in the d-q reference frame are given by [18, 26]:

$\left\{\begin{array}{l} V_{d}=R_{s} i_{ds}+\frac{d\Phi_{d}}{dt}-\Phi_{q}\Omega_{r} \\ V_{q}=R_{s} i_{qs}+\frac{d\Phi_{q}}{dt}+\Phi_{d}\Omega_{r} \end{array}\right.$ (1)

The stator flux equations can be written in the d-q reference frame as

$\left\{\begin{array}{l} \Phi_{d}=L_{d} i_{d}+\Phi_{m} \\ \Phi_{q}=L_{q} i_{q} \end{array}\right.$ (2)

The electromagnetic torque developed by the PMSM can be expressed as

$T_{e}=\frac{3}{2} p\left(\left(\Phi_{d} i_{qs}-\Phi_{q} i_{ds}\right)+\Phi_{m} i_{qs}\right)$ (3)

The mechanical equation representing the dynamic behavior of the machine can be expressed as

$T_{e}=J \frac{d\Omega_{r}}{dt}+f\Omega_{r}+T_{r}$ (4)

3. Direct Torque Control Strategy and Voltage Source Inverter

In this section, the conventional DTC scheme applied to the PMSM is discussed. DTC is based on the theories of field-oriented control and direct self-control. Field-oriented control uses space vector theory to optimally control the magnetic field orientation, while direct self-control establishes a unique frequency of inverter operation given a specific DC link voltage and a specific stator flux level. The principle of DTC is to select stator voltage vectors according to the differences between the reference and actual values of the torque and the stator flux linkage [22, 23]. The basic building blocks of the DTC method for the PMSM are given in Figure 1, which represents the DTC scheme applied to the PMSM; it provides precise speed control using a PID controller.
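As a minimal numerical illustration of the machine model above (Eqs. (3)-(4), taken as written in the text), the torque and a forward-Euler step of the mechanical equation can be sketched as follows; all parameter values used here are illustrative, not taken from the paper:

```python
def electromagnetic_torque(p, phi_d, phi_q, i_ds, i_qs, phi_m):
    """Electromagnetic torque, Eq. (3) as written in the text:
    Te = (3/2)*p*((phi_d*i_qs - phi_q*i_ds) + phi_m*i_qs)."""
    return 1.5 * p * ((phi_d * i_qs - phi_q * i_ds) + phi_m * i_qs)


def rotor_speed_step(omega, T_e, T_r, J, f, dt):
    """One forward-Euler step of the mechanical equation (4),
    Te = J*dOmega/dt + f*Omega + Tr, solved for dOmega/dt."""
    return omega + dt * (T_e - f * omega - T_r) / J
```

At the steady state Te = f*Omega + Tr, the speed remains constant, as expected from Eq. (4).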
On the other hand, the instantaneous values of the stator flux and the produced torque are estimated and are controlled directly and independently by hysteresis controllers, by properly selecting the inverter switching configuration; hence the control is more responsive and accurate with respect to the set points [22, 23]. The two-level voltage source inverter is driven according to the DTC switching look-up table.

Figure 1. Basic DTC scheme for the PMSM

According to the principle of operation of the DTC, there are six non-zero voltage vectors and two zero voltage vectors. The selection among the six voltage vectors is made so as to maintain the torque and the stator flux within the limits of two hysteresis bands. The switching selection table is shown in Table 1 [22, 23].

Table 1. Switching selection of the classical DTC

Section (Si, i=1 to 6) ΔTe

The stator flux Φs is derived from Eq. (1) and can be expressed as

$\overrightarrow{\Phi_{s}}=\int\left(\overrightarrow{v_{s}}-R_{s}\overrightarrow{i_{s}}\right) dt$ (5)

Since the voltage drop across Rs can be neglected at medium and high speed, the stator flux variation can be written as

$\frac{d\overrightarrow{\Phi_{s}}}{dt}=\overrightarrow{V_{s}}$ (6)

Fixing the voltage vector $\overrightarrow{V_{s}}=\overrightarrow{0}$ in Eq.
(6), we obtain

$\frac{d\overrightarrow{\Phi_{s}}}{dt}=\overrightarrow{0}$ (7)

The components of the stator flux are approximated as

$\left\{\begin{array}{l} \Phi_{\alpha s}=\int_{0}^{t}\left(V_{\alpha s}-R_{s} i_{\alpha s}\right) dt \\ \Phi_{\beta s}=\int_{0}^{t}\left(V_{\beta s}-R_{s} i_{\beta s}\right) dt \end{array}\right.$ (8)

The stator flux linkage magnitude can be expressed as

$\Phi_{s}=\sqrt{\Phi_{\alpha s}^{2}+\Phi_{\beta s}^{2}}$ (9)

The angular position of the stator flux vector, which is used to choose among the appropriate voltage vectors depending on the flux position, is

$\theta_{s}=\tan^{-1}\left(\frac{\Phi_{\beta s}}{\Phi_{\alpha s}}\right)$ (10)

The electromagnetic torque is calculated from the measured stator currents and the estimated flux as

$T_{e}=\frac{3}{2} p\left(\left(\Phi_{\alpha} i_{\beta s}-\Phi_{\beta} i_{\alpha s}\right)+\Phi_{m} i_{\beta s}\right)$ (11)

The simplified electromagnetic torque equation for an isotropic PMSM (equal direct and quadrature inductances, Ld = Lq) is

$T_{e}=\frac{3}{2} p \Phi_{m} i_{\beta s}$ (12)

3.1 Voltage source inverter

The three-phase voltage vectors Van, Vbn, Vcn of the machine are independent, so there are eight different inverter states; the vector transformation is described as [23, 27]

$V_{s}=\sqrt{\frac{2}{3}}\, U_{c}\left(S_{a}+S_{b} e^{j\frac{2\pi}{3}}+S_{c} e^{j\frac{4\pi}{3}}\right)$ (13)

The outputs Va, Vb and Vc of the three-phase voltage source inverter (VSI) feed the PMSM. The switching combinations of each inverter leg are commonly represented by a logic state Si (i = a, b, c), in order to choose an appropriate voltage vector. A simplified representation of the inverter with the PMSM is shown in Figure 2.

Figure 2. Simplified representation of the three-phase voltage inverter

4.
Bode's Ideal Transfer Function and Fractional Controller Design

The reference model is based on the ideal open-loop transfer function used in Bode's feedback amplifier design, which gives the best performance in terms of robustness to gain variations. Bode's ideal transfer function is [30-32]

$G(s)=\frac{K}{s^{m}}, \quad 1<m<2, \; m\in\mathbb{R}$ (14)

where m is the fractional integration order. Bode's ideal transfer function (14) exhibits important properties: a constant magnitude slope of -20m dB/dec and a constant phase of -mπ/2 rad; moreover, it leads to the iso-damping property. The feedback control system keeps a constant phase margin despite variations of the gain K. This robustness feature has motivated research on the unity feedback control system whose forward-path transfer function is Bode's ideal transfer function, as shown in Figure 3.

Figure 3. Bode's ideal transfer function loop

With the fractional open-loop transfer function of Eq. (14), the closed-loop transfer function of the control system of Figure 3 is

$H(s)=\frac{Y(s)}{E(s)}=\frac{G(s)}{1+G(s)}=\frac{1}{1+\left(\frac{s}{w_{u}}\right)^{m}}, \quad 1<m<2, \; m\in\mathbb{R}$ (15)

where the gain crossover frequency $w_{u}=K^{1/m}$ and the fractional number 1<m<2 are fixed according to the desired closed-loop performance. The asymptotic approximation of Eq. (15) indicates that the magnitude asymptotically approaches a straight line of slope -20m dB/dec and the phase approaches the horizontal line -mπ/2 rad. Therefore, the constant phase margin $\theta_{m}$ depends only on the fractional value m:

$\theta_{m}=\left(1-\frac{m}{2}\right)\pi \;\; \text{(rad)}$ (16)

In this paper, the model control scheme is taken to be Bode's ideal control loop, chosen as in Figure 4.

Figure 4.
Feedback control system

The open-loop transfer function of the control system is given as

$G(s)=C(s) \cdot G_{p}(s)$ (17)

where C(s) and Gp(s) are the controller and process transfer functions, respectively. The fractional order controller used here has a structure similar to that of the classical PID controller proposed in [2, 24], given as

$C(s)=K_{p}+\frac{K_{i}}{s^{\lambda}}+K_{d} s^{\gamma}, \quad(\lambda, \gamma) \in R$ (18)

The synthesis of the fractional order controller is based on shaping the open-loop transfer function [33] so that the open-loop control system behaves like Bode's ideal loop, as illustrated in Figure 5; thus, we can write [10, 28]

$G(s)=C(s) \cdot G_{p}(s)=\left(K_{p}+\frac{K_{i}}{s^{\lambda}}+K_{d} s^{\gamma}\right) G_{p}(s)=\frac{K}{s^{m}}$ (19)

where C(s) represents the controller transfer function and Gp(s) represents the plant.

4.1 Fractional orders λ, γ

With nb and nh the asymptotic orders of the plant Gp(s) at low and high frequency, respectively, the fractional orders λ and γ of the fractional order controller PIλDγ are given by [33]:

$\left\{\begin{array}{l}{\lambda=m-n_{b}} \\ {\gamma=n_{h}-m}\end{array}\right.$ (20)

4.2 Design of the parameters Kp, Ki, Kd

The gains Kp, Ki and Kd are calculated using the tuning method of [33], with $K=w_{u}^{m}$ the gain of Bode's ideal function, $T_{i}^{\prime}=K_{i} / K_{p}$ and $T_{d}^{\prime}=K_{d} / K_{p}$:

$K_{p}=\frac{\left|G_{p}\left(j w_{u}\right)\right|^{-1}}{\left|1+T_{i}^{\prime}\left(j w_{u}\right)^{-\lambda}+T_{d}^{\prime}\left(j w_{u}\right)^{\gamma}\right|}$ (21)

$K_{i}=\frac{K}{\left|G_{p}\left(j w_{\min }\right)\right| w_{\min }^{n_{b}}}$ (22)

$K_{d}=\frac{K}{\left|G_{p}\left(j w_{\max }\right)\right| w_{\max }^{n_{h}}}$ (23)

4.3 Fractional order integrator approximation

The irrational transfer function of the fractional order integrator can be approximated in the frequency band [ωmin, ωmax] by the following rational function [29, 33].
$C(s)=\frac{1}{s^{\lambda}} \approx \frac{K_{I}}{\left(1+\frac{s}{w_{u}}\right)^{\lambda}}=K_{I} \frac{\prod_{i=0}^{N-1}\left(1+\frac{s}{z_{i}}\right)}{\prod_{i=0}^{N}\left(1+\frac{s}{p_{i}}\right)}$ (24)

where pi and zi are the poles and zeros of the approximation and 0<λ<1 is a positive number. The number of singularities and the gain are given by

$N=\operatorname{Integer}\left[\frac{\log \left(w_{\max } / p_{0}\right)}{\log (a b)}\right]+1, \quad K_{I}=\frac{1}{w_{u}^{\lambda}}$

where ξ (dB) is the tolerated error between the fractional integrator and its approximation. The poles pi and zeros zi are given by the following formulas:

$p_{i}=(a b)^{i} p_{0}, \quad i=0,1, \ldots, N$

$z_{i}=(a b)^{i} a p_{0}, \quad i=0,1, \ldots, N-1$

The parameters a, b, p0 and z0 are:

$a=10^{\frac{\xi}{10(1-\lambda)}}, \quad b=10^{\frac{\xi}{10 \lambda}}, \quad p_{0}=w_{u} \sqrt{b}, \quad z_{0}=a p_{0}$

The irrational transfer function of the fractional integrator can therefore be approximated by the following rational transfer function:

$C(s)=\frac{1}{s^{\lambda}} \approx \frac{K_{I}}{\left(1+\frac{s}{w_{u}}\right)^{\lambda}}=K_{I} \frac{\prod_{i=0}^{N-1}\left(1+\frac{s}{(a b)^{i} a p_{0}}\right)}{\prod_{i=0}^{N}\left(1+\frac{s}{(a b)^{i} p_{0}}\right)}$ (25)

5. Application and Simulation Results

In this section, the parameter values of the PMSM, given in Table 2, are used. A functional scheme of the fractional order speed control is presented, and the optimal proportional, integral and derivative gains of the classical PID controller and of the fractional order PID controller are calculated to achieve the desired performance (θm, ωu) and meet the design requirements. The proposed fractional order PID controller is compared against a PI controller designed for the same performance, so that a fair comparison is established between the two controllers.

Table 2.
PMSM motor parameters

Nominal power: 1.1 (kW)
Pole pairs: —
Stator resistance: 1.4 (Ω)
Longitudinal inductance: 0.0066 (H)
Moment of rotor inertia: 0.00176 (kg·m²)
Quadratic inductance: —
Friction coefficient: —
Flux linkage of rotor permanent magnet Φm: 0.1546 (Wb)

The considered model consists of the closed-loop control of the rotation speed, designed to follow Bode's ideal loop. The corresponding scheme, with the nominal parameters listed in Table 2, is given in Figure 5.

Figure 5. Functional scheme of fractional order control speed

The process Gp(s) of the control system is given as

$G_{p}(s)=K_{\Phi m} \cdot G_{p 1}(s)$ (26)

where KΦm relates the electromagnetic torque of the PMSM in (3) to the stator current:

$K_{\Phi m}=\frac{3}{2} p \Phi_{m}$ (27)

and Gp1(s) is the transfer function representing the mechanical dynamics:

$G_{p 1}(s)=\frac{1}{0.00176\, s+0.1}$ (28)

So the process Gp(s) can be written as

$G_{p}(s)=\frac{K_{\Phi m}}{0.00176\, s+0.1}$ (29)

In the frequency band [10^-4, 10^4] (rad/s), the dynamic performance requirements of the system can be satisfied for a phase margin θm=45° and a chosen gain crossover frequency $w_{u}=70$ (rad/s). As a result, Bode's ideal transfer function is given as

$G_{m}(s)=\frac{w_{u}^{1.5}}{s^{1.5}}=\frac{70^{1.5}}{s^{1.5}}, \quad s=j w$ (30)

Using (20), we get

$\left\{\begin{array}{c}{\lambda=1.5} \\ {\gamma=-0.5}\end{array}\right.$ (31)

The values of the parameters Kp, Ki and Kd obtained from (21), (22) and (23) are:

Kp = 0.0024; Ki = 84.1831; Kd = 1.4816

Thus, the fractional order controller transfer function is

$C_{F P I I}(s)=0.0024+\frac{84.1831}{s^{1.5}}+1.4816\, s^{-0.5}$ (32)

So the obtained fractional order controller consists of a proportional gain Kp, a fractional order integrator of order 1.5 (I1.5) and a second fractional order integrator of order 0.5 (I0.5).
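The fractional integrators above can be realized over the working band with the singularity-function approximation of Eqs. (24)-(25). The sketch below is illustrative rather than the authors' implementation: the error ξ = 1 dB and the order λ = 0.5 are example choices, and `charef_integrator`/`evaluate` are assumed helper names.

```python
import math

def charef_integrator(lam, xi, wu, wmax):
    """Gain, poles and zeros of the rational approximation of 1/s^lam (Eqs. 24-25).

    lam: fractional order (0 < lam < 1); xi: tolerated error in dB;
    wu: corner (crossover) frequency; wmax: upper edge of the band.
    """
    a = 10.0 ** (xi / (10.0 * (1.0 - lam)))
    b = 10.0 ** (xi / (10.0 * lam))
    p0 = wu * math.sqrt(b)
    N = int(math.log10(wmax / p0) / math.log10(a * b)) + 1
    KI = 1.0 / wu ** lam
    poles = [p0 * (a * b) ** i for i in range(N + 1)]
    zeros = [a * p0 * (a * b) ** i for i in range(N)]
    return KI, poles, zeros

def evaluate(KI, poles, zeros, w):
    """Frequency response of the rational approximation at w (rad/s)."""
    s = 1j * w
    num = math.prod(1 + s / z for z in zeros)
    den = math.prod(1 + s / p for p in poles)
    return KI * num / den

KI, poles, zeros = charef_integrator(0.5, 1.0, 70.0, 1e4)
w = 1000.0
approx = abs(evaluate(KI, poles, zeros, w))
exact = 1.0 / w ** 0.5   # |1/(jw)^0.5|
print(len(poles), len(zeros), round(approx, 5), round(exact, 5))
```

Within the band the pole/zero ladder interlaces (p0 < z0 < p1 < ...) and the magnitude of the rational function stays within roughly ξ dB of the ideal fractional integrator.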
Hence, from (29) and (32), the open-loop transfer function GFPII(s) is given as

$G_{F P I I}(s)=C_{F P I I}(s) \cdot G_{p}(s)=\frac{0.0017 s^{1.5}+1.0307 s+58.668}{0.00176 s^{2.5}+0.1 s^{1.5}}$ (33)

and the closed-loop transfer function HFPII(s) of (33) is given as

$H_{F P I I}(s)=\frac{G_{F P I I}(s)}{1+G_{F P I I}(s)}=\frac{0.0017 s^{1.5}+1.0307 s+58.668}{0.00176 s^{2.5}+0.1017 s^{1.5}+1.0307 s+58.668}$ (34)

5.1 Performance of the fractional PIλDγ controller vs. the conventional PI

The performance of the proposed controller is compared to that of a classical PI controller designed for the same desired performance, $\theta_{m}=45^{\circ}$ and ωu=70 rad/s. The classical PI parameters are obtained with the PID tuning algorithm of MATLAB, a proprietary algorithm developed by MathWorks to meet design objectives such as stability, performance and robustness. The obtained optimal parameters are

Kp = 0.02358; Ki = 15.8802

and the classical PI controller is given as

$C_{P I}(s)=0.02358+\frac{15.8802}{s}$ (35)

Consequently, the open-loop transfer function GPI(s) is given as

$G_{P I}(s)=C_{P I}(s) \cdot G_{p}(s)=\frac{0.0164 s+11.0479}{0.00176 s^{2}+0.1 s}$ (36)

and the closed-loop transfer function HPI(s) of (36) is given as

$H_{P I}(s)=\frac{G_{P I}(s)}{1+G_{P I}(s)}=\frac{0.0164 s+11.0479}{0.00176 s^{2}+0.1164 s+11.0479}$ (37)

5.2 Simulation results

Two simulation examples are presented. In the first test, the fractional order controller, the conventional controller and Bode's ideal transfer function are compared for various values of K (K=1; 5; 10). In the second test, a numerical simulation example is presented by applying the direct torque fractional order control.
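The tuned values can be cross-checked numerically. The sketch below is illustrative, not the paper's simulation: the pole-pair value p = 3 is an assumption (it is the value that reproduces the paper's gains through KΦm = (3/2)pΦm), Ki and Kd are recomputed under a low/high-frequency matching reading of the tuning rules, and the open-loop responses of Eqs. (33) and (36) are then evaluated at ωu = 70 rad/s, where both designs should show |G| ≈ 1 and a phase margin near 45°.

```python
import cmath, math

# Assumed torque constant K_phi_m = (3/2)*p*phi_m with p = 3 pole pairs
# (the pole-pair entry of Table 2 is not legible; p = 3 reproduces the paper's gains).
K_phi_m = 1.5 * 3 * 0.1546

def Gp(s):
    """Plant of Eq. (29)."""
    return K_phi_m / (0.00176 * s + 0.1)

m, wu, wmin, wmax = 1.5, 70.0, 1e-4, 1e4
K = wu ** m          # gain of Bode's ideal function, K = wu^m
nb, nh = 0, 1        # asymptotic orders of Gp at low and high frequency

# low/high-frequency matching reading of the tuning rules:
Ki = K / (abs(Gp(1j * wmin)) * wmin ** nb)   # close to the paper's 84.1831
Kd = K / (abs(Gp(1j * wmax)) * wmax ** nh)   # close to the paper's 1.4816

def G_fpii(s):
    """Open loop of Eq. (33); fractional powers use the principal branch."""
    return (0.0017 * s**1.5 + 1.0307 * s + 58.668) / (0.00176 * s**2.5 + 0.1 * s**1.5)

def G_pi(s):
    """Open loop of Eq. (36)."""
    return (0.0164 * s + 11.0479) / (0.00176 * s**2 + 0.1 * s)

for name, G in (("FPII", G_fpii), ("PI", G_pi)):
    g = G(1j * wu)
    pm = 180.0 + math.degrees(cmath.phase(g))   # phase margin in degrees
    print(name, round(abs(g), 3), round(pm, 1)) # both close to |G| = 1 and 45 deg
```

Both loops cross unity gain at about 70 rad/s with a phase margin close to the 45° target, which is consistent with both controllers having been tuned to the same specification.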
The obtained results are compared to those of the conventional method under variation of the rotor magnet flux and of the load torque. The reference speed of the machine is fixed at 100 rad/s.

The magnitude plots of the reference model, of the plant transfer function, and of the open-loop transfer functions GFPII(s) and GPI(s) are shown in Figure 6. We observe that the fractional order control system overlaps the reference model, which is not the case for the classical PI. Figure 7 shows the step responses of Bode's ideal loop and of the closed-loop control systems HFPII(s) and HPI(s); the step response of the fractional order control system closely matches that of Bode's ideal loop.

Figure 6. Magnitude plots of the reference model Gm(s) and of the open-loop transfer functions GFPII(s) and GPI(s), for m=1.5

Figure 7. Step responses of the reference model Gm(s) and of the closed-loop control systems HFPII(s) and HPI(s), for m=1.5

The step responses of the closed-loop control systems HFPII(s) and HPI(s) for various values of the static gain K are shown in Figures 8 and 9. It is clear that the first overshoot of the fractional order control system remains constant and that a fast rise time is achieved with good robustness, which characterizes the considered fractional system; this is not the case for the classical control system.

Figure 8. Step responses of the fractional control system for various values of K

Figure 9. Step responses of the classical control system for various values of K

Figure 10. Speed response

The reference rotor speed of the machine is set to 100 rad/s and a load torque of 5 N.m is applied at t=0.3 s, as shown in Figure 10 and Figure 11.
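The constant first overshoot seen for the fractional system reflects the iso-damping property of Bode's ideal loop: for G(s) = K/s^m the phase margin stays at (1 - m/2)π whatever the gain K. A quick numerical check (an illustrative sketch, not the paper's code), using the same gains K = 1, 5, 10 as in the test:

```python
import cmath, math

m = 1.5  # fractional order used in the paper

for K in (1.0, 5.0, 10.0):
    wc = K ** (1.0 / m)                        # gain crossover: |G(j*wc)| = 1
    G = K / (1j * wc) ** m                     # principal-branch fractional power
    pm = 180.0 + math.degrees(cmath.phase(G))  # phase margin in degrees
    print(K, round(abs(G), 6), round(pm, 2))   # |G| = 1.0 and pm = 45.0 for every K
```

The crossover frequency moves with K, but the phase margin does not, which is why the overshoot in the step responses is insensitive to the gain.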
It can be observed that performance indexes such as the rise time, maximum overshoot, and steady-state error are at a good level, with a clear improvement in the fractional order control response regarding the overshoot (almost null) and the control signal shape (fewer oscillations). After applying a load torque Tr=5 (N.m) at t=0.3 (s), the system response is maintained in a similar manner. The proposed fractional control strategy gives the less oscillatory system under magnet flux variation and load application.

Figure 11. Evolution of electromagnetic torque

The effects of changes in the rotor magnet flux Φm (scaled as k·Φm) and of the load torque applied at t=0.3 s are shown in Figures 10-13, which present the evolution of the rotor speed and of the electromagnetic torque for both control methods. From Figure 12, the overshoot does not change, and the response time becomes much faster as the magnet flux increases; when the load torque is applied at t=0.3 s, the speed response is improved. From Figure 13, the overshoot is variable, with a longer response time than in Figure 12; also, for the load torque applied at t=0.3 s, the speed response obtained is faster with a low overshoot.

Figure 12. Speed responses for different values of Φm of the fractional order DTC

Figure 13. Speed responses for different values of Φm of the classical DTC

The electromagnetic torque responses of the fractional order DTC and of the classical DTC are shown in Figures 14 and 15, respectively. The fractional order DTC offers fast transient response, low oscillation and very good dynamic response, whereas the classical DTC presents torque ripple and a larger torque oscillation. The torque oscillation of the fractional order DTC is remarkably reduced compared with the classical DTC.
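Both DTC schemes compared above rely on the same stator-flux and torque estimation core, Eqs. (8)-(11). A minimal discrete-time sketch follows (illustrative only: the Euler integration step, the constant test signals and the pole-pair value p = 3 are assumptions, and the standard alpha-beta cross-product torque expression is used):

```python
import math

def estimate_flux_torque(v_ab, i_ab, Rs, p, dt):
    """Stator flux magnitude/angle and torque from alpha-beta samples (Eqs. 8-11)."""
    phi_a = phi_b = 0.0
    for (va, vb), (ia, ib) in zip(v_ab, i_ab):
        phi_a += (va - Rs * ia) * dt          # Eq. (8), Euler integration
        phi_b += (vb - Rs * ib) * dt
    phi_s = math.hypot(phi_a, phi_b)          # Eq. (9), flux magnitude
    theta_s = math.atan2(phi_b, phi_a)        # Eq. (10), flux sector angle
    te = 1.5 * p * (phi_a * ib - phi_b * ia)  # alpha-beta cross-product torque
    return phi_s, theta_s, te

# constant test excitation on the alpha axis only
Rs, p, dt, n = 1.4, 3, 1e-4, 1000
v = [(100.0, 0.0)] * n
i = [(5.0, 0.0)] * n
phi_s, theta_s, te = estimate_flux_torque(v, i, Rs, p, dt)
print(round(phi_s, 6), theta_s, te)  # flux integrates to (100 - 1.4*5)*dt*n = 9.3
```

In a real drive the same estimator runs every sampling period, and the resulting flux angle selects the voltage vector from the switching table.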
It is concluded from this study that a fractional order controller can be used to enhance the DTC, maintaining the speed overshoot and reducing the oscillation of the electromagnetic torque with a small ripple.

Figure 14. Evolution of machine electromagnetic torque of the fractional order DTC

Figure 15. Evolution of machine electromagnetic torque for different values of Φm of the classical DTC

This paper presents the design of the direct torque fractional order control of a PMSM, which includes the use of a fractional order controller and the robust Bode's ideal transfer function. The design was simulated using the MATLAB/SIMULINK software. Compared to the conventional DTC method, the proposed strategy shows good performance and robustness: the speed overshoot is maintained at a fixed value and the torque ripple is decreased.

Nomenclature

a, b, p0, z0, y, N: approximation parameters
ξ: approximation error (dB)
is(d,q): stator current in d and q axis
is(α,β): stator current in α and β axis
Ld, Lq: inductances in d and q axis
m: fractional order
nb, nh: asymptotic orders at low and high frequency
N: number of zeros and poles
R: set of real numbers
Pi, Zi: pole and zero of the approximation
S(a,b,c): switching states of the two-level converter
Te: electromagnetic torque
Tr: load torque
Uan, Ubn, Ucn: three-phase inverter voltages
Uc: DC-link voltage
V(d,q): voltage in d and q axis
Vs: stator voltage
Vs(α,β): stator voltage in α and β axis

Greek symbols

λ, γ: non-integer orders
θm: phase margin
ωu: gain crossover frequency
Φ(d,q): flux linkage in d and q axis
Φm: permanent magnet flux
Φs: stator flux
Φs(α,β): flux linkage in α and β axis
Ωr: rotor speed

Abbreviations

Fractional order control
PMSM: Permanent Magnet Synchronous Machine
PID: Proportional Integral Derivative controller
PIλDγ: fractional Proportional Integral Derivative controller
DTC: Direct Torque Control
VSI: three-phase Voltage Source Inverter

References

[1] Ladaci, S., Bensafia, Y. (2016). Indirect fractional order pole assignment based adaptive control. Engineering Science and Technology, an International Journal, 19(1): 518-530.
https://doi.org/10.1016/j.jestch.2015.09.004 [2] Mani., P., Rajan., R., Shanmugam, L., Joo, Y.H. (2018). Adaptive fractional fuzzy integral sliding mode control for PMSM model. IEEE Transactions on Fuzzy Systems, 27(8): 1674-1686. https://doi.org/10.1109/TFUZZ.2018.2886169 [3] Rayalla, R., Ambati, R.S., Gara, B.U.B. (2019). An improved fractional filter fractional IMC-PID controller design and analysis for enhanced performance of non-integer order plus time delay processes. European Journal of Electrical Engineering, 21(2): 139-147. http://doi.org/10.18280/ejee.210203 [4] Aguila-Camacho, N., Duarte-Mermoud, M.A. (2013). Fractional adaptive control for an automatic voltage regulator. ISA Transactions, 52(6): 807-815. https://doi.org/10.1016/j.isatra.2013.06.005 [5] Lamba, R., Singla, S.K., Sondhi, S. (2017). Fractional order PID controller for power control in perturbed pressurized heavy water reactor. Nuclear Engineering and Design, 323: 84-94. https://doi.org/10.1016/j.nucengdes.2017.08.013 [6] Asjad, M.I. (2019). Fractional mechanism with power law (singular) and exponential (non-singular) kernels and its applications in bio heat transfer model. International Journal of Heat and Technology, 37(3): 846-852. http://doi.org/10.18280/ijht.370322 [7] Neçaibia, A., Ladaci, S., Charef, A., Loiseau, J.J. (2015). Fractional order extremum seeking approach for maximum power point tracking of photovoltaic panels. Frontiers Energy, 9(1): 43-53. http://doi.org/10.1007/s11708-014-0343-5 [8] Zouggar, E.O., Chaouch, S., Abdeslam, D., Abdelhamid, A. (2019). Sliding control with fuzzy Type-2 controller of wind energy system based on doubly fed induction generator. Instrumentation Mesure Métrologie, 18(2): 137-146. http://doi.org/10.18280/i2m.180207 [9] Liu, H., Pan, Y., Li, S., Chen, Y. (2017). Adaptive fuzzy backstepping control of fractional-order nonlinear systems. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 47(8): 2209-2217. 
https://doi.org/10.1109/TSMC.2016.2640950 [10] Singhal, R., Padhee, S., Kaur, G. (2012). Design of fractional order PID controller for speed control of DC motor. International Journal of Scientific and Research Publication, 2(6): 2250-3153. [11] Ameur, A., Mokhtari, B., Essounbouli, N., Mokrani, L. (2012). Speed sensorless direct torque control of a pmsm drive using space vector modulation based mras and stator resistance estimator. Var. Stator Resist, 1(5). https://doi.org/10.5281/zenodo.1075142 [12] Ammar, A. (2019). Performance improvement of direct torque control for induction motor drive via fuzzy logic-feedback linearization: Simulation and experimental assessment. COMPEL- Int. J. Comput. Math. Electr. Electron. Eng., 38(2): 672-692. https://doi.org/10.1108/COMPEL-04-2018-018 [13] Holakooie, M.H., Ojaghi, M., Taheri, A. (2018). Direct torque control of six-phase induction motor with a novel MRAS-based stator resistance estimator. IEEE Trans. Ind. Electron, 65(10): 7685-7696. https://doi.org/10.1109/TIE.2018.2807410 [14] Kim, J.H., Kim, R.Y. (2018). Sensorless direct torque control using the inductance inflection point for a switched reluctance motor. IEEE Trans. Ind. Electron, 65(12): 9336-9345. https://doi.org/10.1109/TIE.2018.2821632 [15] Araria, R., Negadi, K., Marignetti, F. (2019). Design and analysis of the speed and torque control of IM with DTC based ANN strategy for electric vehicle application. Tec. Ital.-Ital. J. Eng. Sci, 63(2-4): 181-188. http://doi.org/10.18280/ti-ijes.632-410 [16] Liu, Y. (2011). Space vector modulated direct torque control for PMSM. Advances in Computer Science. Intelligent System and Environment, Springer, pp. 225-230. http://doi.org/10.1007/978-3-642-23756-0_37 [17] Mesloub, H., Benchouia, M.T., Goléa, A., Goléa, N., Benbouzid, M.E.H. (2017). A comparative experimental study of direct torque control based on adaptive fuzzy logic controller and particle swarm optimization algorithms of a permanent magnet synchronous motor. 
Int. J. Adv. Manuf. Technol, 90(1-4): 59-72. http://doi.org/10.1007/s00170-016-9092-4 [18] Jin, S., Jin, W.H., Zhang, F.G., Jing, X.D., Xiong, D.M. (2018). Comparative of direct torque control strategies for permanent magnet synchronous motor. 1st International Conference on Electrical Machines and Systems (ICEMS). http://doi.org/10.23919/ICEMS.2018.8549341 [19] Medjmadj, S. (2019). Fault tolerant control of PMSM drive using Luenberger and adaptive back-EMF observers. European Journal of Electrical Engineering, 21(3): 333-339. http://doi.org/10.18280/ejee.210311 [20] Diao, S., Diallo, D., Makni, Z., Marchand, C., Bisson, J.F. (2015). A differential algebraic estimator for sensorless permanent-magnet synchronous machine drive. IEEE Trans. Energy Convers, 30(1): 82-89. https://doi.org/10.1109/TEC.2014.2331080 [21] Izadfar, H.R., Shokri, S., Ardebili, M. (2007). International Conference on Electrical Machines and Systems (ICEMS), pp. 670-674. [22] Xia, C., Wang, S., Gu, X., Yan, Y., Shi, T. (2016). Direct torque control for VSI-PMSM using vector evaluation factor table. IEEE Trans. Ind. Electron, 63(7): 4571-4583. [23] Niu, F., Wang, B., Babel, A.S., Li, K., Strangas, E.G. (2016). Comparative evaluation of direct torque control strategies for permanent magnet synchronous machines. IEEE Trans. Power Electron, 31(2): 1408-1424. https://doi.org/10.1109/TPEL.2015.2421321 [24] Ibtissam, B., Mourad, M., Ammar, M., Fouzi, G. (2014). Magnetic field analysis of Halbach permanent magnetic synchronous machine. In the Proceedings of the International Conference on Control, Engineering & Information Technology (CEIT'14), pp. 12-16. [25] El-Sousy, F.F.M. (2010). Hybrid H∞-neural-network tracking control for permanent-magnet synchronous motor servo drives. IEEE Transactions on Industrial Electronics, 57(9): 3157-3166. https://doi.org/10.1109/TIE.2009.2038331 [26] Harahap, C.R., Saito, R., Yamada, H., Hanamoto, T. (2014).
Speed control of permanent magnet synchronous motor using FPGA for high frequency SiC MOSFET inverter. Journal of Engineering Science and Technology, 11-20. [27] Buja, G.S., Kazmierkowski, M.P. (2004). Direct torque control of PWM inverter-fed AC motors - a survey. IEEE Transactions on Industrial Electronics, 51(4): 744-757. https://doi.org/10.1109/TIE.2004.831717 [28] Charef, A., Assabaa, M., Ladaci, S., Loiseau, J.J. (2013). Fractional order adaptive controller for stabilised systems via high-gain feedback. IET Control Theory Appl, 7(6): 822-828. https://doi.org/10.1049/iet-cta.2012.0309 [29] Charef, A. (2006). Analogue realisation of fractional-order integrator, differentiator and fractional PIλDμ controller. IEE Proc.-Control Theory Appl, 153(6): 714-720. https://doi.org/10.1049/ip-cta:20050019 [30] Bode, H.W. (1945). Network Analysis and Feedback Amplifier Design. R. E. Krieger Pub. Co. [31] Dogruer, T., Tan, N. (2018). PI-PD controllers design using Bode's ideal transfer function. Proceedings of International Conference on Fractional Differentiation and its Applications (ICFDA) 2018. http://doi.org/10.2139/ssrn.3271384 [32] Al-Saggaf, U.M., Mehedi, I.M., Mansouri, R., Bettayeb, M. (2016). State feedback with fractional integral control design based on the Bode's ideal transfer function. International Journal of Systems Science, 47(1): 149-161. https://doi.org/10.1080/00207721.2015.1034299 [33] Djouambi, A., Charef, A., Bouktir, T. (2005). Fractional order robust control and PIαDβ controllers. WSEAS Transactions on Circuits and Systems, 4(8): 850-857.
Johann Jakob Rebstein

Johann Jakob Rebstein (1840–1907) was a Swiss mathematician and surveyor.

Johann Jakob Rebstein, c. 1900
Born: 4 May 1840, Töss, Switzerland
Died: 14 March 1907 (aged 66), Zürich, Switzerland
Nationality: Swiss
Occupation: mathematician

Early life

Rebstein was born on 4 May 1840 in Töss, Switzerland, to his father, a baker, and his mother, a doctor.[1]: 131

Education and career

Rebstein attended post-secondary school in Winterthur and, after graduating in 1860, went on to study for a year at the Collège de France.[1]: 131 He was professor of mathematics and physics in Zürich from 1877 to 1898.[2] He was awarded his doctorate in 1895 from the Humboldt University of Berlin for his work Bestimmung aller reellen Minimalflächen, die eine Schaar ebener Curven enthalten, denen auf der Gauss'schen Kugel die Meridiane entsprechen.[lower-alpha 1][2][3] He is best known for his work in surveying, and for introducing the traverse method in Switzerland. Throughout his career, Rebstein was appointed as surveying expert for a number of cantons, including Thurgau (1863–1881), St.
Gallen (1881–1894), Zürich (1886–1892), and Luzern (1894–1907).[2]: 131 In 1868 he was elected to the Swiss Concordat of Geometers, and served as its president from 1887 until his death in 1907.[1]: 132 In 1905 he was awarded an honorary doctorate from the University of Zürich, for "outstanding contributions to actuarial sciences".[2] Rebstein was a member of the organizing committee for the first meeting of the International Congress of Mathematicians.[1]: 79

Death

Rebstein suffered from kidney disease for the last several years of his life, and died in 1907 in Zürich.[1]: 133

Publications

Rebstein's publications included:[1]: 132
• Lehrbuch über praktische Geometrie mit besonderer Berücksichtigung der Theodolitmessung (1868)
• Die Kartographie der Schweiz, dargestellt in ihrer historischen Entwicklung (1883)
• Mitteilungen über die Stadtvermessung von Zürich (1892)

See also
• List of German-language philosophers

Notes
1. English: Determining All Real Minimal Surfaces That Contain a Family of Planar Curves, Which Correspond to the Meridians on the Gaussian Sphere[2]

References
1. Eminger, Stefanie Ursula. "Carl Friedrich Geiser and Ferdinand Rudio: the men behind the first International Congress of Mathematicians". St Andrews Research Repository. hdl:10023/6536. Retrieved 31 May 2017.
2. "Johann Jakob Rebstein". MacTutor History of Mathematics archive. Retrieved 31 May 2017.
3. "Jacob Rebstein". Mathematics Genealogy Project. Retrieved 31 May 2017.
Applied Maths Research Seminars

2016 Spring Seminars and Abstracts

Why Are Owls So Quiet And What Have They Got To Do With Wind Turbines

Date: Monday 26 January, 2pm, (ARTS 01.02)

Speaker: Lorna Ayton (DAMTP, University of Cambridge)

Abstract: Owls are well-known to be almost silent as they fly. Several features of their wings, including a porous, flexible trailing edge, have been proposed as key reasons why. In this talk I will discuss how a poroelastic (i.e. both porous and flexible) extension to a finite rigid plate can reduce sound scattering, and how this design feature could be implemented to reduce the noise generated by wind turbines. The effects of the length, porosity, and flexibility of the extension are discussed in an attempt to identify the optimal poroelastic extension for noise reduction. Analytical results are obtained using the Wiener-Hopf method.

School Colloquium
Vortex Dynamics in a Superfluid Date: Monday 15th February, 2pm, (ARTS 2.02) Speaker: Cecilia Rorai (University of Cambridge) Abstract: Quantized vortices in superfluids are mobile and interacting topological defects. Two non-parallel quantized vortices annihilate or propagate in two dimensions, whereas they can reconnect in three dimensions. The nature of vortex dynamics is quantum mechanical, involving the atomically thin vortex cores, but it also influences the large scale dynamics of quantum turbulence, causing a tangle of quantum vortices to evolve in time, and eventually decay. I will present results on vortex dipoles and vortex reconnection obtained by integrating the Gross-Pitaevskii equation and compare them to some experimental data. Some Aspects of Vortices in Light- and Matter Waves Date: Monday 22nd February, 2pm, (ARTS 01.02) Speaker: Fabian Maucher (Durham University) Abstract: The first part of the talk will present recent work on nonlinear light propagation in the presence of competing local and nonlocal nonlinearities. Such system could be realized in a gas of thermal alkaline atoms. Apart from spatial soliton formation, the different length scales of the nonlocality can give rise to filamentation and subsequent self-organised lattice formation in the beam profile, akin to the superfluid-supersolid phase transition in Bose-Einstein condensates (BECs). The particular role of optical vorticity in the process of the pattern formation will be emphasized. The second part will focus on exciting knotted vortex lines in BECs. We suggest using a light field containing a knotted vortex line as probe field of a Raman-pulse that drives a coherent two-photon Raman transition of three-level atoms with Lambda-level configuration. We elaborate on experimental feasibility as well as on subsequent dynamics of the matter wave. 
Probing Cosmic Superfluids Date: Monday 29th February, 2pm, (ARTS 01.02) Speaker: Nils Andersson (University of Southampton) Abstract: Neutron stars are the exotic remnants left over after the supernova explosions in which massive stars end their lives. They are associated with a range of phenomena observed in radio, X-rays, gamma-rays and hopefully soon gravitational waves as well. Because of the extreme densities of these stars, where more than the mass of our Sun is compressed inside a 10 kilometre radius, they represent exotic physics that cannot be tested in the laboratory. The state of matter in these systems is hotly debated, but it is generally accepted that they are cold enough to contain superfluid components. There is strong observational support for this idea, but we are still far away from having truly quantitative models. In this talk I will describe the nature of the problem, summarize the key physics and outline the computational modelling required to make progress. PhD Student Talks Date: Monday 7th March, 2pm, (Queens 1.03) Speaker: Awatif Alhowaity Title: Solidification Caused by Under-Cooling Abstract: Many crude oils contain dissolved waxes that can precipitate out of solution and become deposited on the internal walls oil pipes. The waxy oils are transported through very long pipelines from warm walls to cooler conditions in the pipe. An important phenomenon occurring during the under-cooling of the pipeline is the formation of solid matter inside the pipe. The wax deposition is one of the most serious problems, potentially restricting flow and plugging the pipe. However, the wax deposits begin to form when the temperature is below the wax appearance temperature (WAT). We model the particle's growth in the oil pipe once the temperature falls below the WAT. We determine the temperature distribution, formulate and solve the self-similar problem of wax particle growth from a single point. 
A numerical method is used to compute the solution of the initial value problem of diffusion and transport of wax towards the particle. The numerical solution is compared to the self-similar solution. Speaker: Tanmay Inamdar Title: Measures and Slaloms Abstract: Any compact subset of the reals has the following properties: 1) It is compact, 2) It has a linear order which generates its topology, 3) It has a countable dense set, i.e., it is separable. A classical result of G. Cantor implies that these properties in fact characterise compact subsets of the reals. This led M. Suslin to ask if separability above can be somewhat weakened to the property of having the countable chain condition, i.e., that every disjoint family of open sets is countable. This question is now known to be independent of ZFC. General topologists have, however, tried to strengthen Suslin's hypothesis in various ways in order to obtain another characterisation of the compact subsets of the reals. A promising candidate until recently consisted of a strengthening of 'has a linear order' with 'does not map continuously onto an uncountable power of [0,1]', until S. Todorcevic constructed a counterexample in ZFC. We consider a further strengthening of this statement, where 'ccc' is replaced by 'supports a measure'. We show that under an extra axiom, Todorcevic's space can be constructed so as to have a measure, whereas under another (necessarily contradictory) extra axiom, no such space can carry a measure. We also show, without assuming any extra axiom, that such a space can be constructed so as to not support a measure. Various other non-separable compactifications of the natural numbers are constructed. All of this proceeds by an analysis of the effect of forcing with a measure algebra on certain families of subsets of the natural numbers known as 'slaloms' which were used by Todorcevic in his construction. Joint work with Piotr Borodulin-Nadzieja (Wroclaw).
Critically Balanced Rotating and Stratified Turbulence Date: Monday 18th April, 2pm, (JSC 3.02) Speaker: Alexander Schekochihin (University of Oxford) Abstract: I will discuss the principle of critical balance in strong turbulence: the idea that, in media where both nonlinear interactions and linear wave propagation are supported, turbulent cascades organise themselves in such a way that the characteristic times associated with linear propagation and nonlinear decorrelation are comparable scale by scale. This idea originates from theories of magnetohydrodynamic and plasma turbulence, but appears to lead to interesting and plausible conclusions when applied in hydrodynamic contexts as well. I will outline a theory of strongly rotating turbulence as an example of the application of the critical balance. If time permits, I will also show how it works in stratified turbulence. The latter has interesting applications to turbulence in the intergalactic medium, which I may or may not have time to cover. Nazarenko & Schekochihin, JFM 677, 134 (2011) Zhuravleva et al., Nature 515, 85 (2014) A Generalized Model for Optimal Transport of Images Including Dissipation and Density Modulation Date: Monday 16th May, 2pm, (SCI 0.31) Speaker: Carola-Bibiane Schönlieb (Cambridge) Abstract: In this talk I will present a new model in which the optimal transport and the metamorphosis perspectives are combined. For a pair of given input images geodesic paths in the space of images are defined as minimizers of a resulting path energy. To this end, the underlying Riemannian metric measures the rate of transport cost and the rate of viscous dissipation. Furthermore, the model is capable of dealing with strongly varying image contrast and explicitly allows for sources and sinks in the transport equations which are incorporated in the metric related to the metamorphosis approach by Trouvé and Younes.
In the non-viscous case with source term, existence of geodesic paths is proven in the space of measures. The proposed model is explored on the range from merely optimal transport to strongly dissipative dynamics. For this model a robust and effective variational time discretization of geodesic paths is proposed. This requires minimizing a discrete path energy consisting of a sum of consecutive image matching functionals. These functionals are defined on corresponding pairs of intensity functions and on associated pairwise matching deformations. Existence of time-discrete geodesics is demonstrated. Furthermore, a finite element implementation is proposed and applied to instructive test cases and to real images. In the non-viscous case this is compared to the algorithm proposed by Benamou and Brenier, including a discretization of the source term. Finally, the model is generalized to define discrete weighted barycentres with applications to textures and objects. This is joint work with Jan Maas, Martin Rumpf and Stefan Simon. Multiwavelets and Outlier Detection for Troubled-Cell Indication AMENDED Date: Tuesday 24th May, 3pm, (SCI 0.31) Speaker: Thea Vuik (Delft University of Technology) Abstract: In general, solutions of nonlinear hyperbolic PDEs contain shocks or develop discontinuities. One option for improving the numerical treatment of the spurious oscillations that occur near these artifacts is through the application of a limiter. The cells where such treatment is necessary are referred to as troubled cells, which are selected using a troubled-cell indicator. These indicators perform well as long as a suitable, problem-dependent parameter is chosen. The optimal parameter is chosen such that the minimal number of troubled cells is detected and the resulting approximation is free of spurious oscillations. In general, many tests are required to obtain this optimal parameter for each problem. 
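The parameter-free route taken in this talk is to recast indication as outlier detection on the per-cell indicator values via Tukey's boxplot fences: a cell is flagged when its value falls outside $[Q_1 - k\,\mathrm{IQR}, Q_3 + k\,\mathrm{IQR}]$. A minimal sketch of that generic device (synthetic indicator values; illustrative only, not the speaker's algorithm):

```python
import numpy as np

def troubled_cells(indicator, k=1.5):
    """Flag cells whose indicator value lies outside Tukey's boxplot fences.

    indicator: per-cell indicator values (in the talk's setting these would
    be highest-level multiwavelet coefficients).  k: whisker factor (1.5 is
    Tukey's classical choice).  Returns indices of flagged cells."""
    q1, q3 = np.percentile(indicator, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return np.where((indicator < lo) | (indicator > hi))[0]

# Smooth background coefficients with one jump-induced spike (synthetic data).
vals = np.full(100, 1e-6)
vals[57] = 1e-2               # cell adjacent to a discontinuity
print(troubled_cells(vals))   # only the spiked cell is flagged
```

No problem-dependent threshold appears: the fences adapt to the distribution of the indicator values themselves.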
In this presentation, I will introduce a new indication technique based on the multiwavelet decomposition of the approximation. I will show that the multiwavelet coefficients at the highest level can be used to detect discontinuities in the (derivatives of the) DG approximation. In addition, we will see that the sudden increase or decrease of the indicator value with respect to the neighboring values is important for detection. Indication basically reduces to detecting outliers, which is done using Tukey's boxplot approach. We provide an algorithm that can be applied to various troubled-cell indication variables. Using this technique, the problem-dependent parameter that the original indicator requires is no longer necessary, as the parameter will be chosen automatically. Solitary Waves on a Ferrofluid Jet Date: Wednesday 8th June, 1:30pm, (SCI 0.31) Speaker: Dag Nilsson (Lund University) Abstract: We consider a current-carrying rod surrounded by a ferromagnetic fluid in the presence of a magnetic field. In such a setup one can consider waves on the ferrofluid jet, and in particular solitary waves. In this talk I will present some new results regarding existence of solitary waves on a ferrofluid jet. As in the case of water waves, the governing equations for this problem can be written as a free boundary value problem with nonlinear boundary conditions. Instead of working with these equations directly, we use a spatial dynamics approach and formulate the problem as an evolution equation. This equation is then studied by using the center manifold theorem. These techniques have previously been applied successfully in order to find solitary water waves. 
This talk is based on joint work with Professor Mark Groves from Saarland University. Autumn 2015 Seminars and Abstracts Research Challenges in Applied Mathematics to Wave Power Date: Monday 28th September, 2pm, (SCI 0.31) Speaker: Dr Emiliano Renzi (Loughborough) Abstract: Deep understanding and detailed investigation of ocean waves cannot be achieved without the help of mathematics. In this seminar, I will attempt to show how accurate formulation, solution and analysis of the solution provide a significant physical insight on wave power extraction. My talk will focus on the Oyster wave energy converter developed by Aquamarine Power, which is believed to have generated the highest sustained power output of any wave machine in the world (www.aquamarinepower.com). Optimal Error Estimates for Discontinuous Galerkin Methods Based On Upwind-Biased Fluxes for Linear Hyperbolic Equations Date: Monday 5th October, 2pm, (ARTS 01.02) Speaker: Dr Xiong Meng (UEA) Abstract: In this talk, we will analyze discontinuous Galerkin methods using upwind-biased numerical fluxes for time-dependent linear conservation laws. In one dimension, optimal a priori error estimates of order $k+1$ are obtained for the semi-discrete scheme when piecewise polynomials of degree at most $k$ ($k \geq 0$) are used. Our analysis is valid for arbitrary nonuniform regular meshes and for both periodic boundary conditions and initial-boundary value problems. We extend the analysis to the multidimensional case on Cartesian meshes when piecewise tensor product polynomials are used, and to the fully discrete scheme with explicit Runge--Kutta time discretization. Numerical experiments are shown to demonstrate the theoretical results. This is joint work with Chi-Wang Shu and Boying Wu. Complex Solutions of the Navier-Stokes Equations Date: Monday 12th October, 2pm, (ARTS 01.02) Speaker: Prof. 
Jonathon Mestel (Imperial) Abstract: It is well known that low-Reynolds-number flows ($\mathit{Re} \ll 1$) have unique solutions, but this statement may not be true if complex solutions are permitted. We begin by considering Stokes series, where a general steady velocity field is expanded as a power series in the Reynolds number. At each order, a linear problem determines the coefficient functions, providing an exact closed-form representation of the solution for all Reynolds numbers. However, typically the convergence of this series is limited by singularities in the complex $\mathit{Re}$ plane. We employ a generalised Padé approximant technique to continue analytically the solution outside the circle of convergence of the series. This identifies other solution branches, some of them complex. These new solution branches can be followed as they boldly go where no flow has gone before. Sometimes these complex solution branches coalesce, giving rise to real solution branches. It is shown that often, an unforced, nonlinear complex "eigensolution" exists, which implies a formal non-uniqueness, even for small and positive $\mathit{Re}$. Extensive reference will be made to Dean flow in a slowly curved pipe, but also to flows between concentric, differentially rotating spheres, and to convection in a slot. In addition, certain fundamental exact solutions are shown to possess extra complex solutions. This is joint work with Florencia Boshier. Ellipsoidal Harmonics and their Applications Date: Monday 19 October, 2pm, (ARTS 01.02) Speaker: Prof. Ioannis Chatzigeorgiou (UEA) Abstract: Ellipsoidal harmonics are the products of the separable solutions of the Laplace equation in an ellipsoidal coordinate system. Use of the separation of variables of the Laplace equation in an ellipsoidal system results in the so-called Lamé equations, which accordingly provide the Lamé functions and finally the ellipsoidal harmonics. 
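Returning to the analytic continuation used in the Navier-Stokes talk above: a classical $[m/n]$ Padé approximant can be built from Taylor coefficients by solving a small linear system, and it often converges well beyond the radius of convergence of the series. A plain illustrative sketch of the textbook construction (not the speakers' generalised technique), applied to $\log(1+x)$, whose Taylor series diverges for $x > 1$:

```python
import numpy as np

def pade(c, m, n):
    """Compute the [m/n] Padé approximant from Taylor coefficients c[0..m+n].
    Returns (a, b): numerator and denominator coefficients, with b[0] = 1."""
    # Denominator coefficients b[1..n] solve the linear system
    #   sum_{j=1..n} b_j c[m+k-j] = -c[m+k],  k = 1..n.
    C = np.array([[c[m + k - j] if m + k - j >= 0 else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(C, rhs)))
    # Numerator coefficients match the first m+1 Taylor coefficients.
    a = np.array([sum(b[j] * c[i - j] for j in range(0, min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

def pade_eval(a, b, x):
    return np.polyval(a[::-1], x) / np.polyval(b[::-1], x)

# Taylor coefficients of log(1+x): 0, 1, -1/2, 1/3, -1/4, ...
c = [0.0] + [(-1.0) ** (k + 1) / k for k in range(1, 7)]
a, b = pade(c, 3, 3)
# The truncated Taylor series is useless at x = 2; the [3/3] Padé is not.
print(abs(pade_eval(a, b, 2.0) - np.log(3.0)))  # small error
```

The speakers' generalised approximants play the analogous role in the complex $\mathit{Re}$ plane, where branch points limit the Stokes series.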
The problem with Lamé functions (apart from the limited literature on the subject) is that they cannot be calculated in closed form. The reason is that their derivation requires the computation of their characteristic values, which are given as solutions of characteristic polynomials. Therefore the only way to derive Lamé functions for arbitrary degree and order is to apply an efficient numerical procedure. In the seminar the speaker will present such a methodology, which is able to determine Lamé functions (of the first and the second kind) for indefinitely large degree and order. Having calculated the Lamé functions, one is able to apply them to various fields of applied mathematics to tackle practical problems. In this context, formulations and results will be presented for the hydrodynamics of fish-like shapes moving close to a wall or in the centre of a channel. Some results will also be presented for a completely different problem that deals with electroencephalography for brain imaging. Cell-Based Modelling for Wound Contraction and Angiogenesis Speaker: Fred Vermolen (TU Delft) Abstract: Wound contraction and angiogenesis are biological processes that often take place during healing of wounds and in tumor development. To model these processes, one distinguishes between different types of models, which are descriptive at several scales, ranging from the cellular scale (micro-scale) to the tissue scale (macro-scale). The models on the macro-scale are based on continuum hypotheses, which means that one sets up and solves partial differential equations with the associated boundary and initial conditions. On the smallest scale one models all kinds of cell phenomena on a molecular level. In this talk, we will consider colonies of cells, which are treated as discrete entities, as well as chemical and mechanical signals that are modelled as sets of partial differential equations. Hence, the current approach is a hybrid one. 
The process of angiogenesis, which is the formation of a vascular network in tissues, is often modelled using principles based on cell densities in a continuum approach, or on a hybrid cellular-continuum level where one uses cellular automata (in particular cellular Potts) models. In this study, we abandon the lattice needed to model the cell positions in cellular automata modelling and instead apply a continuous cell-based approach to simulate three-dimensional angiogenesis. Next to the application of this modelling strategy to angiogenesis, we discuss the application of the formalism to wound contraction. The talk will describe some of the mathematical issues encountered in these models, and further some animations will be shown to illustrate the potential merits of our approaches. RANS Based Boundary Layer Transition Modelling Using Laminar Kinetic Energy Concept for Wind Energy Applications Date: Monday 2nd November, 2pm, (ARTS 01.02) Speaker: Dr Zhengzhong Sun (City University London) Abstract: The talk presents a numerical investigation of boundary layer transition on the wind turbine airfoil DU91-W2-250 at chord-based Reynolds number $\mathit{Re}_c = 1 \times 10^6$ using the RANS-based transition model with the laminar kinetic energy concept. The $kL$–$k_T$–$\omega$ transition model is first validated at an angle of attack of $6.24^\circ$ against wind tunnel measurement in terms of lift and drag coefficients, surface pressure distribution and transition location. Observation of the flow field in the vicinity of the transition location identifies a separation bubble, which gives rise to the laminar-turbulent transition. Due to the low-Reynolds-number nature of this transition model, study of the entire transition process is possible, and the boundary layer evolution across the transition is analysed. The effect of the angle of attack (AoA) on transition taking place on the wind turbine airfoil is finally studied. 
By increasing the angle of attack from $3^\circ$ to $10^\circ$, the transition locations are predicted in close agreement with measurement. Lagrangian Modelling of Water Waves Date: Monday 9th November, 2pm, (ARTS 01.02) Speaker: Dr Eugeny Buldakov (UCL) Abstract: The main difficulty for Eulerian numerical solvers in water wave modelling is the changing shape of the computational domain. There are numerous methods of dealing with this difficulty, all of them invariably leading to considerable complication of the solvers. The natural solution is to use the equations of fluid motion in Lagrangian form, which -- though in some cases more complicated than their Eulerian counterparts -- are solved in the fixed domain of Lagrangian coordinates. The presentation discusses the development and application of a numerical solver which uses a simple finite-difference technique applied directly to the Lagrangian equations of fluid motion. Stability of the developed numerical scheme and the numerical dispersion relation are analysed, and drawbacks of the scheme are discussed. A 2D version of the solver is developed and applied to a range of test cases including violent sloshing, tsunami runup, evolution of breaking wave groups and waves over sheared currents. Comparison with flume experiments demonstrates that the solver is able to model the evolution of highly non-linear waves with good accuracy. However, numerical dispersion may lead to a considerable phase error. Finally, directions of further development and methods to improve model accuracy are discussed. 
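The phase error from numerical dispersion mentioned above is generic to finite-difference discretizations. As a standalone textbook illustration (not the speaker's scheme): for linear advection $u_t + c u_x = 0$, the second-order centred difference in space propagates a Fourier mode of wavenumber $k$ at the reduced speed $c \sin(k\Delta x)/(k\Delta x)$, so poorly resolved waves lag and accumulate phase error:

```python
import numpy as np

# Semi-discrete dispersion analysis: substituting u_j = exp(i(k*j*dx - w*t))
# into the centred-difference scheme gives w = c*sin(k*dx)/dx, hence a
# numerical phase speed c*sin(k*dx)/(k*dx) instead of the exact speed c.
def phase_speed_ratio(k_dx):
    """Ratio of numerical to exact phase speed, as a function of k*dx."""
    return np.sin(k_dx) / k_dx

for k_dx in (0.1, 0.5, 1.0, 2.0):
    print(f"k*dx = {k_dx:3.1f}: c_num/c = {phase_speed_ratio(k_dx):.4f}")
```

Well-resolved modes ($k\Delta x \ll 1$) travel at nearly the correct speed; short waves are increasingly retarded, which is one mechanism behind the phase error reported in the talk.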
Ensemble Visualization and Uncertainty Characterization Using Generalized Notions of Data Depth Date: Wednesday 11th November, 2pm, (ARTS 01.02) Speaker: Mike Kirby (University of Utah) Abstract: When computational methods or predictive simulations are used to model complex phenomena such as dynamics of physical systems, researchers, analysts and decision makers are not only interested in understanding the data but also in understanding the uncertainty present in the data. In such situations, using ensembles is a common approach to account for the uncertainty or, in a broader sense, explore the possible outcomes of a model. Visualization, as an integral component of the data-analysis task, can significantly facilitate the communication of the characteristics of an ensemble, including uncertainty information. Designing visualization schemes suitable for exploration of ensembles is especially challenging if the quantities of interest are derived feature-sets such as isocontours or streamlines rather than fields of data. In this talk, I will introduce novel ensemble visualization paradigms that use a class of nonparametric statistical analysis techniques called data depth to derive robust statistical summaries from an ensemble of feature-sets (from scalar or vector fields). This class of visualization techniques is based on the generalization of conventional univariate boxplots. Generalizations of the boxplot provide an intuitive yet rigorous approach to studying variability while preserving the main features shared among the members. They also aid in highlighting descriptive information such as the most representative ensemble member (median) and potential outlying members. The nonparametric nature and robustness of data depth analysis and boxplot visualization make it an advantageous approach to studying uncertainty in various applications ranging from image analysis to fluid simulation to weather and climate modeling. 
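Functional boxplots of the kind described above are commonly built on a notion of band depth for ensembles of curves (in the style of López-Pintado and Romo). A brute-force sketch of band depth with bands formed by pairs of members (illustrative only; not the speaker's generalized construction, and real implementations use the faster modified band depth):

```python
import numpy as np

def band_depth(curves):
    """Band depth (J = 2) of each member of an ensemble of 1-D curves.

    curves: array of shape (n_members, n_points).  A member's depth is the
    fraction of pairs of *other* members whose pointwise envelope (band)
    contains it entirely.  Deep members are central; shallow ones are
    candidate outliers."""
    n = len(curves)
    depth = np.zeros(n)
    for i in range(n):
        inside = total = 0
        for j in range(n):
            for k in range(j + 1, n):
                if i in (j, k):
                    continue
                lo = np.minimum(curves[j], curves[k])
                hi = np.maximum(curves[j], curves[k])
                inside += np.all((lo <= curves[i]) & (curves[i] <= hi))
                total += 1
        depth[i] = inside / total
    return depth

# Five ensemble members: four similar sinusoids and one outlier.
x = np.linspace(0, 2 * np.pi, 50)
curves = np.outer([0.9, 0.95, 1.0, 1.05, 5.0], np.sin(x))
depth = band_depth(curves)
print(depth)            # the middle member is deepest; the outlier has depth 0
print(np.argmax(depth)) # index of the ensemble "median"
```

The deepest member plays the role of the median in a functional boxplot, and low-depth members are flagged as outliers; the talk generalizes this idea to richer feature-sets such as isocontours and streamlines.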
This is joint work with Mahsa Mirzargar and Ross Whitaker. An Introduction to the Hybridized Discontinuous Galerkin Method Date: Monday 16th November, 2pm, (ARTS 01.02) Speaker: Dr Liangyue Ji Abstract: The hybridized discontinuous Galerkin (HDG) method was first introduced for linear second-order parabolic equations by Prof. Bernardo Cockburn and his collaborators. The main favourable feature of these methods is that their approximate solutions can be expressed in an element-by-element fashion in terms of a numerical trace. In this framework, the number of globally coupled degrees of freedom, which involves only those of the numerical trace on the element faces, is significantly reduced compared to standard DG methods, leading to an efficient implementation. Moreover, the super-convergence property of the numerical trace allows computation of a new, enhanced approximation by a post-processing technique. In this talk, I will discuss this new characterization of the approximate solution given by the HDG method for second-order parabolic equations. A Unified Analysis of Algebraic Flux-Correction Schemes Speaker: Dr Gabriel Barrenechea (Strathclyde) Abstract: In this talk I will review recent results on the mathematical analysis of algebraic flux-correction (AFC) schemes. The schemes are designed mainly to preserve a discrete version of the maximum principle, and, unlike usual stabilised finite element schemes, AFC schemes do not start from a weak formulation, but rather they only "see" the linear system resulting from the discretisation. This lack of a weak formulation is one of the main issues that have prevented a mathematical analysis of this class of schemes. The first step, then, is to write the scheme in a weak form and obtain stability results. Positivity of the scheme is proved under appropriate conditions on the mesh, and convergence results are proved. 
Control of Falling Liquid Films Date: Monday 7th December, 2pm, (ARTS 01.02) Speaker: Dr Alice Thompson (Imperial) Abstract: The flow of a fluid layer with one interface exposed to the air and the other in contact with an inclined planar wall becomes unstable due to inertial effects when the fluid layer is sufficiently thick or the slope sufficiently steep. This free-surface flow of a single fluid layer has industrial applications including coating and heat transfer, which benefit from smooth and wavy interfaces, respectively. In this talk, I will discuss how the dynamics of the system can be altered by introducing deliberately spatially-varying or time-dependent perturbations via a shaped wall, chemical coatings, or the injection of fluid through the wall. I will focus on the case of fluid injection, and compare the effect on the flow dynamics of choosing steady non-zero injection, or using the injection as a responsive control mechanism. I will show that applying steady spatially-periodic injection always leads to non-uniform states, can enable new bifurcations and complicated time-dependent behaviour, and significantly alters the trajectories of particles in the flow. In the second case, I will demonstrate that using injection as a feedback control mechanism based on real-time observations of the interface is remarkably effective, even when combined with localised feedback and actuation. Furthermore, the controls can be used to drive the system towards arbitrary steady states and travelling waves, and the qualitative effects are independent of the details of the flow modelling. 
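The gist of the feedback mechanism can be caricatured on a single linearized mode: actuation proportional to the observed state shifts an unstable growth rate to a stable one. A deliberately minimal toy sketch (mine, not the talk's PDE-based controller; the growth rate and gain values are invented):

```python
# Toy analogue of interface feedback control: one unstable perturbation mode
# x' = s*x with s > 0, and injection modelled as actuation u = -k*x based on
# real-time observation of x.  With gain k > s the controlled mode decays.
def simulate(s, k, x0=1.0, dt=1e-3, T=5.0):
    x = x0
    for _ in range(int(T / dt)):
        u = -k * x                 # feedback from the observed state
        x += dt * (s * x + u)      # forward-Euler step of x' = s*x + u
    return x

print(abs(simulate(s=1.0, k=0.0)))   # uncontrolled: grows roughly like e^5
print(abs(simulate(s=1.0, k=2.0)))   # controlled: decays toward zero
```

The talk's setting is far richer (spatially distributed injection, localised observations, targeting of non-trivial states), but the mode-by-mode stabilization intuition is the same.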
\begin{definition}[Definition:Concentric Circle Topology] (Figure omitted: an open set of $T$.) Let $C_1$ and $C_2$ be concentric circles in the Cartesian plane $\R^2$ such that $C_1$ is inside $C_2$. Let $S = C_1 \cup C_2$. Let $\BB$ be the set of sets consisting of: :all singleton subsets of $C_2$ :all open arcs of $C_1$, each together with its projection from the center of the circles onto $C_2$, except for the projection of its midpoint. $\BB$ is then taken to be the sub-basis for a topology $\tau$ on $S$. Thus $\tau$ is referred to as the '''concentric circle topology'''. The topological space $T = \struct {S, \tau}$ is referred to as the '''concentric circle space'''. \end{definition}
\begin{document} \RUNAUTHOR{Vayanos, Jin, and Elissaios} \RUNTITLE{ROC++: Robust Optimization in C++} \TITLE{RO\cpluspluslogo: Robust Optimization in \cpluspluslogo} \ARTICLEAUTHORS{ \AUTHOR{Phebe Vayanos,\textsuperscript{\textdagger} Qing Jin,\textsuperscript{\textdagger} and George Elissaios} \AFF{\textsuperscript{\textdagger}University of Southern California, CAIS Center for Artificial Intelligence in Society \\ {\texttt{\{phebe.vayanos,qingjin\}@usc.edu}}} } \ABSTRACT{ Over the last two decades, robust optimization techniques have emerged as a very popular means to address decision-making problems affected by uncertainty. Their success has been fueled by their attractive robustness and scalability properties, by ease of modeling, and by the limited assumptions they need about the uncertain parameters to yield meaningful solutions. Robust optimization techniques are available which can address both single- and multi-stage decision-making problems involving real-valued and/or binary decisions, and affected by both exogenous (decision-independent) and endogenous (decision-dependent) uncertain parameters. Many of these techniques apply to problems with either robust (worst-case) or stochastic (expectation) objectives and can thus be tailored to the risk preferences of the decision-maker. Robust optimization techniques rely on duality theory (potentially augmented with approximations) to transform a semi-infinite optimization problem to a finite program of benign complexity (the ``robust counterpart''). While writing down the model for a robust or stochastic optimization problem is usually a simple task, obtaining the robust counterpart requires expertise in robust optimization. To date, very few solutions are available that can facilitate the modeling and solution of such problems. This has been a major impediment to their being put to practical use. 
In this paper, we propose RO\cpluspluslogo{}, a \cpluspluslogo~based platform for automatic robust optimization, applicable to a wide array of single- and multi-stage stochastic and robust problems with both exogenous and endogenous uncertain parameters. Our platform naturally extends existing off-the-shelf deterministic optimization platforms. We also propose the ROB file format that generalizes the LP file format to robust optimization. We showcase the modeling power of RO\cpluspluslogo{} on several decision-making problems of practical interest. Our platform can help streamline the modeling and solution of stochastic and robust optimization problems for both researchers and practitioners. It comes with detailed documentation to facilitate its use and expansion. RO\cpluspluslogo{} is freely distributed for academic use at \url{https://sites.google.com/usc.edu/robust-opt-cpp/}. } \KEYWORDS{robust optimization, sequential decision-making under uncertainty, exogenous uncertainty, endogenous uncertainty, decision-dependent uncertainty, decision-dependent information discovery.} \maketitle \section{Introduction} \label{sec:introduction} \subsection{Background \& Motivation} Decision-making problems involving \emph{uncertain} or \emph{unknown} parameters are faced routinely by individuals, firms, policy-makers, and governments. Uncertain parameters may correspond to prediction errors, measurement errors, or implementation errors, see e.g., \cite{BenTal_Book}. Prediction errors arise when some of the data elements have not yet materialized at the time of decision-making and must thus be predicted/estimated (e.g., future prices of stocks, future demand, or future weather). Measurement errors arise when some of the data elements (e.g., characteristics of raw materials) cannot be precisely measured (due to e.g., limitations of the technological devices available). 
Implementation errors arise when some of the decisions may not be implemented exactly as planned/recommended by the optimization (due to e.g., physical constraints). If all decisions must be made \emph{before} the uncertain parameters are revealed, the decision-making problem is referred to as \emph{static} or \emph{single-stage}. In contrast, if the uncertain parameters are revealed sequentially over time and decisions are allowed to adapt to the history of observations, the decision-making problem is referred to as \emph{adaptive} or \emph{multi-stage}. In sequential decision-making problems, the time of revelation of the uncertain parameters may either be known a-priori or it may be part of the decision space. Uncertain parameters whose time of revelation is known in advance, being \emph{in}dependent of the decision-maker's actions, are referred to as \emph{exogenous}. Uncertain parameters whose time of revelation can be controlled by the decision-maker are referred to as \emph{endogenous}. This terminology was originally coined by~\cite{Jonsbraten_thesis}. Examples of decision-making problems involving exogenous uncertain parameters are: financial portfolio optimization (see e.g., \cite{PortfolioSelection_Markowitz1952}), inventory and supply-chain management (see e.g., \cite{Scarf_Inventory}), vehicle routing (\cite{Bertsimas_1991_VRP}), unit commitment (see e.g., \cite{Takriti_1996}), and option pricing (see e.g., \cite{Kuhn_SwingOptions}). Examples of decision-making problems involving endogenous uncertain parameters are: R\&D project portfolio optimization (see e.g., \cite{Solak_RandDportfolios}), clinical trial planning (see e.g., \cite{Colvin_ClinicalTrials}), offshore oilfield exploration (see e.g., \cite{GoelGrossmanGasFields}), best box and Pandora's box problems (see e.g., \cite{Weitzman_1979}), and preference elicitation (see e.g., \cite{Vayanos_ActivePreferences}). 
\subsection{Stochastic \& Robust Optimization} Whether the decision-making problem is affected by exogenous and/or endogenous uncertain parameters, it is well known that ignoring uncertainty altogether when deciding on the actions to take usually results in suboptimal or even infeasible actions. To this end, researchers in stochastic and robust optimization have devised optimization-based models and solution approaches that explicitly capture the uncertain nature of these parameters. These frameworks model decisions as functions (\emph{decision rules}) of the history of observations, capturing the adaptive and non-anticipative nature of the decision-making process. Stochastic optimization, also known as stochastic programming, assumes that the distribution of the uncertain parameters is perfectly known, see e.g., \cite{StochasticProgrammingBook,SP_prekopa,Birge_Book}, and~\cite{SPshapiro_book}. This assumption is well justified in many situations. For example, this is the case if the distribution is stationary and can be well estimated from historical data. If the distribution of the uncertain parameters is discrete, the stochastic program admits a \emph{deterministic equivalent} that can be solved with off-the-shelf solvers potentially augmented with decomposition techniques, see e.g., \cite{benders1962}, or dedicated algorithms, see e.g., \cite{Rockafellar1991}. If the distribution of the uncertain parameters is continuous, the reformulation of the uncertain optimization problem may or may not be computationally tractable since even evaluating the objective function usually requires computing a high-dimensional integral. If this problem is not computationally tractable, discretization approaches (such as the sample average approximation) may be employed. While discretization appears as a promising approach for smaller problems, it may result in a combinatorial state explosion when applied to large- and medium-sized problems. 
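For concreteness, consider the discrete-distribution case mentioned above (this illustration is ours and is not drawn from the cited references): with scenarios $\xi_s$ occurring with probabilities $p_s$, $s = 1, \dots, S$, a two-stage stochastic linear program admits the deterministic equivalent
\[
\min_{x, \, y_1, \dots, y_S} \; c^\top x + \sum_{s=1}^S p_s \, q^\top y_s \quad \mathrm{s.t.} \quad A x \geq b, \quad T_s x + W y_s \geq h_s, \quad s = 1, \dots, S,
\]
whose size grows linearly in $S$. In the multi-stage case, $S$ itself grows exponentially with the number of stages under scenario-tree discretization, which is the source of the combinatorial state explosion alluded to above.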
Conversely, using only very few discretization points can result in solutions that are suboptimal or may even fail to be implementable in practice. Over the last two decades, stochastic programming techniques have been extended to address problems involving endogenous uncertain parameters, see e.g., \cite{GoelGrossmanGasFields,GoelGrossmann_BandB_DDU,GoelGrossman_ClassStochastic_DDU,GoelGrossman_NovelBB_GasFields,GuptaGrossman_SolutionStrategies,TarhanGrossmanGoel_Nonconvex_DDU,Colvin_ClinicalTrials,Colvin_TestingTasks,Colvin_Pharmaceutical}. We refer the reader to~\cite{StochasticProgrammingBook,SP_prekopa,Birge_Book}, and~\cite{SPshapiro_book} for in-depth reviews of the field of stochastic programming. Robust optimization does not necessitate knowledge of the distribution of the uncertain parameters. Rather than modeling uncertainty by means of distributions, it merely assumes that the uncertain parameters belong in a so-called \emph{uncertainty set}. The decision-maker then seeks to be immunized against all possible realizations of the uncertain parameters in this set. The robust optimization paradigm gained significant traction starting in the late 1990s and early 2000s following the works of \cite{RSols_UncertainLinear_Programs_BT,BenTalNemirovski_RCO,RobustLP_UncertainData,AdjustableRSols_uncertainLP}, and~\cite{bertsimas2003robust,Price_Robustness,Tractable_R_SOCP}, among others. Over the last two decades, research on robust optimization has burgeoned, fueled by the limited assumptions it needs about the uncertain parameters to yield meaningful solutions, by its attractive robustness and scalability properties, and by ease of modelling, see e.g., \cite{TheoryApplicationRO,GYdH15:practical_guide}. Robust optimization techniques are available which can address both single- and multi-stage decision-making problems involving real-valued and/or binary decisions, and affected by exogenous and/or endogenous uncertain parameters. 
Techniques for single-stage robust optimization rely on duality theory to transform a semi-infinite optimization problem to an equivalent finite program of benign complexity (the ``robust counterpart'') that is solvable with off-the-shelf solvers, see e.g.,~\cite{BenTal_Book}. In the multi-stage setting, the dualization step is usually preceded by an approximation step that transforms the multi-stage problem to a single-stage robust program. The idea is to restrict the space of the decisions to a subset of benign complexity based either on a decision rule approximation or a finite adaptability approximation. The decision rule approximation consists in restricting the adjustable decisions to those presenting e.g., linear, piecewise linear, or polynomial dependence on the uncertain parameters, see e.g.,~\cite{AdjustableRSols_uncertainLP,kuhn_primal_dual_rules,Bertsimas_polynomial_policies,ConstraintSampling_VKR,Angelos_Liftings}. The finite adaptability approximation consists in selecting a finite number of candidate strategies today and selecting the best of those strategies in an adaptive fashion once the uncertain parameters are revealed, see e.g., \cite{Caramanis_FiniteAdaptability,Hanasusanto2015}. The decision rule and finite adaptability approximations have been extended to the endogenous uncertainty setting, see~\cite{DDI_VKB,vayanos_ROInfoDiscovery}. While writing down the model for a robust optimization problem is usually a simple task (akin to formulating a deterministic optimization problem), obtaining the robust counterpart is typically tedious and requires expertise in robust optimization, see~\cite{BenTal_Book}. Robust optimization techniques have been extended to address certain classes of stochastic programming problems involving continuously distributed uncertain parameters and affected by both exogenous uncertainty (see \cite{kuhn_primal_dual_rules,Bodur2018}) and endogenous uncertainty (see~\cite{DDI_VKB}). 
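To illustrate the dualization step in its simplest form (a standard textbook example, see e.g., \cite{BenTal_Book}), consider the robust linear constraint $(\bar{a} + P \zeta)^\top x \leq b$ for all $\zeta$ with $\|\zeta\|_\infty \leq 1$. Maximizing the left-hand side over the uncertainty set gives $\max_{\|\zeta\|_\infty \leq 1} \zeta^\top P^\top x = \| P^\top x \|_1$, so the semi-infinite constraint is equivalent to the finite, linear-programming-representable robust counterpart
\[
\bar{a}^\top x + \| P^\top x \|_1 \leq b.
\]
Automating reformulations of precisely this kind, for far richer problem classes, is the task that RO\cpluspluslogo{} takes over from the modeler.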
Robust optimization techniques have been used successfully to address single-stage problems in inventory management (\cite{AJD16:inventory}), network optimization (\cite{bertsimas2003robust}), product pricing (\cite{Adida_DP_MP,Thiele_MPP_JRPM}), portfolio optimization (\cite{Robust_Portfolio_Management,RobustPortfolioSelection}), and healthcare (\cite{gupta2017maximizing,BTV_kidneys,ChanDefibrillator2018}). They have also been used to successfully tackle sequential problems in energy (\cite{ZWWG13:multistage_robust_unit_commitment,Jiang2014}), inventory and supply-chain management (\cite{BenTal_RSFC,mamani2016closed}), network optimization (\cite{Atamturk_NetworkDesign}), preference elicitation (\cite{Vayanos_ActivePreferences}), vehicle routing (\cite{Gounaris_RobustVehicleRouting}), process scheduling (\cite{Lappas2016}), and R\&D project portfolio optimization (\cite{vayanos_ROInfoDiscovery}). In spite of its success at addressing a diverse pool of problems in the literature, to date, very few platforms are available that can facilitate the modeling and solution of robust optimization problems, and those available can only tackle limited classes of robust problems. At the same time, and as mentioned above, reformulating such problems in a way that they can be solved by off-the-shelf solvers requires expertise. This is particularly true in the case of multi-stage problems and of problems affected by endogenous uncertainty. In this paper, we fill this gap by proposing RO\cpluspluslogo, a \cpluspluslogo~based platform for modeling, approximating, automatically reformulating, and solving general classes of robust optimization problems. 
Our platform provides several modeling objects (decision variables, uncertain parameters, constraints, optimization problems) and overloaded operators to allow for ease of modeling using a syntax similar to that of state-of-the-art \emph{deterministic optimization} solvers like CPLEX\footnote{\url{https://www.ibm.com/analytics/cplex-optimizer}} or Gurobi.\footnote{\url{https://www.gurobi.com}} While our platform is not exhaustive, it provides a framework that is easy to update and expand and lays the foundation for more development to help facilitate research in, and real-life applications of, robust optimization. \subsection{Related Literature} \paragraph{Tools for Modelling and Solving Deterministic Optimization Problems.} There exist many commercial and open-source tools for modeling and solving deterministic optimization problems. On the commercial front, the most popular solvers for conic (integer) optimization are~Gurobi,\footnote{See \url{https://www.gurobi.com}.} IBM CPLEX Optimizer,\footnote{See \url{https://www.ibm.com/analytics/cplex-optimizer}.} and Mosek.\footnote{See \url{https://www.mosek.com}.} These solvers provide interfaces for C/\cpluspluslogo, Python, and other commonly used high-level languages. Several tools such as AMPL,\footnote{See \url{https://ampl.com}} GAMS,\footnote{See \url{https://www.gams.com}.} and AIMMS\footnote{See \url{https://www.aimms.com}.} are based on dedicated modelling languages. They provide APIs for \cpluspluslogo, C\#, Java, and Python. They can also connect to commercial or open-source solvers. 
Finally, several commercial vendors provide modeling capabilities combined with built-in solvers, see e.g., Lindo Systems Inc.,\footnote{See \url{https://www.lindo.com}.} FrontlineSolvers,\footnote{See \url{https://www.solver.com/}.} and Maximal.\footnote{See \url{http://www.maximalsoftware.com/}.} On the open source side, the most popular solvers are GLPK,\footnote{See \url{https://www.gnu.org/software/glpk/}.} and Cbc\footnote{See \url{https://github.com/coin-or/Cbc}} and Clp\footnote{See \url{https://github.com/coin-or/Clp}} from COIN-OR.\footnote{See \url{https://www.coin-or.org}} Commercial and open-source solvers can also be accessed from several open-source modeling languages for mathematical optimization that are embedded in popular algebraic modeling languages. These include JuMP which is embedded in Julia, see \cite{JuMP}, and Yalmip and CVX which are embedded in MATLAB, see \cite{YALMIP} and \cite{cvx,gb08}, respectively. \paragraph{Tools for Modelling and Solving Stochastic Optimization Problems.} Several of the commercial vendors also provide modeling capabilities for stochastic programming, see e.g., Lindo Systems Inc.,\footnote{See \url{https://www.lindo.com/index.php/products/lingo-and-optimization-modeling}.} FrontlineSolvers,\footnote{See \url{https://www.solver.com/risk-solver-stochastic-libraries}.} Maximal,\footnote{See \url{http://www.maximalsoftware.com/maximal/news/stochastic.html}.} GAMS,\footnote{See \url{https://www.gams.com/latest/docs/UG_EMP_SP.html}.} AMPL (see \cite{SAMPL}), and AIMMS.\footnote{See \url{https://www.aimms.com}.} On the other hand, there are only two open-source platforms that we are aware of that provide such capabilities. The first one is FLOP\cpluspluslogo,\footnote{See \url{https://projects.coin-or.org/FlopC++}.} which is part of COIN-OR. It provides an algebraic modeling environment in \cpluspluslogo\ that is similar to languages such as GAMS and AMPL. 
The second one is PySP\footnote{See \url{https://pyomo.readthedocs.io/en/stable/modeling_extensions/pysp.html}.} which is based on the Python high-level programming language, see \cite{pysp2012}. To express a stochastic program in PySP, the user specifies both the deterministic base model and the scenario tree model in the Pyomo open-source algebraic modeling language, see~\cite{pyomo2009,pyomo2011,pyomo2012}. All the aforementioned tools assume that the distribution of the uncertain parameters in the optimization problem is discrete or provide mechanisms for generating samples from a continuous distribution to feed into the model. \paragraph{Tools for Modelling and Solving Robust Optimization Problems.} Our robust optimization platform RO\cpluspluslogo{} most closely relates to several open-source tools released in recent years. All of these tools present a similar structure: they provide a modeling platform combined with an approximation/reformulation toolkit that can automatically obtain the robust counterpart, which is then solved using existing open-source and/or commercial solvers. The majority of these platforms are based on the MATLAB modeling language. One tool builds upon YALMIP, see~\cite{lofberg2012}, and provides support for single-stage problems with exogenous uncertainty. A notable advantage of YALMIP is that the robust counterpart output by the platform can be solved using any one of a huge variety of open-source or commercial solvers. Other platforms, like ROME and RSOME, are entirely motivated by the (stochastic) robust optimization modeling paradigm, see~\cite{ROME} and \cite{RSOME}, and provide support for both single- and multi-stage (distributionally) robust optimization problems affected by exogenous uncertain parameters. The robust counterparts output by ROME and RSOME can be solved with CPLEX, Mosek, and SDPT3.\footnote{See \url{http://www.math.cmu.edu/~reha/sdpt3.html}.} Recently, JuMPeR has been proposed as an add-on to JuMP.
It can cater for single-stage problems with exogenous uncertain parameters. JuMPeR can be connected to a large variety of open-source and commercial solvers. On the commercial front, AIMMS is currently equipped with an add-on that can be used to model and automatically reformulate robust optimization problems. It can tackle both single- and multi-stage problems with exogenous uncertainty. A CPLEX license is needed to operate this add-on. To the best of our knowledge, none of the available platforms can address problems involving endogenous uncertain parameters. None of them can tackle problems involving binary adaptive variables. Finally, none of these platforms can be used from \cpluspluslogo. \paragraph{File Formats for Specifying Optimization Problems.} To facilitate the sharing and storing of optimization problems, dedicated file formats have been proposed. The two most popular file formats for deterministic mathematical programming problems are the MPS and LP formats. MPS is an older format established on mainframe systems. It is not very intuitive to use as it is set up as if you were using punch cards. In contrast, the LP format is a lot more interpretable: it captures problems in a way similar to how they are modelled on paper. The SMPS file format is the most popular format for storing stochastic programs and mirrors the role MPS plays in the deterministic setting, see \cite{BirgDempGassGunnKingWall87,Gassmann_SLP}. To the best of our knowledge, no format exists in the literature for storing and sharing robust optimization problems. \subsection{Contributions} We now summarize our main contributions and the key advantages of our platform: \begin{enumerate}[label=\textit{(\alph*)}] \item We propose RO\cpluspluslogo{}, the first \cpluspluslogo\ based platform for modelling, automatically reformulating, and solving robust optimization problems.
Our platform is the first capable of addressing both single- and multi-stage problems involving exogenous and/or endogenous uncertain parameters and real- and/or binary-valued adaptive variables. It can also be used to address certain classes of single- or multi-stage stochastic programs whose distribution is continuous and supported on a compact set. Our reformulations are (mixed-integer) linear or second-order cone optimization problems and thus any solver that can tackle such problems can be used to solve the robust counterparts output by our platform. We provide an interface to the commercial solver Gurobi. Our platform can easily be extended to support other solvers. We illustrate the flexibility and ease of use of our platform on several stylized problems. \item We propose the ROB file format, the first file format for storing and sharing general robust optimization problems. Our format builds upon the LP file format and is thus interpretable and easy to use. \item Our modeling language is similar to the one provided for the deterministic case by solvers such as CPLEX or Gurobi: it is easy to use for anyone familiar with these. \item Our platform comes with detailed documentation (created with Doxygen\footnote{See \url{https://www.doxygen.nl/index.html}.}) to facilitate its use and expansion. Our framework is open-source for educational, research, and non-profit purposes. The source code, installation instructions, and dependencies of RO\cpluspluslogo{} are available at \url{https://sites.google.com/usc.edu/robust-opt-cpp/}. \end{enumerate} \subsection{Organization of the Paper \& Notation} The remainder of this paper is organized as follows. Section~\ref{sec:modelling_background} describes the broad class of problems to which RO\cpluspluslogo{} applies. Section~\ref{sec:uncertainty_set} presents our model of uncertainty. Section~\ref{sec:approximation_schemes} lists the approximation schemes that are provided by RO\cpluspluslogo. 
Sample models created and solved using RO\cpluspluslogo{} are provided in Section~\ref{sec:numerical_results}. Section~\ref{sec:file_format} introduces the ROB file format. Section~\ref{sec:extensions} presents extensions to the core model that can also be tackled by RO\cpluspluslogo. \paragraph{Notation.} Throughout this paper, vectors (matrices) are denoted by boldface lowercase (uppercase) letters. The $k$th element of a vector ${\bm x} \in \ensuremath{\field{R}}^n$ ($k \leq n$) is denoted by ${\bm x}_k$. Scalars are denoted by lowercase or uppercase letters, e.g., $\alpha$ or $N$. We let $\mathcal L_{k}^n$ denote the space of all functions from $\ensuremath{\field{R}}^k$ to $\ensuremath{\field{R}}^n$. Accordingly, we denote by $\mathcal B_{k}^n$ the space of all functions from $\ensuremath{\field{R}}^k$ to $\{0,1\}^n$. Given two vectors of equal length, ${\bm x}$, ${\bm y} \in \ensuremath{\field{R}}^n$, we let ${\bm x} \circ {\bm y}$ denote the Hadamard product of the vectors, i.e., their element-wise product. Throughout the paper, we denote the uncertain parameters by ${\bm \xi} \in \ensuremath{\field{R}}^k$. We consider two settings: a robust setting and a stochastic setting. In the robust setting, we assume that the decision-maker wishes to be immunized against realizations of ${\bm \xi}$ in the uncertainty set $\Xi$. In the stochastic setting, we assume that the distribution $\mathbb P$ of the uncertain parameters is fully known. In this case, we let $\Xi$ denote its support and we let $\mathbb E(\cdot)$ denote the expectation operator with respect to $\mathbb P$. \section{Modelling Decision-Making Problems Affected by Uncertainty} \label{sec:modelling_background} We present the main class of problems that are supported by RO\cpluspluslogo.
Our platform can handle general multi-stage decision problems affected by both exogenous and endogenous uncertainty over the finite planning horizon $\mathcal T := \{1,\ldots,T\}$ (if $T=1$, then we recover single-stage decision problems). The elements of the vector of uncertain parameters ${\bm \xi}$ are revealed sequentially over time. However, the sequence of their revelation need not be predetermined (exogenous). Instead, the time of information discovery can be controlled by the decision-maker via the binary \emph{measurement decisions} ${\bm w}$. The aim is to find the sequences of real-valued decisions ${\bm y} := ({\bm y}_1,\ldots,{\bm y}_T)$, binary decisions ${\bm z}:=({\bm z}_1,\ldots,{\bm z}_T)$, and measurement decisions ${\bm w}:=({\bm w}_1,\ldots,{\bm w}_T)$ that minimize a given (uncertain) cost function either in expectation (stochastic setting) or in the worst case (robust setting). These decisions are constrained by a set of inequalities which are required to be obeyed robustly, i.e., for any realization of the parameters~${\bm \xi}$ in the set $\Xi$. A salient feature of our platform is that the decision variables are explicitly modelled as functions, or decision rules, of the history of observations. This feature is critical as it captures the ability of the decision-maker to adjust their decisions based on the realizations of the observed uncertain parameters. Decision-making problems of the type described here can be formulated as \begin{equation}\renewcommand{\pveqnstretch}{1} \begin{array}{cl} \ensuremath{\mathop{\mathrm{minimize}}\limits} & \quad \displaystyle \mathbb F \left[ \;\; \sum_{t \in \sets T} {\bm c}_t^\top {\bm y}_t({\bm \xi}) + {\bm d}_t({\bm \xi})^\top {\bm z}_t({\bm \xi}) + {\bm f}_t({\bm \xi})^\top {\bm w}_t({\bm \xi}) \right] \\ \text{\rm subject to} & \quad {\bm y}_t \in \sets L_k^{n_t}, \; {\bm z}_t \in \sets B_k^{\ell_t}, \; {\bm w}_t \in \sets B_k^k \quad \forall t \in \sets T \\ & \quad \!\!\!\!\left.
\begin{array}{l} \displaystyle \sum_{\tau=1}^t {\bm A}_{t \tau}{\bm y}_\tau({\bm \xi}) + {\bm B}_{t \tau}({\bm \xi}){\bm z}_\tau({\bm \xi}) + {\bm C}_{t \tau}({\bm \xi}) {\bm w}_\tau({\bm \xi}) \; \leq \; {\bm h}_t ({\bm \xi}) \\ {\bm w}_t({\bm \xi}) \in \sets W_t \\ {\bm w}_t({\bm \xi}) \geq {\bm w}_{t-1}({\bm \xi}) \end{array} \quad \right\} \quad \forall {\bm \xi} \in \Xi, t \in \sets T \\ & \quad \!\!\!\!\left. \begin{array}{l} {\bm y}_t({\bm \xi}) = {\bm y}_t({\bm \xi}') \\ {\bm z}_t({\bm \xi}) = {\bm z}_t({\bm \xi}') \\ {\bm w}_t({\bm \xi}) = {\bm w}_t({\bm \xi}') \end{array} \quad \quad \right\} \quad \forall t \in \sets T,\; \forall {\bm \xi},\; {\bm \xi}' \in \Xi : {\bm w}_{t-1}({\bm \xi}) \circ {\bm \xi} = {\bm w}_{t-1}({\bm \xi}') \circ {\bm \xi}', \end{array} \label{eq:main_problem} \end{equation} where $\mathbb F$ is a functional that maps the uncertain overall costs (across all possible realizations of ${\bm \xi}$) to a real number. In the robust setting, the functional $\mathbb F(\cdot)$ computes the worst-case (maximum) over ${\bm \xi} \in \Xi$. In the stochastic setting, it computes the expectation of the cost function under the distribution of the uncertain parameters (assumed to be known). In this problem, ${\bm y}_t({\bm \xi}) \in \ensuremath{\field{R}}^{n_t}$ (resp.\ ${\bm z}_t({\bm \xi}) \in \{0,1\}^{\ell_t}$) represent real-valued (resp.\ binary) decisions that are selected at the beginning of time period $t$. The variables ${\bm w}_t({\bm \xi}) \in \{0,1\}^k$ are binary measurement decisions that are made at time $t$ and that summarize the information base for time $t+1$. Specifically, ${\bm w}_{t}({\bm \xi})$ is a binary vector that has the same dimension as the vector of uncertain parameters; its $i$th element, ${\bm w}_{t,i}$, equals one if and only if the $i$th uncertain parameter, ${\bm \xi}_i$, has been observed at some time $\tau \in \{0,\ldots,t\}$, in which case it is included in the information base for time $t+1$.
Without loss of generality, we assume that ${\bm w}_0({\bm \xi})={\bm 0}$ for all ${\bm \xi} \in \Xi$ so that no uncertain parameter is known at the beginning of the planning horizon. The costs associated with the variables ${\bm y}_t({\bm \xi})$, ${\bm z}_t({\bm \xi})$, and ${\bm w}_t({\bm \xi})$ are ${\bm c}_t \in \ensuremath{\field{R}}^{n_t}$, ${\bm d}_t({\bm \xi}) \in \ensuremath{\field{R}}^{\ell_t}$, and ${\bm f}_t({\bm \xi}) \in \ensuremath{\field{R}}^k$, respectively. In particular, ${\bm f}_{t,i}({\bm \xi})$ represents the cost of including ${\bm \xi}_i$ in the information base at time $t+1$. Without much loss of generality, we assume that the costs ${\bm d}_t({\bm \xi})$ and ${\bm f}_t({\bm \xi})$ are all linear in ${\bm \xi}$. The first set of constraints in the formulation above defines the decision variables of the problem and ensures that decisions are modelled as functions of the uncertain parameters. The second set of constraints, which involves the matrices ${\bm A}_{t\tau} \in \ensuremath{\field{R}}^{m_t \times n_\tau}$, ${\bm B}_{t\tau}({\bm \xi}) \in \ensuremath{\field{R}}^{m_t \times \ell_\tau}$, and ${\bm C}_{t\tau}({\bm \xi}) \in \ensuremath{\field{R}}^{m_t \times k}$, captures the problem constraints. The set $\sets W_t$ in the third constraint may model requirements stipulating, for example, that a specific uncertain parameter can only be observed after a certain stage. If the $i$th uncertain parameter is exogenous (i.e., if its time of information discovery is \emph{not} decision-dependent) and if its time of revelation is $t$, it suffices to set ${\bm w}_{\tau,i}({\bm \xi})=0$ if $\tau < t$ and ${\bm w}_{\tau,i}({\bm \xi})=1$ otherwise, and to set ${\bm f}_{\tau,i}({\bm \xi})=0$ for all ${\bm \xi} \in \Xi$ and $\tau \in \sets T$. The fourth set of constraints is an information monotonicity constraint: it stipulates that information that has been observed cannot be forgotten.
The last three sets of constraints are decision-dependent non-anticipativity constraints: they stipulate that the adaptive decision variables must be constant in those uncertain parameters that have not been observed at the time when the decision is made. Without much loss of generality, we assume that the matrices ${\bm B}_{t\tau}({\bm \xi}) \in \ensuremath{\field{R}}^{m_t \times \ell_\tau}$ and ${\bm C}_{t\tau}({\bm \xi}) \in \ensuremath{\field{R}}^{m_t \times k}$ are both linear in ${\bm \xi}$. \section{Modelling Uncertainty} \label{sec:uncertainty_set} We now discuss our model for the set $\Xi$. Throughout this paper and in our platform, we assume that $\Xi$ is compact and admits a conic representation, i.e., it is expressible as \begin{equation} \Xi \; := \left\{ {\bm \xi} \in \ensuremath{\field{R}}^k \; : \; \exists {\bm \zeta}^s \in \ensuremath{\field{R}}^{k_s}, \; s=1,\ldots, S \; : \; {\bm P}^s {\bm \xi} + {\bm Q}^s {\bm \zeta}^s + {\bm q}^s \in \sets K^s, \; s=1,\ldots, S \right\} \label{eq:uncertainty_set} \end{equation} for some matrices ${\bm P}^s \in \ensuremath{\field{R}}^{r_s \times k}$ and ${\bm Q}^s \in \ensuremath{\field{R}}^{r_s \times k_s}$, and vectors ${\bm q}^s \in \ensuremath{\field{R}}^{r_s}$, $s=1,\ldots,S$, where $\sets K^s$, $s=1,\ldots,S$, are closed convex pointed cones in $\ensuremath{\field{R}}^{r_s}$. Finally, we assume that the representation above is strictly feasible (unless the cones involved in the representation are polyhedral, in which case this assumption can be relaxed). In our platform, we focus on the cases where the cones $\sets K^s$ are either polyhedral, i.e., $\sets K^s = \ensuremath{\field{R}}_+^{r_s}$, or Lorentz cones, i.e., $ \sets K^s = \left\{ {\bm u} \in \ensuremath{\field{R}}^{r_s} \; :\; \sqrt{ {\bm u}_1^2 + \cdots + {\bm u}_{r_s-1}^2 } \leq {\bm u}_{r_s} \right\}.
$ Uncertainty sets of the form~\eqref{eq:uncertainty_set} arise naturally from statistics or from knowledge of the distribution of the uncertain parameters. In the stochastic setting, the uncertainty set $\Xi$ can be constructed as the support of the distribution of the uncertain parameters, see e.g., \cite{kuhn_primal_dual_rules}. More often, it is constructed in a data-driven fashion to guarantee that constraints are satisfied with high probability, see e.g., \cite{Bertsimas2018}. More generally, disciplined methods for constructing uncertainty sets exist, see e.g., \cite{bandi2012tractable,BenTal_Book}. We now discuss several uncertainty sets from the literature that can be modelled in the form~\eqref{eq:uncertainty_set}. \begin{example}[Budget Uncertainty Sets] Uncertainty sets of the form~\eqref{eq:uncertainty_set} can be used to model 1-norm and $\infty$-norm uncertainty sets with budget of uncertainty $\Gamma$, given by $\{ {\bm \xi} \in \ensuremath{\field{R}}^k : \|{\bm \xi}\|_1 \leq \Gamma \}$ and $\{ {\bm \xi} \in \ensuremath{\field{R}}^k : \|{\bm \xi}\|_\infty \leq \Gamma \}$, respectively. More generally, they can be used to impose budget constraints at various levels of a given hierarchy. For example, they can be used to model uncertainty sets of the form $$ \left\{ {\bm \xi} \in \ensuremath{\field{R}}^k \; : \; \sum_{i \in \sets H_h} |{\bm \xi}_i| \leq \Gamma_h \quad \forall h=1,\ldots, H \right\}, $$ where the sets $\sets H_h \subseteq \{1,\ldots,k\}$ collect the indices of all uncertain parameters in the $h$th level of the hierarchy and $\Gamma_h \in \ensuremath{\field{R}}_+$ is the budget of uncertainty for hierarchy $h$, see e.g., \cite{DSL2019}. \end{example} \begin{example}[Ellipsoidal Uncertainty Sets] Uncertainty sets of the form~\eqref{eq:uncertainty_set} capture as special cases ellipsoidal uncertainty sets, which arise for example as confidence regions from Gaussian distributions.
These are expressible as $$ \left\{ {\bm \xi} \in \ensuremath{\field{R}}^k \; : \; ({\bm \xi}-\overline {\bm \xi})^\top {\bm P}^{-1} ({\bm \xi}-\overline {\bm \xi}) \; \leq \; 1 \right\}, $$ for some matrix ${\bm P} \in \mathbb S_+^{k}$ and vector $\overline {\bm \xi} \in \ensuremath{\field{R}}^k$, see e.g., \cite{BenTal_Book}. \end{example} \begin{example}[Central Limit Theorem Uncertainty Sets] Sets of the form~\eqref{eq:uncertainty_set} can be used to model Central Limit Theorem based uncertainty sets. These sets arise for example as confidence regions for large numbers of i.i.d.\ uncertain parameters and are expressible as $$ \left\{ {\bm \xi} \in \ensuremath{\field{R}}^k : \left| \sum_{i=1}^k {\bm \xi}_i - \mu k \right| \leq \Gamma \sigma \sqrt{k} \right\}, $$ where $\mu$ and $\sigma$ are the mean and standard deviation of the i.i.d.\ parameters ${\bm \xi}_i$, $i=1,\ldots,k$, see~\cite{bandi2012tractable,BTV_kidneys}. \end{example} \begin{example}[Uncertainty Sets based on Factor Models] Sets of the form~\eqref{eq:uncertainty_set} capture as special cases uncertainty sets based on factor models that are popular in finance and economics. These are expressible in the form $$ \left\{ {\bm \xi} \in \ensuremath{\field{R}}^k \; : \; \exists{\bm \zeta} \in \ensuremath{\field{R}}^{\kappa} : {\bm \xi} = {\bm \Phi} {\bm \zeta} + {\bm \phi}, \; \| {\bm \zeta}\|_2 \leq 1 \right\}, $$ for some vector ${\bm \phi} \in \ensuremath{\field{R}}^{k}$ and matrix ${\bm \Phi} \in \ensuremath{\field{R}}^{k \times \kappa}$. \end{example} \section{Interpretable Decision Rules \& Contingency Planning} \label{sec:approximation_schemes} In formulation~\eqref{eq:main_problem}, the recourse decisions are very hard to interpret since decisions are modelled as (potentially very complicated) functions of the history of observations.
The functional form of the decisions combined with the infinite number of constraints involved in problem~\eqref{eq:main_problem} also implies that this problem cannot be solved directly. This has motivated researchers in the fields of stochastic and robust optimization to propose several tractable approximation schemes capable of bringing problem~\eqref{eq:main_problem} to a form amenable to solution by off-the-shelf solvers. Broadly speaking, these approximation schemes fall into two categories: interpretable decision rule approximations which restrict the functional form of the recourse decisions; and finite adaptability approximation schemes which yield a moderate number of contingency plans that are candidates to be implemented in the future. These approximations have the benefit of improving the tractability properties of the problem and of rendering decisions more \emph{interpretable}, which is a highly desirable property of any automated decision support system. We now describe the approximation schemes supported by our platform. Our choice of approximations is such that they apply to problems with both exogenous and endogenous uncertainty. A decision tree describing the main options available to the user of our platform based on the structure of their problem is provided in Figure~\ref{fig:decision_tree}. Extensions are available in RO\cpluspluslogo{} which can cater for more classes of problems, see Section~\ref{sec:extensions}.
\begin{figure} \caption{Decision tree to help guide the choice of approximation scheme for multi-stage problems in RO\cpluspluslogo{}.} \label{fig:decision_tree} \end{figure} \subsection{Interpretable Decision Rules} \label{sec:decision_rules} \paragraph{Constant Decision Rule and Linear Decision Rule.} The most crude (and perhaps most interpretable) decision rules that are available in RO\cpluspluslogo{} are the \emph{constant decision rule} (CDR) and the \emph{linear decision rule} (LDR), see \cite{BenTal_Book,kuhn_primal_dual_rules}. These apply to binary and real-valued decision variables, respectively. Under the constant decision rule, the binary decisions ${\bm z}_t(\cdot)$ and ${\bm w}_t(\cdot)$ are no longer allowed to adapt to the history of observations -- it is assumed that the decision-maker will take the same action, independently of the realization of the uncertain parameters. Mathematically, we have $$ {\bm z}_t({\bm \xi}) = {\bm z}_t \text{ and } {\bm w}_t({\bm \xi}) = {\bm w}_t \quad \forall t \in \sets T, \; \forall {\bm \xi} \in \Xi, $$ for some vectors ${\bm z}_t \in \{0,1\}^{\ell_t}$ and ${\bm w}_t \in \{0,1\}^{k}$, $t \in \sets T$. Under the linear decision rule, the real-valued decisions are modelled as affine functions of the history of observations, i.e., $$ {\bm y}_t({\bm \xi}) = {\bm Y}_t {\bm \xi} + {\bm y}_t \quad \forall t \in \sets T, $$ for some matrices ${\bm Y}_t \in \ensuremath{\field{R}}^{n_t \times k}$ and vectors ${\bm y}_t \in \ensuremath{\field{R}}^{n_t}$, $t \in \sets T$. The LDR model leads to very interpretable decisions -- we can think of this decision rule as a scoring rule which assigns different values (coefficients) to each uncertain parameter. These coefficients can be interpreted as the sensitivity of the decision variables to changes in the uncertain parameters.
Under the CDR and LDR approximations, the adaptive variables in the problem are eliminated and the quantities ${\bm z}_t$, ${\bm w}_t$, ${\bm Y}_t$, and ${\bm y}_t$ become the new decision variables of the problem. \paragraph{Piecewise Constant and Piecewise Linear Decision Rules.} In \emph{piecewise constant} (PWC) and \emph{piecewise linear} (PWL) decision rules, the binary (resp.\ real-valued) adjustable decisions are approximated by functions that are piecewise constant (resp.\ piecewise linear) on a preselected partition of the uncertainty set. Specifically, the uncertainty set $\Xi$ is split into hyper-rectangles of the form $ \Xi_{\bm s} \; := \; \left\{ {\bm \xi}\in \Xi \; : \; {\bm c}_{{\bm s}_i-1}^i \leq {\bm \xi}_i < {\bm c}_{{\bm s}_i}^i, \; i=1,\ldots,k \right\}, $ where ${\bm s} \in \sets S := \times_{i=1}^{k} \{1,\ldots,{\bm r}_i\} \subseteq \ensuremath{\field{Z}}^{k}$ and ${\bm c}_1^i \; < \; {\bm c}_2^i \; < \; \cdots \; < \; {\bm c}_{{\bm r}_i-1}^i$, $i=1,\ldots,k$ represent ${\bm r}_i-1$ breakpoints along the ${\bm \xi}_i$ axis. Mathematically, the binary and real-valued decisions are expressible as $$ {\bm z}_t({\bm \xi}) \; = \; \sum_{{\bm s}\in \sets S} \I{ {\bm \xi} \in \Xi_{\bm s} }{\bm z}_t^{\bm s}, \quad {\bm w}_t({\bm \xi}) \; = \; \sum_{{\bm s}\in \sets S} \I{ {\bm \xi} \in \Xi_{\bm s} }{\bm w}_t^{\bm s}, $$ $$ \text{ and } \quad {\bm y}_t({\bm \xi}) \; = \; \sum_{{\bm s}\in \sets S} \I{ {\bm \xi} \in \Xi_{\bm s} } ( {\bm Y}_t^{\bm s}{\bm \xi} + {\bm y}_t^{\bm s} ), $$ for some vectors ${\bm z}_t^{\bm s} \in \{0,1\}^{\ell_t}$, ${\bm w}_t^{\bm s} \in \{0,1\}^k$, ${\bm y}_t^{\bm s} \in \ensuremath{\field{R}}^{n_t}$, and matrices ${\bm Y}_t^{\bm s} \in \ensuremath{\field{R}}^{n_t \times k}$, $t\in \sets T$, ${\bm s} \in \sets S$.
We can think of this decision rule as a scoring rule which assigns different values (coefficients) to each uncertain parameter; the score assigned to each parameter depends on the subset of the partition in which the uncertain parameter lies. Although less interpretable than CDR/LDR, the PWC/PWL approximation enjoys better optimality properties: it will usually outperform CDR/LDR, since the decisions that can be modelled are more flexible. The decision rule approximations offered by the RO\cpluspluslogo{} platform apply to multi-stage problems with both endogenous and exogenous uncertainty. They apply to both stochastic and robust problems, see Figure~\ref{fig:decision_tree}. \subsection{Contingency Planning via Finite Adaptability} \label{sec:Kadaptability} Another solution approach available in RO\cpluspluslogo{} is the so-called finite adaptability approximation, which applies to robust problems with binary decisions, see~\cite{vayanos_ROInfoDiscovery}. It consists in selecting a collection of contingency plans indexed in the set $\sets K := \times_{t\in \sets T} \{ 1 ,\ldots, K_t \}$ today and choosing, at each time $t \in \sets T$, one of the $K_t$ plans for time~$t$ to implement. Mathematically, given $K_t$, $t\in \sets T$, we select ${\bm z}_t^{k_1,\ldots,k_t} \in \{0,1\}^{\ell_t}$ and ${\bm w}_t^{k_1,\ldots,k_t} \in \{0,1\}^{k}$, $k_t \in \{1,\ldots,K_t\}$ for all $t \in \sets T$, in the first stage. At each stage $t \in \sets T$, we select one of the contingency plans $k_t \in \{1,\ldots,K_t\}$ for implementation. In particular, if, up to time $t$, contingency plans $k_1,\ldots,k_t$ have been selected, then ${\bm w}_t^{k_1,\ldots,k_t}$ and ${\bm z}_t^{k_1,\ldots,k_t}$ are implemented at time $t$.
Relative to the piecewise constant decision rule, the finite adaptability approach usually results in better performance in the following sense: the number of contingency plans needed in the finite adaptability approach to achieve a given objective value is never greater than the number of subsets needed in the piecewise constant decision rule to achieve that same value. However, the finite adaptability approximation does not apply to problems with an expectation objective and is thus less flexible in that sense. \section{Modelling and Solving Decision-Making Problems in RO\cpluspluslogo} \label{sec:numerical_results} In this section, we discuss several robust/stochastic optimization problems, present their mathematical formulations, and discuss how these can be naturally expressed in RO\cpluspluslogo. \newpv{We also illustrate how optimal solutions can be displayed using our platform.} \subsection{Retailer-Supplier Flexible Commitment Contracts (RSFC)} \label{sec:retailer} We model the two-echelon, multiperiod supply chain problem, known as the retailer-supplier flexible commitment (RSFC) problem from~\cite{BenTal_RSFC} in RO\cpluspluslogo. \subsubsection{Problem Description} \label{sec:RSFC_description} We consider a finite planning horizon of $T$ periods, $\mathcal T:=\{1,\ldots,T\}$. At the end of each period $t \in \sets T$, the demand ${\bm \xi}_t \in \ensuremath{\field{R}}_+$ for the product faced during that period is revealed. We collect the demands for all periods in the vector ${\bm \xi}:=({\bm \xi}_1,\ldots,{\bm \xi}_T)$. We assume that the demand is known to belong to the box uncertainty set $$ \Xi := \left\{ {\bm \xi} \in \ensuremath{\field{R}}^{T} \; : \; {\bm \xi} \in [ \overline{\bm \xi} (1-\rho), \overline {\bm \xi} (1+\rho) ] \right\}, $$ where $\overline{\bm \xi} := \ensuremath{{\rm \mathbf e}} \overline \xi$, $\overline \xi$ is the nominal demand, and $\rho \in [0,1]$ is the budget of uncertainty. 
As the time of revelation of the uncertain parameters is exogenous, the information base, encoded in the vectors ${\bm w}_t \in \{0,1\}^T$, $t=0,\ldots,T$, is an input of the problem (data). In particular, it is defined through ${\bm w}_0:={\bm 0}$ and ${\bm w}_{t} := \sum_{\tau=1}^{t} \ensuremath{{\rm \mathbf e}}_\tau$ for each $t \in \sets T$: the information base for time $t+1$ only contains the demand for the previous periods $\tau \in \{1,\ldots,t\}$. At the beginning of the planning horizon, the retailer holds an inventory ${\bm x}_1^{\rm i}$ of the product (assumed to be known). At that point, they must specify their commitments ${\bm y}^{\rm{c}} = ( {\bm y}_1^{\rm{c}},\ldots,{\bm y}_{T}^{\rm{c}})$, where ${\bm y}_t^{\rm{c}} \in \ensuremath{\field{R}}_+$ represents the amount of the product that they forecast to order at the beginning of time $t \in \sets T$ from the supplier. A penalty cost ${\bm c}^{\rm{dc}+}_t$ (resp.\ ${\bm c}^{\rm dc-}_t$) is incurred for each unit of increase (resp.\ decrease) between the amounts committed for times $t$ and $t-1$. The amount committed for the last order before the beginning of the planning horizon is given by ${\bm y}_0^{\rm c}$. At the beginning of each period, the retailer orders a quantity ${\bm y}_t^{\rm o}({\bm \xi}) \in \ensuremath{\field{R}}_+$ from the supplier at unit cost ${\bm c}_t^{\rm{o}}$. These orders may deviate from the commitments made at the beginning of the planning horizon; in this case, a cost ${\bm c}^{\rm dp+}_t$ (resp.\ ${\bm c}^{\rm dp-}_t$) is incurred for each unit by which the order ${\bm y}_t^{\rm o}({\bm \xi})$ overshoots (resp.\ undershoots) the plan ${\bm y}_t^{\rm c}$. Given this notation, the inventory of the retailer at time $t+1$, $t \in \sets T$, is expressible as $$ {\bm x}^{\rm i}_{t+1}({\bm \xi}) \; = \; {\bm x}^{\rm i}_t({\bm \xi}) + {\bm y}^{\rm o}_t({\bm \xi}) - {\bm \xi}_{t}.
$$ A holding cost ${\bm c}^{\rm h}_{t+1}$ is incurred for each unit of inventory held on hand at time $t+1$, $t \in \sets T$. A shortage cost ${\bm c}^{\rm s}_{t+1}$ is incurred for each unit of demand lost at time $t+1$, $t \in \sets T$. The amounts of the product that can be ordered in any given period are constrained by lower and upper bounds denoted by $\underline{\bm y}_t^{\rm o}$ and $\overline {\bm y}_t^{\rm o}$, respectively. Similarly, cumulative orders up to and including time $t \in \sets T$ are constrained to lie in the range $\underline{\bm y}_t^{\rm co}$ to $\overline{\bm y}_t^{\rm co}$. Thus, we have $$ \underline{\bm y}_t^{\rm o} \leq {\bm y}_t^{\rm o}({\bm \xi}) \leq \overline{\bm y}_t^{\rm o} \quad \text{ and } \quad \underline{\bm y}_t^{\rm co} \leq \sum_{\tau = 1}^t {\bm y}_\tau^{\rm o}({\bm \xi}) \leq \overline{\bm y}_t^{\rm co}. $$ The objective of the retailer is to minimize their worst-case (maximum) costs. We introduce three sets of auxiliary variables used to linearize the objective function. For each $t\in \sets T$, we let ${\bm y}_t^{\rm dc}$ represent the smallest number that exceeds the costs of deviating from commitments, i.e., $$ {\bm y}_t^{\rm dc} \geq {\bm c}_t^{\rm dc+} ( {\bm y}_t^{\rm c} - {\bm y}_{t-1}^{\rm c} ) \quad \text{ and } \quad {\bm y}_t^{\rm dc} \geq {\bm c}_t^{\rm dc-} ({\bm y}_{t-1}^{\rm c} - {\bm y}_{t}^{\rm c} ). 
$$
For each $t\in \sets T$ and ${\bm \xi} \in \Xi$, we denote by ${\bm y}_t^{\rm dp}({\bm \xi})$ the smallest number that exceeds the costs of deviating from the plan at time $t$, i.e.,
$$
{\bm y}_t^{\rm dp}({\bm \xi}) \geq {\bm c}_t^{\rm dp+} ({\bm y}_t^{\rm o}({\bm \xi}) - {\bm y}_{t}^{\rm c}) \quad \text{ and } \quad {\bm y}_t^{\rm dp}({\bm \xi}) \geq {\bm c}_t^{\rm dp-} ({\bm y}_t^{\rm c} - {\bm y}_{t}^{\rm o}({\bm \xi})).
$$
Similarly, for each $t\in \sets T$ and ${\bm \xi} \in \Xi$, we denote by ${\bm y}_{t+1}^{\rm hs}({\bm \xi})$ the smallest number that exceeds the overall holding and shortage costs at time $t+1$, i.e.,
$$
{\bm y}_{t+1}^{\rm hs}({\bm \xi}) \geq {\bm c}_{t+1}^{\rm h}{\bm x}^{\rm i}_{t+1}({\bm \xi}) \quad \text{ and } \quad {\bm y}_{t+1}^{\rm hs}({\bm \xi}) \geq -{\bm c}_{t+1}^{\rm s} {\bm x}^{\rm i}_{t+1}({\bm \xi}).
$$
The objective of the retailer is then expressible compactly as
$$
\min \;\; \max_{ {\bm \xi} \in \Xi } \;\; \sum_{t \in \sets T} \left( {\bm c}_t^{\rm o} {\bm y}_{t}^{\rm o}({\bm \xi}) + {\bm y}_t^{\rm dc} + {\bm y}_t^{\rm dp}({\bm \xi}) + {\bm y}_{t+1}^{\rm hs}({\bm \xi}) \right).
$$
The full model for this problem can be found in Electronic Companion~\ref{sec:EC_RSFC_formulation}.
\subsubsection{Model in RO\cpluspluslogo}
We now present the RO\cpluspluslogo{} model of the RSFC problem. We assume that the data of the problem have been defined in \cpluspluslogo. The \cpluspluslogo{} variables associated with the problem data are detailed in Table~\ref{tab:RSFCparams}. For example, the lower bounds on the orders $\underline{\bm y}_t^{\rm o}$, $t \in \sets T$, are stored in the map \texttt{OrderLB} which maps each time period to the double representing the lower bound for that period. We discuss how to construct the key elements of the problem here. The full code can be found in Electronic Companion~\ref{sec:EC_RSFC_code}.
\begin{table}[h!]
\centering\renewcommand{\pveqnstretch}{0.75} \begin{tabular}{c|c|c|c} \hline Parameter/Index & \cpluspluslogo{} Name & \cpluspluslogo{} Type & \cpluspluslogo{} Map Keys \\ \hline \hline $T$ ($t$) & \texttt{T} (\texttt{t}) & \texttt{uint} & NA \\ ${\bm x}_1^{\rm i}$ & \texttt{InitInventory} & \texttt{double}& NA \\ ${\bm y}_0^{\rm c}$ & \texttt{InitCommit} & \texttt{double} & NA \\ $\overline \xi$ & \texttt{NomDemand} & \texttt{double}& NA \\ $\rho$ & \texttt{rho} & \texttt{double} & NA \\ $\underline{\bm y}_t^{\rm o}$, $t\in \sets T$ & \texttt{OrderLB} & \texttt{map<uint,double>} & \texttt{t=1\ldots T} \\ $\overline{\bm y}_t^{\rm o}$, $t\in \sets T$ & \texttt{OrderUB} & \texttt{map<uint,double>} & \texttt{t=1\ldots T} \\ $\underline{\bm y}_t^{\rm co}$, $t\in \sets T$ & \texttt{CumOrderLB} & \texttt{map<uint,double>}& \texttt{t=1\ldots T} \\ $\overline{\bm y}_t^{\rm co}$, $t\in \sets T$ & \texttt{CumOrderUB} & \texttt{map<uint,double>}& \texttt{t=1\ldots T} \\ ${\bm c}^{\rm o}_{t}$, $t\in \sets T$ & \texttt{OrderCost} & \texttt{map<uint,double>} & \texttt{t=1\ldots T} \\ ${\bm c}^{\rm h}_{t+1}$, $t\in \sets T$ & \texttt{HoldingCost} & \texttt{map<uint,double>} & \texttt{t=2\ldots T+1} \\ ${\bm c}^{\rm s}_{t+1}$, $t\in \sets T$ & \texttt{ShortageCost} & \texttt{map<uint,double>} & \texttt{t=2\ldots T+1} \\ ${\bm c}^{\rm dc+}_{t}$, $t\in \sets T$ & \texttt{CostDCp} & \texttt{map<uint,double>}& \texttt{t=1\ldots T} \\ ${\bm c}^{\rm dc-}_{t}$, $t\in \sets T$ & \texttt{CostDCm} & \texttt{map<uint,double>} & \texttt{t=1\ldots T} \\ ${\bm c}^{\rm dp+}_{t}$, $t\in \sets T$ & \texttt{CostDPp} & \texttt{map<uint,double>}& \texttt{t=1\ldots T} \\ ${\bm c}^{\rm dp-}_{t}$, $t\in \sets T$ & \texttt{CostDPm} & \texttt{map<uint,double>} & \texttt{t=1\ldots T} \\ \hline \end{tabular} \caption{List of model parameters and their associated \cpluspluslogo{} variables for the RSFC problem.} \label{tab:RSFCparams} \end{table} The RSFC problem is a multi-stage robust optimization problem 
involving only exogenous uncertain parameters. We begin by creating a model, \texttt{RSFCModel}, in RO\cpluspluslogo{} that will contain our formulation. All models are pointers to the interface class \texttt{ROCPPOptModelIF}. In this case, we instantiate an object of type \texttt{ROCPPUncOptModel} which is derived from \texttt{ROCPPOptModelIF} and which can model multi-stage optimization problems affected by exogenous uncertain parameters only. The first parameter of the \texttt{ROCPPUncOptModel} constructor corresponds to the maximum time period of any decision variable or uncertain parameter in the problem: in this case, $T+1$. The second parameter of the \texttt{ROCPPUncOptModel} constructor is of the enumerated type \texttt{uncOptModelObjType} which admits two possible values: \texttt{robust}, which indicates a min-max objective; and, \texttt{stochastic}, which indicates an expectation objective. The robust RSFC problem can be initialized as: \lstinputlisting[firstline=1,lastline=2]{retailer.cc} We note that in RO\cpluspluslogo{} all optimization problems are minimization problems. Next, we discuss how to create the RO\cpluspluslogo{} variables associated with uncertain parameters and decision variables in the problem. The correspondence between variables is summarized in Table~\ref{tab:RSFCdvs} for convenience. \begin{table}[ht!] 
\centering \renewcommand{\pveqnstretch}{0.75} \begin{tabular}{c|c|c|c} \hline Variable & \cpluspluslogo{} Name & \cpluspluslogo{} Type & \cpluspluslogo{} Map Keys \\ \hline \hline ${\bm \xi}_t$, $t\in \sets T$ & \texttt{Demand} & \texttt{map<uint,ROCPPUnc\_Ptr>} & \texttt{t=1\ldots T} \\ ${\bm y}_t^{\rm o}$, $t\in \sets T$ & \texttt{Orders} & \texttt{map<uint,ROCPPVarIF\_Ptr>} & \texttt{t=1\ldots T} \\ ${\bm y}_t^{\rm c}$, $t\in \sets T$ & \texttt{Commits} & \texttt{map<uint,ROCPPVarIF\_Ptr>} & \texttt{t=1\ldots T} \\ ${\bm y}_t^{\rm dc}$, $t\in \sets T$ & \texttt{MaxDC} & \texttt{map<uint,ROCPPVarIF\_Ptr>} & \texttt{t=1\ldots T} \\ ${\bm y}_t^{\rm dp}$, $t\in \sets T$ & \texttt{MaxDP} & \texttt{map<uint,ROCPPVarIF\_Ptr>} & \texttt{t=1\ldots T} \\ ${\bm y}_{t+1}^{\rm hs}$, $t\in \sets T$ & \texttt{MaxHS} & \texttt{map<uint,ROCPPVarIF\_Ptr>} & \texttt{t=2\ldots T+1} \\ ${\bm x}_{t+1}^{\rm i}$, $t\in \sets T$ & \texttt{Inventory} & \texttt{ROCPPExp\_Ptr} & NA \\ \hline \hline \end{tabular} \caption{List of model variables and uncertainties and their associated RO\cpluspluslogo{} variables for the RSFC problem.} \label{tab:RSFCdvs} \end{table} The uncertain parameters of the RSFC problem are ${\bm \xi}_t$, $t \in \sets T$. We store these in the \texttt{Demand} map, which maps each period to the associated uncertain parameter. Each uncertain parameter is a pointer to an object of type \texttt{ROCPPUnc}. The constructor of the \texttt{ROCPPUnc} class takes two input parameters: the name of the uncertain parameter and the period when that parameter is revealed (first time stage when it is observable). \lstset{firstnumber=3} \lstinputlisting[firstline=3,lastline=8]{retailer.cc} The main decision variables of the RSFC problem are ${\bm y}_t^c$ and ${\bm y}_t^o$, $t\in \sets T$. The commitment variables ${\bm y}_t^c$ are all static. We store these in the \texttt{Commits} map which maps each time period to the associated commitment decision. 
The order variables ${\bm y}_t^o$ are allowed to adapt to the history of observed demand realizations. We store these in the \texttt{Orders} map which maps the time period at which the decision is made to the order decision for that period. In RO\cpluspluslogo, the decision variables are pointers to objects of type \texttt{ROCPPVarIF}. Real-valued static (adaptive) decision variables are modelled using objects of type \texttt{ROCPPStaticVarReal} (\texttt{ROCPPAdaptVarReal}). The constructor of \texttt{ROCPPStaticVarReal} admits three input parameters: the name of the variable, its lower bound, and its upper bound. The constructor of \texttt{ROCPPAdaptVarReal} admits four input parameters: the name of the variable, the time period when the decision is made, and the lower and upper bounds.
\lstset{firstnumber=9}
\lstinputlisting[firstline=9,lastline=19]{retailer.cc}
The RSFC problem also involves the auxiliary variables ${\bm y}_t^{\rm dc}$, ${\bm y}_t^{\rm dp}$, and ${\bm y}_{t+1}^{\rm hs}$, $t\in \sets T$. We store the ${\bm y}_t^{\rm dc}$ variables in the map \texttt{MaxDC}. These variables are all static. We store the ${\bm y}_t^{\rm dp}$ variables in the map \texttt{MaxDP}. We store the ${\bm y}_{t+1}^{\rm hs}$ variables in the map \texttt{MaxHS}. Since the orders placed and inventories held change based on the demand realization, the variables stored in \texttt{MaxDP} and \texttt{MaxHS} are allowed to adapt to the history of observations. All maps map the index of the decision to the actual decision variable. The procedure to build these maps exactly parallels the approach above and we thus omit it. We refer the reader to lines \ref{ln:retailer_vars_2_start}-\ref{ln:retailer_vars_2_end} in Section~\ref{sec:EC_RSFC_code} for the code to build these maps. Having defined the model, the uncertain parameters, and the decision variables of the problem, we are now ready to formulate the constraints.
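For the interested reader, the omitted map construction might look as follows. This is only a sketch: it mirrors the pattern used for the \texttt{Orders} map above, and we assume here that the bound arguments of the variable constructors are optional (defaulting to $-\infty$ and $+\infty$); the authoritative code is in lines \ref{ln:retailer_vars_2_start}-\ref{ln:retailer_vars_2_end} of Section~\ref{sec:EC_RSFC_code}.
\begin{lstlisting}[numbers=none]
// Sketch only: auxiliary-variable maps, following the pattern of Orders
map<uint, ROCPPVarIF_Ptr> MaxDC, MaxDP, MaxHS;
for (uint t = 1; t <= T; t++) {
    // Deviations from commitments: static decisions
    MaxDC[t] = ROCPPVarIF_Ptr(new ROCPPStaticVarReal("MaxDC_"+to_string(t)));
    // Deviations from the plan: adaptive from the second period onward
    if (t == 1)
        MaxDP[t] = ROCPPVarIF_Ptr(new ROCPPStaticVarReal("MaxDP_"+to_string(t)));
    else
        MaxDP[t] = ROCPPVarIF_Ptr(new ROCPPAdaptVarReal("MaxDP_"+to_string(t), t));
    // Holding/shortage costs at time t+1: adaptive
    MaxHS[t+1] = ROCPPVarIF_Ptr(new ROCPPAdaptVarReal("MaxHS_"+to_string(t+1), t+1));
}
\end{lstlisting}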
To express our constraints in an intuitive fashion, we create an expression variable (type \texttt{ROCPPExpr}), which we call \texttt{Inventory}, that stores the amount of inventory held at the beginning of each period. This is computed by adding to the initial inventory \texttt{InitInventory} the amount ordered at each period and subtracting the demand faced. Similarly, we create an \texttt{ROCPPExpr} to store the cumulative orders placed. This is obtained by adding the orders placed at each period. Constraints can be created using the operators ``\lstinline{<=}'', ``\lstinline{>=}'', or ``\lstinline{==}'' and added to the problem using the \texttt{add\_constraint()} function. We show how to construct the cumulative order constraints and the lower bounds on the shortage and holding costs. The code to add the remaining constraints can be found in lines~\ref{ln:retailer_extra_cstr_start}-\ref{ln:retailer_extra_cstr_end} of Section~\ref{sec:EC_RSFC_code}.
\lstset{firstnumber=35}
\lstinputlisting[firstline=35,lastline=52]{retailer.cc}
The objective function of the RSFC problem consists of minimizing the sum of all costs over time. We create the \texttt{ROCPPExpr} expression \texttt{RSFCObj} to which we add all terms by iterating over time. We then set \texttt{RSFCObj} as the objective function of the \texttt{RSFCModel} model by using the \texttt{set\_objective()} function.
\lstset{firstnumber=68}
\lstinputlisting[firstline=68,lastline=73]{retailer.cc}
Finally, we create a box uncertainty set for the demand.
\lstset{firstnumber=74}
\lstinputlisting[firstline=74,lastline=78]{retailer.cc}
Having formulated the RSFC problem in RO\cpluspluslogo, we turn to solving it.
\subsubsection{Solution in RO\cpluspluslogo}
\label{sec:RSFC_solution}
It is known from~\cite{BenTal_RSFC} that linear decision rules (LDRs) are optimal for this problem.
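Here, a linear decision rule restricts each real-valued adaptive variable to be an affine function of the uncertain parameters observed before the decision is made; e.g., for the order quantities,
$$
{\bm y}_t^{\rm o}({\bm \xi}) \; = \; y_{t,0} + \sum_{\tau=1}^{t-1} y_{t,\tau} \, {\bm \xi}_\tau,
$$
where the coefficients $y_{t,0},\ldots,y_{t,t-1} \in \ensuremath{\field{R}}$ become static decision variables of the approximate problem.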
Thus, it suffices to approximate the real-valued adaptive variables in the problem by linear decision rules, then robustify the problem using duality theory, and finally solve it using an off-the-shelf deterministic solver. This process is streamlined in RO\cpluspluslogo{}. \lstset{firstnumber=79} \lstinputlisting[firstline=79,lastline=86]{retailer.cc} We consider the instance of RSFC detailed in Electronic Companion~\ref{sec:EC_RSFC_instance}. The following output is displayed when executing the above code on this instance. \begin{lstlisting}[numbers=none,language=bash] ========================================================================= =================== APPROXIMATING USING LDR AND CDR ===================== ========================================================================= 11 of 119 constraints robustified ... 110 of 119 constraints robustified Total time to approximate and robustify: 0 seconds ========================================================================= \end{lstlisting} This states that the problem has 119 constraints in total and that the time it took to approximate it and obtain its robust counterpart was under half a second. Next, we showcase how optimal solutions to the problem can be retrieved in RO\cpluspluslogo. \lstset{firstnumber=87} \lstinputlisting[firstline=87,lastline=90]{retailer.cc} The following output is displayed when executing the above code. \begin{lstlisting}[numbers=none,language=bash] Order_10 = + 0*Demand_1 + 0*Demand_2 + 0*Demand_3 + 0*Demand_4 + 0*Demand_5 + 0*Demand_6 + 0*Demand_7 + 0*Demand_8 + 1*Demand_9 - 0.794 \end{lstlisting} Thus, the optimal linear decision rule for the amount to order at stage 10 is ${\bm y}^{\rm o}_{10} ({\bm \xi}) = {\bm \xi}_9 - 0.794$ for this specific instance. To get the optimal objective value of \texttt{RSFCModelLDR}, we can use the following command, which returns $13531.7$ in this instance. 
\lstset{firstnumber=92}
\lstinputlisting[firstline=92,lastline=92]{retailer.cc}
\subsubsection{Variant: Ellipsoidal Uncertainty Set}
In~\cite{BenTal_RSFC}, the authors also investigated ellipsoidal uncertainty sets for the demand. These take the form
$$
\Xi := \{ {\bm \xi} \in \ensuremath{\field{R}}_+^{T} : \| {\bm \xi} - \overline {\bm \xi} \|_2 \leq \Omega \},
$$
where $\Omega$ is a safety parameter. Letting \texttt{Omega} represent $\Omega$, this ellipsoidal uncertainty set can be used instead of the box uncertainty set by replacing lines \ref{ln:uset_start}-\ref{ln:uset_end} in the RO\cpluspluslogo{} code for the RSFC problem with the following code:
\begin{lstlisting}[numbers=none]
// Create a vector that will contain all the elements of the norm term
vector<ROCPPExpr_Ptr> EllipsoidElements;
// Populate the vector with the difference between demand and nominal demand
for (uint t=1; t<=T; t++)
    EllipsoidElements.push_back(Demand[t] - NomDemand);
// Create the norm term
boost::shared_ptr<ConstraintTermIF> EllipsTerm(new NormTerm(EllipsoidElements));
// Create the ellipsoidal uncertainty constraint
RSFCModel->add_constraint_uncset(EllipsTerm <= Omega);
\end{lstlisting}
The solution approach from Section~\ref{sec:RSFC_solution} applies as is with this ellipsoidal set. The time it takes to robustify the problem is again under half a second. In this case, the optimal objective value under LDRs is $14{,}814.3$. The optimal linear decision rule is given by:
\begin{lstlisting}[numbers=none,language=bash]
Order_10 = + 0.0305728*Demand_1 + 0.0567*Demand_2 + 0.0739*Demand_3
+ 0.0887*Demand_4 + 0.101*Demand_5 + 0.115*Demand_6 + 0.142*Demand_7
+ 0.179*Demand_8 + 0.231*Demand_9 - 3.33
\end{lstlisting}
\subsection{Robust Pandora's Box Problem}
\label{sec:pandora}
We consider a robust variant of the celebrated stochastic Pandora's Box (PB) problem due to~\cite{Weitzman_1979}. This problem models the selection among a set of alternative options with unknown values when evaluation is costly.
\subsubsection{Problem Description}
\label{sec:RPBdescription}
There are~$I$ boxes indexed in~$\mathcal{I}:=\{1,\ldots,I\}$, each of which we may choose to open (or not) over the planning horizon $\mathcal T:=\{1,\ldots,T\}$. Opening box $i \in \sets I$ incurs a cost ${\bm c}_i \in \ensuremath{\field{R}}_+$. Each box has an unknown value ${\bm \xi}_i \in \ensuremath{\field{R}}$, $i\in \sets I$, which will only be revealed if the box is opened. At the beginning of each period $t \in \sets T$, we can either select a box to open or keep one of the opened boxes, earn its value (discounted by $\theta^{t-1}$), and stop the search. We assume that the box values are restricted to lie in the set
$$
\Xi := \left\{ \bm{\xi} \in \ensuremath{\field{R}}^{I} \; : \; \exists {\bm \zeta} \in [-1, 1]^M , \; {\bm \xi}_i \; = \; (1+{\bm \Phi}_i^\top {\bm \zeta}/2) \overline{\bm \xi}_i \quad \forall i \in \sets I \right\},
$$
where ${\bm \zeta} \in \ensuremath{\field{R}}^M$ collects the $M$ risk factors, and $\overline{\bm \xi} \in \ensuremath{\field{R}}^I$ collects the nominal box values. In this problem, the box values are endogenous uncertain parameters whose time of revelation can be controlled by the box open decisions. Thus, the information base, encoded by the vector ${\bm w}_{t}({\bm \xi}) \in \{0,1\}^I$, $t\in \sets T$, is a decision variable. In particular, ${\bm w}_{t,i}({\bm \xi}) = 1$ if and only if box $i \in \sets I$ has been opened on or before time $t \in \sets T$ in scenario ${\bm \xi}$. We assume that ${\bm w}_0({\bm \xi})={\bm 0}$ so that no box is opened before the beginning of the planning horizon. We denote by ${\bm z}_{t,i}({\bm \xi}) \in \{0,1\}$ the decision to keep box $i \in \sets I$ and stop the search at time $t \in \sets T$.
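For intuition, consider the (purely illustrative) single-factor case $M=1$ with ${\bm \Phi}_i = 0.4$ for all $i \in \sets I$. Then ${\bm \Phi}_i^\top {\bm \zeta}/2 \in [-0.2,0.2]$, so that
$$
{\bm \xi}_i \; \in \; [\, 0.8 \, \overline{\bm \xi}_i, \; 1.2 \, \overline{\bm \xi}_i \,] \quad \forall i \in \sets I,
$$
i.e., each box value may deviate by at most $20\%$ from its nominal value, and all box values move together as they are driven by the same risk factor.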
The requirement that at most one box be opened at each time $t\in \sets T$ and that no box be opened if we have stopped the search can be captured by the constraint
\begin{equation}
\sum_{i \in \sets I} ( {\bm w}_{t,i}({\bm \xi}) - {\bm w}_{t-1,i}({\bm \xi}) ) \; \leq \; 1 - \sum_{\tau=1}^t \sum_{i \in \sets I} {\bm z}_{\tau,i}({\bm \xi}) \quad \forall t \in \sets T.
\label{eq:PB_constraint_1box}
\end{equation}
The requirement that only one of the opened boxes can be kept is expressible as
\begin{equation}
{\bm z}_{t,i}({\bm \xi}) \; \leq \; {\bm w}_{t-1,i}({\bm \xi}) \quad \forall t \in \sets T, \; \forall i \in \sets I.
\label{eq:PB_constraint_keep}
\end{equation}
The objective of the PB problem is to select the sequence of boxes to open and the box to keep so as to maximize worst-case net profit. Since the decision to open box $i$ at time $t$ can be expressed as the difference $({\bm w}_{t,i}-{\bm w}_{t-1,i})$, the objective of the PB problem is
$$
\max \;\; \min_{{\bm \xi} \in \Xi} \;\; \sum_{t \in \sets T} \sum_{i \in \sets I} \left[ \theta^{t-1} {\bm \xi}_i {\bm z}_{t,i}({\bm \xi}) - {\bm c}_i ({\bm w}_{t,i}({\bm \xi})-{\bm w}_{t-1,i}({\bm \xi})) \right].
$$
The mathematical model for this problem can be found in Electronic Companion~\ref{sec:EC_PB_formulation}.
\subsubsection{Model in RO\cpluspluslogo}
\label{sec:RPBcode}
We present the RO\cpluspluslogo{} model for the PB problem. We assume that the data of the problem have been defined in \cpluspluslogo{} as summarized in Table~\ref{tab:PBparams}.
\begin{table}[ht!]
\centering \renewcommand{\pveqnstretch}{0.75} \begin{tabular}{c|c|c|c} \hline Model Parameter & \cpluspluslogo{} Name & \cpluspluslogo{} Variable Type & \cpluspluslogo{} Map Keys \\ \hline \hline $\theta$ & \texttt{theta} & \texttt{double} & NA\\ $T$ ($t$) & \texttt{T} (\texttt{t}) & \texttt{uint} & NA \\ $I$ ($i$) & \texttt{I} (\texttt{i}) & \texttt{uint} & NA \\ $M$ ($m$) & \texttt{M} (\texttt{m}) & \texttt{uint} & NA \\ ${\bm c}_i$,\; $i\in\sets I$ & \texttt{CostOpen} & \texttt{map<uint,double>}& \texttt{i=1\ldots I} \\ $\overline{\bm \xi}_i$,\; $i\in\sets I$ & \texttt{NomVal} & \texttt{map<uint,double>} & \texttt{i=1\ldots I}\\ ${\bm \Phi}_{im}$, $i\in\sets I$,\;$m\in\sets M$ & \texttt{FactorCoeff} & \texttt{map<uint,map<uint,double> >} & \texttt{i=1\ldots I, m=1\ldots M}\\ \hline \end{tabular} \caption{List of model parameters and their associated \cpluspluslogo{} variables for the PB problem.} \label{tab:PBparams} \end{table} The PB problem is a multi-stage robust optimization problem involving uncertain parameters whose time of revelation is decision-dependent. Such models can be stored in the \texttt{ROCPPDDUOptModel} class which is derived from \texttt{ROCPPOptModelIF}. \lstset{firstnumber=1} \lstinputlisting[firstline=1,lastline=2]{pandora.cc} Next, we create the RO\cpluspluslogo{} variables associated with uncertain parameters and decision variables in the problem. The correspondence between variables is summarized in Table~\ref{tab:PBvars}. \begin{table}[ht!] \centering \renewcommand{\pveqnstretch}{0.75} \begin{tabular}{c|c|c|c} \hline Parameter & \cpluspluslogo{} Nm. 
& \cpluspluslogo{} Type & \cpluspluslogo{} Map Keys \\ \hline \hline ${\bm z}_{t,i}$,\; $i\in\sets I$,\;$t\in\sets T$ & \texttt{Keep} & \texttt{map<uint,map<uint,ROCPPVarIF\_Ptr> >}& \texttt{1\ldots T, 1\ldots I} \\ ${\bm w}_{t,i}$,\; $i\in\sets I$,\;$t\in\sets T$ & \texttt{MeasVar} & \texttt{map<uint,map<uint,ROCPPVarIF\_Ptr> >}& \texttt{1\ldots T, 1\ldots I} \\ ${\bm \zeta}_m$,\; $m\in\sets M$ & \texttt{Factor} & \texttt{map<uint,ROCPPUnc\_Ptr>}& \texttt{m=1\ldots M} \\ ${\bm \xi}_i$,\; $i\in\sets I$ & \texttt{Value} & \texttt{map<uint,ROCPPUnc\_Ptr>}& \texttt{i=1\ldots I} \\ \hline \end{tabular} \caption{List of model variables and uncertainties and their associated \cpluspluslogo{} variables for the PB problem.} \label{tab:PBvars} \end{table} The uncertain parameters of the PB problem are ${\bm \xi} \in \ensuremath{\field{R}}^I$ and ${\bm \zeta} \in \ensuremath{\field{R}}^M$. We store the RO\cpluspluslogo{} variables associated with these in the \texttt{Value} and \texttt{Factor} maps, respectively. Recall that the first and second (optional) parameters of the \texttt{ROCPPUnc} constructor are the name of the parameter and the time period when it is observed. As~${\bm \xi}$ has a time of revelation that is decision-dependent, we can omit the second parameter when we construct the associated RO\cpluspluslogo{} variables. The \texttt{ROCPPUnc} constructor also admits a third (optional) parameter with default value \texttt{true} that indicates if the uncertain parameter is observable. As ${\bm \zeta}$ is an auxiliary uncertain parameter, we set its time period as being, e.g., 1 and indicate through the third parameter in the constructor of \texttt{ROCPPUnc} that this parameter is not observable. \lstset{firstnumber=3} \lstinputlisting[firstline=3,lastline=10]{pandora.cc} The decision variables of the problem are the measurement variables~${\bm w}$ and the variables~${\bm z}$ which decide on the box to keep. 
We store these in the maps \texttt{MeasVar} and \texttt{Keep}, respectively. In RO\cpluspluslogo, the measurement variables are created automatically for all time periods in the problem by calling the \texttt{add\_ddu()} function which is a public member of \texttt{ROCPPOptModelIF}. This function admits four input parameters: an uncertain parameter, the first and last time period when the decision-maker can choose to observe that parameter, and the cost for observing the parameter. In this problem, the cost of observing ${\bm \xi}_i$ equals~${\bm c}_i$. The measurement variables constructed in this way can be recovered using the \texttt{getMeasVar()} function which admits as inputs the name of an uncertain parameter and the time period for which we want to recover the measurement variable associated with that uncertain parameter. The boolean \texttt{Keep} variables can be built in RO\cpluspluslogo{} using the constructors of the \texttt{ROCPPStaticVarBool} and \texttt{ROCPPAdaptVarBool} classes for the static and adaptive variables, respectively; see lines \ref{ln:keep_vars_start}-\ref{ln:keep_vars_end} in Section~\ref{sec:EC_PB_code}. We omit this here.
\lstset{firstnumber=11}
\lstinputlisting[firstline=11,lastline=18]{pandora.cc}
Having created the decision variables and uncertain parameters, we turn to adding the constraints to the model. To this end, we use the \texttt{StoppedSearch} expression, which tracks the running sum of the \texttt{Keep} variables, to indicate whether, at any given point in time, we have already decided to keep one box and stop the search. We also use the \texttt{NumOpened} expression which, at each period, stores the expression for the total number of boxes that we choose to open in that period. Since the construction of the constraints is similar to that used in the retailer problem, we omit it here and refer the reader to lines \ref{ln:pandora_cstr_start}-\ref{ln:pandora_cstr_end} in Section~\ref{sec:EC_PB_code}.
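For illustration, a sketch of this omitted constraint construction is given below. The names \texttt{MeasVar} and \texttt{Keep} are as in Table~\ref{tab:PBvars}, while the model name \texttt{PBModel} and the empty-expression constructor are assumptions on our part; the authoritative code is in lines \ref{ln:pandora_cstr_start}-\ref{ln:pandora_cstr_end} of Section~\ref{sec:EC_PB_code}.
\begin{lstlisting}[numbers=none]
// Sketch only: constraints (opening and keeping) via running expressions
ROCPPExpr_Ptr StoppedSearch(new ROCPPExpr()); // assumed constructor
for (uint t = 1; t <= T; t++) {
    // Running sum of the Keep variables: has the search stopped by time t?
    for (uint i = 1; i <= I; i++)
        StoppedSearch = StoppedSearch + Keep[t][i];
    // Number of boxes opened in period t
    ROCPPExpr_Ptr NumOpened(new ROCPPExpr());
    for (uint i = 1; i <= I; i++) {
        NumOpened = NumOpened + MeasVar[t][i];
        if (t > 1)
            NumOpened = NumOpened - MeasVar[t-1][i];
    }
    // Open at most one box per period, and none after stopping
    PBModel->add_constraint(NumOpened <= 1. - StoppedSearch);
    // Only a box opened strictly before t can be kept at t (recall w_0 = 0)
    for (uint i = 1; i <= I; i++) {
        if (t > 1)
            PBModel->add_constraint(Keep[t][i] <= MeasVar[t-1][i]);
        else
            PBModel->add_constraint(Keep[t][i] <= 0.);
    }
}
\end{lstlisting}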
Next, we create the uncertainty set and the objective function.
\lstset{firstnumber=46}
\lstinputlisting[firstline=46,lastline=65]{pandora.cc}
We emphasize that the observation costs were automatically added to the objective function when we called the \texttt{add\_ddu()} function.
\subsubsection{Solution in RO\cpluspluslogo}
\label{sec:pandora_solution}
The PB problem is a multi-stage robust problem with decision-dependent information discovery, see~\cite{DDI_VKB, vayanos_ROInfoDiscovery}. RO\cpluspluslogo{} offers two options for solving this class of problems: finite adaptability and piecewise constant decision rules, see Section~\ref{sec:approximation_schemes}. Here, we illustrate how to solve PB using the finite adaptability approach, see Section~\ref{sec:Kadaptability}. Letting \texttt{uint} \texttt{K} store the number of contingency plans $K$ per period, the process of computing the optimal contingency plans is streamlined in RO\cpluspluslogo{}.
\lstset{firstnumber=66}
\lstinputlisting[firstline=66,lastline=69]{pandora.cc}
We consider the instance of PB detailed in Electronic Companion~\ref{sec:EC_PB_instance} for which $T=4$, $M=4$, and $I=5$. For $K=1$ (resp.\ $K=2$ and $K=3$), the problem takes under half a second (resp.\ under half a second and 6 seconds) to approximate and robustify. Its objective value is 2.12 (resp.\ 9.67 and 9.67). Note that with $T=4$ and $K=2$ (resp.\ $K=3$), the total number of contingency plans is $K^{T-1} = 8$ (resp.\ 27). Next, we showcase how optimal contingency plans can be retrieved in RO\cpluspluslogo.
\lstset{firstnumber=73}
\lstinputlisting[firstline=73,lastline=76]{pandora.cc}
When executing this code, the values of all decision variables under each contingency plan $(k_1,\ldots,k_t) \in \times_{\tau=1}^t \sets K^\tau$ are printed. We show here the subset of the output associated with contingency plans where ${\bm z}_{4,2}({\bm \xi})$ equals 1 (for the case $K=2$).
\begin{lstlisting}[numbers=none,language=bash]
Value of variable Keep_4_2 under contingency plan (1-1-2-2) is: 1
\end{lstlisting}
Thus, at time 4, we will keep the second box if and only if the contingency plan we choose is $(k_1,k_2,k_3,k_4)=(1,1,2,2)$. We can display the first time that an uncertain parameter is observed using the following RO\cpluspluslogo{} code.
\lstset{firstnumber=76}
\lstinputlisting[firstline=76,lastline=76]{pandora.cc}
When executing this code, the time when ${\bm \xi}_4$ is observed under each contingency plan $(k_1,\ldots,k_T) \in \times_{\tau=1}^{T} \sets K^\tau$ is printed. In this case, part of the output we get is as follows.
\begin{lstlisting}[numbers=none,language=bash]
Parameter Value_4 under contingency plan (1-1-2-1) is observed at time 3
Parameter Value_4 under contingency plan (1-2-1-1) is never observed
\end{lstlisting}
Thus, in an optimal solution, ${\bm \xi}_4$ is observed at time 3 under contingency plan $(k_1,k_2,k_3,k_4)=(1,1,2,1)$. On the other hand, it is never observed under contingency plan $(1,2,1,1)$.
\subsection{Stochastic Best Box Problem with Uncertain Observation Costs}
\label{sec:best_box}
We consider a variant of the Pandora's Box problem, known as Best Box (BB), in which observation costs are uncertain and subject to a budget constraint. We assume that the decision-maker is interested in maximizing the expected value of the box kept.
\subsubsection{Problem Description}
\label{sec:best_box_description}
There are~$I$ boxes indexed in~$\mathcal{I}:=\{1,\ldots,I\}$, each of which we may choose to open (or not) over the planning horizon $\mathcal T:=\{1,\ldots,T\}$. Opening box $i \in \sets I$ incurs an uncertain cost ${\bm \xi}_i^{\rm c} \in \ensuremath{\field{R}}_+$. Each box has an unknown value ${\bm \xi}_i^{\rm v} \in \ensuremath{\field{R}}$, $i\in \sets I$. The value of each box and the cost of opening it will only be revealed if the box is opened. The total cost of opening boxes cannot exceed budget $B$.
At each period $t \in \sets T$, we can either open a box or keep one of the opened boxes, earn its value (discounted by $\theta^{t-1}$), and stop the search. We assume that box values and costs are uniformly distributed in the set
$
\Xi := \left\{ {\bm \xi}^{\rm v} \in \ensuremath{\field{R}}_+^{I}, \; {\bm \xi}^{\rm c} \in \ensuremath{\field{R}}_+^{I} \; : \; {\bm \xi}^{\rm v} \; \leq \; \overline{\bm \xi}^{\rm v}, \; {\bm \xi}^{\rm c} \; \leq \; \overline{\bm \xi}^{\rm c} \right\},
$
where $\overline{\bm \xi}^{\rm v}, \; \overline{\bm \xi}^{\rm c} \in \ensuremath{\field{R}}^I$. In this problem, the box values and costs are endogenous uncertain parameters whose time of revelation can be controlled by the box open decisions. For each $i\in \sets I$ and $t\in \sets T$, we let ${\bm w}_{t,i}^{\rm v}({\bm \xi})$ and ${\bm w}_{t,i}^{\rm c}({\bm \xi})$ indicate whether the parameters ${\bm \xi}_i^{\rm v}$ and ${\bm \xi}_i^{\rm c}$ have been observed on or before time $t$. In particular, since opening box $i$ reveals both its value and its cost, ${\bm w}_{t,i}^{\rm v}({\bm \xi}) = {\bm w}_{t,i}^{\rm c}({\bm \xi})$ for all $i$, $t$, and ${\bm \xi}$. We assume that ${\bm w}_0({\bm \xi})={\bm 0}$ so that no box is opened before the beginning of the planning horizon. We denote by ${\bm z}_{t,i}({\bm \xi}) \in \{0,1\}$ the decision to keep box $i \in \sets I$ and stop the search at time $t \in \sets T$. The requirement that at most one box be opened at each time $t\in \sets T$ and that no box be opened if we have stopped the search can be captured in a manner that parallels constraint~\eqref{eq:PB_constraint_1box}. Similarly, the requirement that only one of the opened boxes can be kept can be modelled using a constraint similar to~\eqref{eq:PB_constraint_keep}.
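Concretely, with ${\bm w}^{\rm v}$ playing the role of ${\bm w}$ in the PB problem, these two requirements read
$$
\sum_{i \in \sets I} ( {\bm w}^{\rm v}_{t,i}({\bm \xi}) - {\bm w}^{\rm v}_{t-1,i}({\bm \xi}) ) \; \leq \; 1 - \sum_{\tau=1}^t \sum_{i \in \sets I} {\bm z}_{\tau,i}({\bm \xi}) \quad \text{ and } \quad {\bm z}_{t,i}({\bm \xi}) \; \leq \; {\bm w}^{\rm v}_{t-1,i}({\bm \xi}),
$$
which must hold for all $t \in \sets T$ and $i \in \sets I$.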
The budget constraint and objective can be expressed compactly as $$ \sum_{i \in \sets I} {\bm \xi}_i^{\rm c} {\bm w}_{T,i}^{\rm v} ({\bm \xi}) \leq B, \qquad \text{ and } \qquad \max \;\; \mathbb{E} \; \; \left[ \sum_{t \in \sets T} \sum_{i \in \sets I} \;\; \theta^{t-1} {\bm \xi}_i^{\rm v} {{\bm z}_{t,i}}({\bm \xi}) \right], $$ respectively. The full model for this problem can be found in Electronic Companion~\ref{sec:EC_BB_formulation}. \subsubsection{Model in RO\cpluspluslogo} \label{sec:BBcode} We assume that the data, decision variables, and uncertain parameters of the problem have been defined as in Tables~\ref{tab:BBparams} and~\ref{tab:BBvars}. \begin{table}[ht!] \centering \renewcommand{\pveqnstretch}{0.75} \begin{tabular}{c|c|c|c} \hline Model Parameter & \cpluspluslogo{} Variable Name & \cpluspluslogo{} Variable Type & \cpluspluslogo{} Map Keys \\ \hline \hline $B$ & \texttt{B} & \texttt{double} & NA \\ $\overline{\bm \xi}_i^{\rm c}$,\; $i\in\sets I$ & \texttt{CostUB} & \texttt{map<uint,double>}& \texttt{i=1\ldots I} \\ $\overline{\bm \xi}_i^{\rm v}$,\; $i\in\sets I$ & \texttt{ValueUB} & \texttt{map<uint,double>} & \texttt{i=1\ldots I} \\ \hline \end{tabular} \caption{List of model parameters and their associated \cpluspluslogo{} variables for the BB problem. The parameters $T(t)$ and $I(i)$ are as in Table~\ref{tab:PBparams} and we thus omit them here.} \label{tab:BBparams} \end{table} \begin{table}[ht!] \centering \renewcommand{\pveqnstretch}{0.75} \begin{tabular}{c|c|c|c} \hline Parameter & \cpluspluslogo{} Nm. 
& \cpluspluslogo{} Type & \cpluspluslogo{} Map Keys \\ \hline \hline ${\bm w}_{t,i}^{\rm c}$, $i\in\sets I$, $t\in\sets T$ & \texttt{MVcost} & \texttt{map<uint,map<uint,ROCPPVarIF\_Ptr> >}& \texttt{1\ldots T, 1\ldots I} \\ ${\bm w}_{t,i}^{\rm v}$, $i\in\sets I$, $t\in\sets T$ & \texttt{MVval} & \texttt{map<uint,map<uint,ROCPPVarIF\_Ptr> >}& \texttt{1\ldots T, 1\ldots I} \\ ${\bm \xi}_i^{\rm c}$, $i\in\sets I$ & \texttt{Cost} & \texttt{map<uint,ROCPPUnc\_Ptr>}& \texttt{i=1\ldots I} \\ ${\bm \xi}_i^{\rm v}$, $i\in\sets I$ & \texttt{Value} & \texttt{map<uint,ROCPPUnc\_Ptr>}& \texttt{i=1\ldots I} \\ \hline \end{tabular} \caption{List of model variables and uncertainties and their associated \cpluspluslogo{} variables for the BB problem. The variables ${\bm z}_{i,t}$, $i\in\sets I$, $t\in\sets T$, are as in Table~\ref{tab:PBvars} and we thus omit them here.} \label{tab:BBvars} \end{table} We create a stochastic model with decision-dependent information discovery as follows. \lstset{firstnumber=1} \lstinputlisting[firstline=1,lastline=2]{bestbox.cc} To model the requirement that ${\bm \xi}_i^{\rm c}$ and ${\bm \xi}_i^{\rm v}$ must be observed simultaneously, the function \texttt{pair\_uncertainties()} may be employed in RO\cpluspluslogo. \lstset{firstnumber=23} \lstinputlisting[firstline=23,lastline=24]{bestbox.cc} To build the budget constraint we use the RO\cpluspluslogo{} expression \texttt{AmountSpent}. \lstset{firstnumber=53} \lstinputlisting[firstline=53,lastline=57]{bestbox.cc} The construction of the remaining constraints and of the objective parallels that for the Pandora Box problem and we thus omit it. We refer to~\ref{sec:EC_BB_code} for the full code. \subsubsection{Solution in RO\cpluspluslogo} \label{sec:BB_solution} The BB problem is a multi-stage stochastic problem with decision-dependent information discovery, see~\cite{DDI_VKB}. We thus propose to solve it using PWC decision rules. 
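To fix intuition for the PWC approximation, recall that a PWC decision rule is constant on each cell of a rectangular partition of the uncertainty set: parameter $i$'s range is split into ${\bm r}_i$ pieces, and each realization is mapped to a subset index ${\bm s} = (s_1,\ldots,s_k)$. The helper below is our own sketch of this mapping; in particular, the equal-width placement of breakpoints on $[0,\overline{\xi}_i]$ is an assumption of the sketch, not necessarily the placement used internally by RO\cpluspluslogo{}.

```cpp
#include <cstddef>
#include <vector>

// Sketch of how a piecewise constant (PWC) decision rule is evaluated: each
// uncertain parameter's range is split into r_i pieces, a realization xi is
// mapped to a subset index s = (s_1,...,s_k), and the decision is the
// constant attached to that subset. Equal-width pieces on [0, ub_i] are an
// assumption of this sketch; the solver's breakpoint placement may differ.
std::vector<int> subsetOf(const std::vector<double>& xi,
                          const std::vector<double>& ub,
                          const std::vector<int>& r) {
    std::vector<int> s(xi.size());
    for (std::size_t i = 0; i < xi.size(); ++i) {
        double width = ub[i] / r[i];
        int piece = static_cast<int>(xi[i] / width) + 1;
        if (piece > r[i]) piece = r[i]; // xi_i == ub_i falls in the last piece
        s[i] = piece;
    }
    return s;
}
```

With ${\bm r}_i = 1$ the rule is constant in parameter $i$; increasing ${\bm r}_i$ refines the approximation at the price of more subsets ($\prod_i {\bm r}_i$ in total).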
We consider the instance of BB detailed in Electronic Companion~\ref{sec:EC_BB_instance}, which has $T=4$ and $I=5$. To employ a breakpoint configuration ${\bm r}=(1,1,1,1,1,3,3,1,3,1)$ for the PWC approximation, we use the following code. \lstset{firstnumber=74} \lstinputlisting[firstline=74,lastline=80]{bestbox.cc} RO\cpluspluslogo{} can approximate and robustify the problem in 1 second using lines~\ref{ln:BB_robustify_solve_start}-\ref{ln:BB_robustify_solve_end} in~\ref{sec:EC_BB_code}. Under this breakpoint configuration, the optimal profit is $934.2$, compared to $792.5$ for the static decision rule. The optimal solution can be printed to screen using the \texttt{printOut} function, see lines~\ref{ln:BB_print_start}-\ref{ln:BB_print_end} in~\ref{sec:EC_BB_code}. Part of the resulting output is \begin{lstlisting}[numbers=none,language=bash] On subset 1111131111: Keep_4_1 = 1 Uncertain parameter Value_4 on subset 1111112131 is observed at stage 3 \end{lstlisting} Thus, on subset ${\bm s}=(1,1,1,1,1,3,1,1,1,1)$, the first box is kept at time~4. On subset ${\bm s}=(1,1,1,1,1,1,2,1,3,1)$, the value of box 4 is observed at stage 3. \section{ROB File Format} \label{sec:file_format} Given a robust/stochastic optimization problem expressed in RO\cpluspluslogo{}, our platform can generate a file displaying the problem in human-readable format. We use the Pandora Box problem from Section~\ref{sec:pandora} to illustrate our proposed format, with extension ``.rob''. The file starts with the \texttt{Objective} part that presents the objective function of the problem: to minimize either expected or worst-case costs, as indicated by \texttt{E} or \texttt{max}, respectively. For example, since the PB problem optimizes worst-case profit, we obtain the following. \begin{lstlisting}[numbers=none, language=bash] Objective: min max -1 Keep_1_1 Value_1 -1 Keep_1_2 Value_2 -1 Keep_1_3 Value_3 ... 
\end{lstlisting} Then come the \texttt{Constraints} and \texttt{Uncertainty Set} parts, which list the constraints using interpretable ``\lstinline{<=}'', ``\lstinline{>=}'', and ``\lstinline{==}'' operators. We list one constraint for each part here. \begin{lstlisting}[numbers=none, language=bash] Constraints: c0: -1 mValue_2_1 +1 mValue_1_1 <= +0 ... Uncertainty Set: c0: -1 Factor_1 <= +1 ... \end{lstlisting} The next part, \texttt{Decision Variables}, lists the decision variables of the problem. For each variable, we list its name, type, whether it is static or adaptive, its time stage, and whether it is a measurement variable or not. If it is a measurement variable, we also display the uncertain parameter whose time of revelation it controls. \begin{lstlisting}[numbers=none, language=bash] Decision Variables: Keep_1_1: Boolean, Static, 1, Non-Measurement mValue_2_2: Boolean, Adaptive, 2, Measurement, Value_2 \end{lstlisting} The \texttt{Bounds} part then displays the upper and lower bounds for the decision variables. \begin{lstlisting}[numbers=none, language=bash] Bounds: 0 <= Keep_1_1 <= 1 \end{lstlisting} Finally, the \texttt{Uncertainties} part lists, for each uncertain parameter, its name, whether the parameter is observable or not, its time stage, if the parameter has a time of revelation that is decision-dependent, and the first and last stages when the parameter can be observed. \begin{lstlisting}[numbers=none, language=bash] Uncertainties: Factor_4: Not Observable, 1, Non-DDU Value_1: Observable, 1, DDU, 1, 4 \end{lstlisting} \section{Extensions} \label{sec:extensions} \subsection{Integer Decision Variables} \label{sec:integer_variables} RO\cpluspluslogo{} can solve problems involving integer decision variables. In the case of the CDR/PWC approximations, integer adaptive variables are directly approximated by constant/piecewise constant decisions that are integer on each subset. 
In the case of the finite adaptability approximation, bounded integer variables are automatically expressed as finite sums of binary variables before the finite adaptability approximation is applied. \subsection{Decision-Dependent Uncertainty Sets} \label{sec:ddus} RO\cpluspluslogo{} can solve problems involving decision-dependent uncertainty sets of the form \begin{equation*} \Xi({\bm z}) \; := \left\{ {\bm \xi} \in \ensuremath{\field{R}}^k \; : \; \exists {\bm \zeta}^s \in \ensuremath{\field{R}}^{k_s}, \; s=1,\ldots, S \; : \; {\bm P}^s({\bm z}) {\bm \xi} + {\bm Q}^s({\bm z}) {\bm \zeta}^s + {\bm q}^s({\bm z}) \in \sets K^s, \; s=1,\ldots, S \right\}, \end{equation*} where ${\bm z}$ are static binary variables and ${\bm P}^s({\bm z}) \in \ensuremath{\field{R}}^{r_s \times k}$, ${\bm Q}^s({\bm z}) \in \ensuremath{\field{R}}^{r_s \times k_s}$, and ${\bm q}^s({\bm z}) \in \ensuremath{\field{R}}^{r_s}$, are all linear in ${\bm z}$, and $\sets K^s$, $s=1,\ldots,S$, are closed convex pointed cones in $\ensuremath{\field{R}}^{r_s}$. \subsection{Limited Memory Decision Rules} \label{sec:limited_memory} For problems involving long time horizons ($>100$), the LDR/CDR and PWL/PWC decision rules can become computationally expensive. Limited memory decision rules approximate adaptive decisions by linear functions of the \emph{recent} history of observations. The \texttt{memory} parameter of the \texttt{LDRCDRApproximator} can be used in RO\cpluspluslogo{} to trade off optimality against computational complexity. 
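The evaluation of a limited memory rule can be sketched in a few lines of plain C++. The function below is ours and only illustrates the idea (an affine function of the last $m$ observations); it is not part of the RO\cpluspluslogo{} API.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of a limited memory linear decision rule: the stage-t decision is
// an affine function of only the last m observations instead of the full
// history xi_1,...,xi_{t-1}. Names and data layout are ours for illustration.
double evalLimitedMemoryLDR(double intercept,
                            const std::vector<double>& coeff,   // size m
                            const std::vector<double>& history, // xi_1..xi_{t-1}
                            std::size_t m) {
    double y = intercept;
    std::size_t window = std::min(m, history.size());
    for (std::size_t j = 0; j < window; ++j) // most recent observation first
        y += coeff[j] * history[history.size() - 1 - j];
    return y;
}
```

A full-memory LDR at stage $t$ carries $t-1$ coefficients per decision, whereas the limited memory rule carries at most $m$, which is what makes long horizons tractable.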
\subsection{Warm Start} \label{sec:warm_start} \paragraph{Finite Adaptability.} RO\cpluspluslogo{} provides the ability to warm start the solution to a finite adaptability problem with $(K_1,\ldots,K_T) \in \ensuremath{\field{Z}}_{+}^T$ contingency plans using the solution to a finite adaptability problem with fewer contingency plans $(K_1',\ldots,K_T') \in \ensuremath{\field{Z}}_{+}^T$, where $K_t' \leq K_t$ for all $t \in \sets T$ and $K_t' < K_t$ for at least one $t \in \sets T$. \paragraph{PWC/PWL Decision Rule.} Similarly, RO\cpluspluslogo{} provides the ability to warm start the solution to a PWC/PWL decision rule problem with breakpoint configuration ${\bm r} = ({\bm r}_1,\ldots,{\bm r}_k)$ using the solution to a PWC/PWL decision rule problem with breakpoint configuration ${\bm r}' = ({\bm r}_1',\ldots,{\bm r}_k')$. The only requirement is that ${\bm r}_i \geq {\bm r}_i'$ and ${\bm r}_i / {\bm r}_i'$ be a multiple of 2 for all $i \in \{1,\ldots,k\}$ such that ${\bm r}_i > {\bm r}_i'$. \def\small{\small} \theendnotes \ACKNOWLEDGMENT{ This work was supported in part by the Operations Engineering Program of the National Science Foundation under NSF Award No.\ 1763108. The authors are grateful to Daniel Kuhn, Ber{\c{c}} Rustem, and Wolfram Wiesemann, for valuable discussions that helped shape this work. 
} \input{main.bbl} \ECSwitch \ECHead{E-Companion} \section{Supplemental Material: Retailer-Supplier Problem} \label{sec:EC_RSFC} \subsection{Retailer-Supplier Problem: Mathematical Formulation} \label{sec:EC_RSFC_formulation} Using the notation introduced in Section~\ref{sec:retailer}, the robust RSFC problem can be expressed mathematically as: \begin{equation*}\renewcommand{\pveqnstretch}{1} \begin{array}{cl} \ensuremath{\mathop{\mathrm{minimize}}\limits} & \quad \displaystyle \max_{{\bm \xi} \in \Xi} \; \sum_{t \in \sets T} {\bm c}_t^{\rm o} {\bm y}_t^{\rm o}({\bm \xi}) + {\bm y}_t^{\rm dc}({\bm \xi}) + {\bm y}_t^{\rm dp}({\bm \xi}) + {\bm y}_{t+1}^{\rm hs}({\bm \xi}) \\ \text{\rm subject to} & \quad {\bm y}_t^{\rm c} \in \ensuremath{\field{R}}_+ \quad \forall t \in \sets T \\ & \quad {\bm y}_t^{\rm o} , \; {\bm y}_t^{\rm dc}, \; {\bm y}_t^{\rm dp}, \; {\bm y}_{t+1}^{\rm hs} \in \mathcal L_T^1 \quad \forall t \in \sets T \\ & \quad \left. \!\!\!\begin{array}{l} {\bm x}^{\rm i}_{t+1}({\bm \xi}) \; = \; {\bm x}^{\rm i}_t({\bm \xi}) + {\bm y}^{\rm o}_t({\bm \xi}) - {\bm \xi}_{t+1} \\ \underline{\bm y}_t^{\rm o} \leq {\bm y}_t^{\rm o}({\bm \xi}) \leq \overline{\bm y}_t^{\rm o} , \;\; \underline{\bm y}_t^{\rm co} \leq \sum_{\tau = 1}^t {\bm y}_\tau^{\rm o}({\bm \xi}) \leq \overline{\bm y}_t^{\rm co} \\ {\bm y}_t^{\rm dc}({\bm \xi}) \geq {\bm c}_t^{\rm dc+} ( {\bm y}_t^{\rm c} - {\bm y}_{t-1}^{\rm c} ) \\ {\bm y}_t^{\rm dc}({\bm \xi}) \geq {\bm c}_t^{\rm dc-} ({\bm y}_{t-1}^{\rm c} - {\bm y}_{t}^{\rm c} ) \\ {\bm y}_t^{\rm dp}({\bm \xi}) \geq {\bm c}_t^{\rm dp+} ({\bm y}_t^{\rm o}({\bm \xi}) - {\bm y}_{t}^{\rm c}) \\ {\bm y}_t^{\rm dp}({\bm \xi}) \geq {\bm c}_t^{\rm dp-} ({\bm y}_t^{\rm c} - {\bm y}_{t}^{\rm o}({\bm \xi})) \\ {\bm y}_{t+1}^{\rm hs}({\bm \xi}) \geq {\bm c}_{t+1}^{\rm h}{\bm x}^{\rm i}_{t+1}({\bm \xi}) \\ {\bm y}_{t+1}^{\rm hs}({\bm \xi}) \geq -{\bm c}_{t+1}^{\rm s} {\bm x}^{\rm i}_{t+1}({\bm \xi}) \end{array} \quad \right\} \quad \forall t \in \sets T , \; {\bm \xi} \in \Xi \\ & \quad 
\left. \!\!\! \begin{array}{l} {\bm y}_t^{\rm o}({\bm \xi}) = {\bm y}_t^{\rm o}({\bm \xi}') \\ {\bm y}_t^{\rm dc}({\bm \xi}) = {\bm y}_t^{\rm dc}({\bm \xi}') \\ {\bm y}_t^{\rm dp}({\bm \xi}) = {\bm y}_t^{\rm dp}({\bm \xi}') \end{array} \qquad \quad \right\} \quad \forall {\bm \xi}, {\bm \xi}' \in \Xi : {\bm w}_{t-1} \circ {\bm \xi} = {\bm w}_{t-1} \circ {\bm \xi}', \; \forall t \in \sets T \\ & \quad {\bm y}_{t+1}^{\rm hs}({\bm \xi}) = {\bm y}_{t+1}^{\rm hs}({\bm \xi}') \qquad \quad \forall {\bm \xi}, {\bm \xi}' \in \Xi : {\bm w}_{t} \circ {\bm \xi} = {\bm w}_{t} \circ {\bm \xi}', \; \forall t \in \sets T. \end{array} \end{equation*} The last set of constraints corresponds to non-anticipativity constraints. The other constraints are explained in Section~\ref{sec:RSFC_description}. \subsection{Retailer-Supplier Problem: Full RO\cpluspluslogo{} Code} \label{sec:EC_RSFC_code} \lstset{firstnumber=1} \lstinputlisting{retailer.cc} \subsection{Retailer-Supplier Problem: Instance Parameters} \label{sec:EC_RSFC_instance} The parameters for the instance of the problem that we solve in Section~\ref{sec:RSFC_solution} are provided in Table~\ref{tab:RSFC_instance_params}. They correspond to the data from instance W12 in~\cite{BenTal_RSFC}. \begin{table}[ht!] 
\centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|} \hline $T$ & ${\bm x}^{\rm i}_1$ & ${\bm y}^{\rm c}_0$ & $\overline \xi$ & $\rho$ & $\underline{\bm y}_t^{\rm o}$ & $\overline{\bm y}_t^{\rm o}$ & $\underline{\bm y}_t^{\rm co}$ & $\overline{\bm y}_t^{\rm co}$ & ${\bm c}^{\rm o}_{t}$ & ${\bm c}^{\rm h}_{t+1}$ & ${\bm c}^{\rm s}_{t+1}$ & ${\bm c}^{\rm dc+}_{t}$ & ${\bm c}^{\rm dc-}_{t}$ & ${\bm c}^{\rm dp+}_{t}$ & ${\bm c}^{\rm dp-}_{t}$ \\ \hline \hline 12 & 0 & 100 & 100 & 10\% & 0 & 200 & 0 & $200t$ & 10 & 2 & 10 & 10 & 10 & 10 & 10 \\ \hline \end{tabular} \caption{Parameters for the instance of the RSFC problem that we solve in Section~\ref{sec:RSFC_solution}.} \label{tab:RSFC_instance_params} \end{table} \section{Supplemental Material: Robust Pandora's Box Problem} \label{sec:EC_PB} \subsection{Robust Pandora's Box Problem: Mathematical Formulation} \label{sec:EC_PB_formulation} Using the notation introduced in Section~\ref{sec:pandora}, the robust PB problem can be expressed mathematically as: \begin{equation*}\renewcommand{\pveqnstretch}{1} \begin{array}{cl} \ensuremath{\mathop{\mathrm{maximize}}\limits} & \quad \displaystyle \min_{{\bm \xi} \in \Xi} \;\; \sum_{t \in \sets T} \sum_{i \in \sets I} \theta^{t-1} {\bm \xi}_i {\bm z}_{t,i}({\bm \xi}) - {\bm c}_i ({\bm w}_{t,i}({\bm \xi})-{\bm w}_{t-1,i}({\bm \xi})) \\ \text{\rm subject to} & \quad {\bm z}_{t,i}, \; {\bm w}_{t,i} \in \{0,1\} \quad \forall t \in \sets T, \; \forall i \in \sets I \\ & \quad \left. \!\!\!\begin{array}{l} \displaystyle \sum_{i \in \sets I} ( {\bm w}_{t,i}({\bm \xi}) - {\bm w}_{t-1,i}({\bm \xi}) ) \; \leq \; 1 - \sum_{\tau=1}^t \sum_{i \in \sets I} {\bm z}_{\tau,i}({\bm \xi}) \\ {\bm z}_{t,i}({\bm \xi}) \; \leq \; {\bm w}_{t-1,i}({\bm \xi}) \quad \forall i \in \sets I\\ \end{array} \quad \right\} \quad \forall t \in \sets T , \; {\bm \xi} \in \Xi \\ & \quad \left. \!\!\! 
\begin{array}{l} {\bm z}_{t,i}({\bm \xi}) = {\bm z}_{t,i}({\bm \xi}') \\ {\bm w}_{t,i}({\bm \xi}) = {\bm w}_{t,i}({\bm \xi}') \\ \end{array} \qquad \quad \right\} \quad \forall {\bm \xi}, {\bm \xi}' \in \Xi : {\bm w}_{t-1} \circ {\bm \xi} = {\bm w}_{t-1} \circ {\bm \xi}', \; \forall i \in \sets I, \; \forall t \in \sets T . \end{array} \end{equation*} The last set of constraints in this problem are non-anticipativity constraints. The other constraints are explained in Section~\ref{sec:RPBdescription}. \subsection{Robust Pandora's Box Problem: Full RO\cpluspluslogo{} Code} \label{sec:EC_PB_code} \lstset{firstnumber=1} \lstinputlisting{pandora.cc} \subsection{Robust Pandora's Box Problem: Instance Parameters} \label{sec:EC_PB_instance} The parameters for the instance of the robust Pandora's box problem that we solve in Section~\ref{sec:pandora_solution} are provided in Table~\ref{tab:PB_instance_params}. \begin{table}[ht!] \centering \begin{tabular}{|c|c|} \hline Parameter & Value \\ \hline \hline $(\theta,T, I,M)$ & (1,4,5,4) \\ ${\bm c}$ & $(0.69, 0.43, 0.01, 0.91, 0.64)$ \\ $\overline{\bm \xi}$ & $(5.2, 8, 19.4, 9.6, 13.2)$ \\ ${\bm \Phi}$ & $\begin{pmatrix} 0.17 & -0.7 & -0.13 & -0.6 \\ 0.39 & 0.88 & 0.74 & 0.78 \\ 0.17 & -0.6 & -0.17 & -0.84 \\ 0.09 & -0.07 & -0.52 & 0.88 \\ 0.78 & 0.94 & 0.43 & -0.58 \end{pmatrix}$ \\ \hline \end{tabular} \caption{Parameters for the instance of the PB problem that we solve in Section~\ref{sec:pandora_solution}.} \label{tab:PB_instance_params} \end{table} \section{Supplemental Material: Stochastic Best Box Problem} \label{sec:EC_BB} \subsection{Stochastic Best Box: Problem Formulation} \label{sec:EC_BB_formulation} Using the notation introduced in Section~\ref{sec:best_box}, the BB problem can be expressed mathematically as: \begin{equation*}\renewcommand{\pveqnstretch}{1} \begin{array}{cl} \ensuremath{\mathop{\mathrm{maximize}}\limits} & \quad \displaystyle \mathbb E \left[ \sum_{t \in \sets T} \sum_{i \in \sets I} \theta^{t-1} 
{\bm \xi}_i^{\rm v} {\bm z}_{t,i}({\bm \xi}) \right] \\ \text{\rm subject to} & \quad {\bm z}_{t,i}, \; {\bm w}_{t,i}^{\rm c}, \; {\bm w}_{t,i}^{\rm v} \in \{0,1\} \quad \forall t \in \sets T, \; \forall i \in \sets I \\ & \quad {\bm w}_{t,i}^{\rm c}({\bm \xi}) \; = \; {\bm w}_{t,i}^{\rm v}({\bm \xi}) \quad \forall t \in \sets T, \; \forall i \in \sets I \\ & \quad \left. \!\!\!\begin{array}{l} \displaystyle \sum_{i \in \sets I} {\bm \xi}_i^{\rm c} {\bm w}_{T,i}^{\rm v}({\bm \xi}) \leq B \\ \displaystyle \sum_{i \in \sets I} ( {\bm w}_{t,i}^{\rm v}({\bm \xi}) - {\bm w}_{t-1,i}^{\rm v}({\bm \xi}) ) \; \leq \; 1 - \sum_{\tau=1}^t \sum_{i \in \sets I} {\bm z}_{\tau,i}({\bm \xi}) \\ {\bm z}_{t,i}({\bm \xi}) \; \leq \; {\bm w}_{t-1,i}^{\rm v}({\bm \xi}) \quad \forall i \in \sets I\\ \end{array} \quad \right\} \quad \forall t \in \sets T , \; {\bm \xi} \in \Xi \\ & \quad \left. \!\!\! \begin{array}{l} {\bm z}_{t,i}({\bm \xi}) = {\bm z}_{t,i}({\bm \xi}') \\ {\bm w}_{t,i}^{\rm c}({\bm \xi}) = {\bm w}_{t,i}^{\rm c}({\bm \xi}') \\ {\bm w}_{t,i}^{\rm v}({\bm \xi}) = {\bm w}_{t,i}^{\rm v}({\bm \xi}') \end{array} \qquad \quad \right\} \quad \forall {\bm \xi}, {\bm \xi}' \in \Xi : {\bm w}_{t-1} \circ {\bm \xi} = {\bm w}_{t-1} \circ {\bm \xi}', \; \forall i \in \sets I, \; \forall t \in \sets T . \end{array} \end{equation*} The first set of constraints stipulates that ${\bm \xi}_i^{\rm c}$ and ${\bm \xi}_i^{\rm v}$ must be observed simultaneously. The second set of constraints is the budget constraint. The third set of constraints stipulates that at each stage, we can either open a box or stop the search, in which case we cannot open a box in the future. The fourth set of constraints ensures that we can only keep a box that we have opened. The last set of constraints corresponds to decision-dependent non-anticipativity constraints. 
\subsection{Stochastic Best Box Problem: Full RO\cpluspluslogo{} Code} \label{sec:EC_BB_code} \lstset{firstnumber=1} \lstinputlisting{bestbox.cc} \subsection{Stochastic Best Box: Instance Parameters} \label{sec:EC_BB_instance} The parameters for the instance of the stochastic best box problem that we solve in Section~\ref{sec:BB_solution} are provided in Table~\ref{tab:BB_instance_params}. \begin{table}[ht!] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline $T$ & $I$ & $B$ & $\theta$ & $\overline{\bm \xi}^{\rm c}$ & $\overline{\bm \xi}^{\rm v}$ \\ \hline \hline 4 & 5 & 163 & 1 & (40,86,55,37,30) & (1030,1585,971,971,694) \\ \hline \end{tabular} \caption{Parameters for the instance of the BB problem that we solve in Section~\ref{sec:BB_solution}.} \label{tab:BB_instance_params} \end{table} \end{document}
Fred Van Oystaeyen

Fred Van Oystaeyen (born 1947), also Freddy van Oystaeyen, is a mathematician and emeritus professor of mathematics at the University of Antwerp.[1] He has pioneered work on noncommutative geometry, in particular noncommutative algebraic geometry.

Biography

In 1972, Fred Van Oystaeyen obtained his Ph.D. from the Vrije Universiteit of Amsterdam. In 1975 he became professor at the University of Antwerp, Department of Mathematics and Computer Science.[2] Van Oystaeyen has published well over 200 scientific papers and several books. One of his recent books, Virtual Topology and Functor Geometry, provides an introduction to noncommutative topology. On the occasion of his 60th birthday, a conference in his honour was held in Almería, September 18 to 22, 2007;[3] on March 25, 2011, he received his first honorary doctorate from that same university, Universidad de Almería. At the campus of Universidad de Almería the street "Calle Fred Van Oystaeyen" (previously "Calle los Gallardos") is named after him.[4] In 2019, he will receive another honorary doctorate from the Vrije Universiteit Brussel.[5]

Books

• Hidetoshi Marubayashi, Fred Van Oystaeyen: Prime Divisors and Noncommutative Valuation Theory, Springer, 2012, ISBN 978-3-6423-1151-2
• Fred Van Oystaeyen: Virtual topology and functor geometry, Chapman & Hall, 2008, ISBN 978-1-4200-6056-0
• Constantin Nastasescu, Freddy van Oystaeyen: Methods of graded rings, Lecture Notes in Mathematics 1836, Springer, February 2004, ISBN 978-3-540-20746-7
• Freddy van Oystaeyen: Algebraic geometry for associative algebras, M. Dekker, New York, 2000, ISBN 0-8247-0424-X
• F. van Oystaeyen, A. Verschoren: Relative invariants of rings: the noncommutative theory, M. Dekker, New York, 1984, ISBN 0-8247-7281-4
• F. van Oystaeyen, A. Verschoren: Relative invariants of rings: the commutative theory, M. Dekker, New York, 1983, ISBN 0-8247-7043-9
• Freddy M.J. van Oystaeyen, Alain H.M.J. Verschoren: Non-commutative algebraic geometry: an introduction, Springer-Verlag, 1981, ISBN 0-387-11153-0
• F. van Oystaeyen, A. Verschoren: Reflectors and localization: application to sheaf theory, M. Dekker, New York, 1979, ISBN 0-8247-6844-2
• F. van Oystaeyen: Prime spectra in non-commutative algebra, Springer-Verlag, 1975, ISBN 0-8247-0424-X

References

1. "Fred Van Oystaeyen. Emeritus". Universiteit Antwerpen. Retrieved 23 November 2014.
2. "About the author", Methods of Graded Rings (Lecture Notes in Mathematics), Springer.
3. "BMS-NCM NEWS" (PDF). 15 September 2007. Retrieved 23 November 2014.
4. "StreetDir.es - Calle Fred van Oystaeyen". Retrieved 27 May 2015.
5. "Eredoctoren / Honorary doctors". Vrije Universiteit Brussel. https://www.vub.ac.be/dhc2019#eredoctoren-/-honorary-doctors

External links

• Fred Van Oystaeyen, Universiteit Antwerpen - Academic bibliography - Research
• Fred van Oystaeyen at the nLab
• Fred Van Oystaeyen, publication list at Scientific Commons
• Fred Van Oystaeyen: On the Reality of Noncommutative Space, neverendingbooks.org
\begin{document} \title{Trigonometric solutions of the associative Yang-Baxter equation} \author{Travis Schedler} \begin{abstract} We classify trigonometric solutions to the associative Yang-Baxter equation (AYBE) for $A = \mathrm{Mat}_n$, the associative algebra of $n$-by-$n$ matrices. The AYBE was first presented in a 2000 article by Marcelo Aguiar and also independently by Alexandre Polishchuk. Trigonometric AYBE solutions limit to solutions of the classical Yang-Baxter equation. We find that such solutions of the AYBE are equal to special solutions of the quantum Yang-Baxter equation (QYBE) classified by Gerstenhaber, Giaquinto, and Schack (GGS), divided by a factor of $q - q^{-1}$, where $q$ is the deformation parameter $q = e^{\hbar}$. In other words, when it exists, the associative lift of the classical $r$-matrix coincides with the quantum lift up to a factor. We give explicit conditions under which the associative lift exists, in terms of the combinatorial classification of classical $r$-matrices through Belavin-Drinfeld triples. The results of this paper illustrate nontrivial connections between the AYBE and both classical (Lie) and quantum bialgebras. \end{abstract} \section{Introduction} Let $A$ be an associative algebra (not necessarily with unit), and $r\in A\otimes A$. The {\it associative Yang-Baxter equation} (AYBE) for $r$ over $A$ is the equation \begin{equation} r^{13}r^{12}-r^{12}r^{23}+r^{23}r^{13}=0. \end{equation} This equation was introduced in \cite{Ag1,Ag2} and independently in \cite{P}. The algebraic meaning of this equation, explained in \cite{Ag1,Ag2}, is as follows. An associative algebra $A$ is called an infinitesimal bialgebra if it is equipped with a coassociative coproduct which is a derivation, i.e.~$\Delta(ab) =(a\otimes 1)\Delta(b)+\Delta(a)(1\otimes b)$. This notion was introduced by Joni and Rota \cite{JR} and is useful in combinatorics. 
Now, given an associative algebra $A$ and a solution $r\in A\otimes A$ of the AYBE, one can define a comultiplication by $\Delta(a)=(a\otimes 1)r-r(1\otimes a)$. (This comultiplication is a derivation for any $r$, and is coassociative if $r$ satisfies the AYBE). Thus, $(A,\Delta)$ is an infinitesimal bialgebra. One may also consider the AYBE with spectral parameter, \begin{multline} r^{13}(v_1-v_3)r^{12}(v_1-v_2)-r^{12}(v_1-v_2)r^{23}(v_2-v_3)+ \\ r^{23}(v_2-v_3)r^{13}(v_1-v_3)=0, \end{multline} where $r(v)$ is a meromorphic function of a complex variable $v$ with values in $A \otimes A$. Similarly to the usual (classical and quantum) YBE, this is essentially the same equation, since $r(v)$ is a solution of this equation if and only if $r(v \otimes 1 -1 \otimes v)$ satisfies the usual AYBE over $\hat A$, where $\hat A=A((v))$ is the algebra of Laurent series with coefficients in $A$, and the tensor products $\hat A \otimes \hat A (\otimes \hat A)$ are completed in some form. Further, one may consider a graded version of the AYBE. Namely, given a finite abelian group $\Gamma$, one may consider solutions $r$ of the usual AYBE over the $\Gamma$-graded algebra $A \otimes \mathbb C[\Gamma]$ which have total degree zero, i.e.~are sums of terms of bidegrees $(x,-x)\in \Gamma^2$. In this case, writing $r(u)$ for the part of $r$ of bidegree $(u,-u)$, we obtain the following equation for $r(u)$: \begin{equation} r^{13}(u+u')r^{12}(-u')-r^{12}(u)r^{23}(u+u')+r^{23}(u')r^{13}(u)=0. \end{equation} This equation, which one may call the graded AYBE, obviously makes sense for infinite groups $\Gamma$ as well; moreover, if $\Gamma$ is a complex vector space, then one may require $r(u)$ to be meromorphic in $u$. Finally, as before, one can add a spectral parameter. In this form, (with a 1-dimensional space $\Gamma$), the AYBE arose in the work of Polishchuk \cite{P}, in the study of $A_\infty$-categories attached to algebraic curves of arithmetic genus 1. 
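(For completeness, we record the standard one-line verification that the comultiplication $\Delta(a)=(a\otimes 1)r-r(1\otimes a)$ defined above is a derivation for an arbitrary $r \in A \otimes A$:

```latex
\begin{align*}
\Delta(ab) &= (ab\otimes 1)r - r(1\otimes ab)\\
&= (a\otimes 1)\bigl[(b\otimes 1)r - r(1\otimes b)\bigr]
   + \bigl[(a\otimes 1)r - r(1\otimes a)\bigr](1\otimes b)\\
&= (a\otimes 1)\Delta(b) + \Delta(a)(1\otimes b),
\end{align*}
```

since the cross terms $(a\otimes 1)r(1\otimes b)$ cancel; only coassociativity of $\Delta$ requires the AYBE.)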
More precisely, the equation considered in \cite{P} is the graded AYBE with spectral parameter over the algebra $A^{op}$ opposite to $A$. Using ordinary multiplication and making the substitution $v = v_1-v_2$ and $v' = v_2-v_3$, the equation takes the form \begin{multline} \label{aybe} r^{12}(-u',v) r^{13}(u+u',v+v') -r^{23}(u+u',v')r^{12}(u,v) \\ +r^{13}(u,v+v')r^{23}(u',v')=0, \end{multline} where $r$ is a meromorphic function of two complex variables with values in $A\otimes A$. From now on, the term ``AYBE'' will be reserved for this equation. One special case studied in \cite{P} is where $A=\mathrm{Mat}_n(\mathbb C)$ and AYBE solutions $r(u,v)$ also satisfy the unitarity condition \begin{equation} \label{aun} r^{21}(-u, -v) = -r(u,v), \end{equation} and have a Laurent expansion near $u=0$ of the form \begin{equation} \label{laur} r(u,v) = \frac{1 \otimes 1}{u} + r_0(v) + u r_1(v) + O(u^2). \end{equation} In this case, we will show that $r_0(v)$ satisfies the CYBE with spectral parameter, \begin{equation} \label{cybepint} [r_0(v)^{12}, r_0(v+v')^{13}] + [r_0(v)^{12}, r_0(v')^{23}] + [r_0(v+v')^{13}, r_0(v')^{23}] = 0, \end{equation} and the unitarity condition, \begin{equation} r(-v)^{21} = -r(v). \end{equation} This follows from the proof of the fact in \cite{P} that, even without the Laurent condition \eqref{laur}, when the limit $\overline{r}(v) = (\mathrm{pr} \otimes \mathrm{pr})r(u,v) \bigl|_{u = 0}$ exists ($\mathrm{pr}$ is the projection away from the identity to traceless matrices), it is a unitary solution of the CYBE with spectral parameter. In this paper we will classify all such matrices $r(u,v)$ where $r_0(v) = \frac{\tilde r + e^v \tilde r^{21}}{1 - e^v}$ for $\tilde r$ a constant solution of the CYBE \eqref{cybepint} satisfying $\tilde r + \tilde r^{21} = \sum_{i,j} e_{ij} \otimes e_{ji}$. These $\tilde r$ were classified by Belavin and Drinfeld in the 1980's \cite{BD} in terms of combinatorial objects known as Belavin-Drinfeld triples. 
We will discover that such matrices $r(u,v)$ correspond not to all Belavin-Drinfeld triples for $\tilde r$, but to a subclass of them, called {\it associative BD triples}. In particular, we answer negatively the question asked in Remark 1 of Section 5 of \cite{P}: whether any nondegenerate solution $\overline{r}(v) = (\mathrm{pr} \otimes \mathrm{pr}) r_0(v)$ of the CYBE can be ``lifted'' to such an AYBE solution $r(u,v)$ (see Remark \ref{rbarrem}). Also, for those triples which are associative, only special classical $r$-matrices from the usual continuous family are liftable. Recall that the Belavin-Drinfeld classification assigns to each BD triple a family of classical $r$-matrices parameterized by a finite-dimensional vector space of skew-symmetric diagonal components. We will demonstrate that there is only a finite number of choices of this component, up to scalars ($1 \otimes A + A \otimes 1$), which yield an $r$-matrix liftable to an associative $r$-matrix (this number is nonzero iff the BD triple is associative). More precisely, the condition for a classical $r$-matrix to be ``liftable'' to an associative $r$-matrix (a unitary solution of the AYBE) satisfying \eqref{laur} is that the map $T: \Gamma_1 \rightarrow \Gamma_2$ which defines the BD triple be ``liftable'' to a cyclic permutation $\tilde T$ of the set $\{e_1, \ldots, e_n\}$. Here, ``liftable'' means that $T(\alpha_i) = \alpha_j$ implies that $\tilde T(e_i) = e_j$ and $\tilde T(e_{i+1}) = e_{j+1}$. Here $\Gamma_1, \Gamma_2 \subset \Gamma= \{\alpha_1, \ldots, \alpha_{n-1}\}$. Then, the skew-symmetric diagonal component $s$ parameterizing the $r$-matrices for the BD triple is determined up to scalars by an explicit formula (the solution to ``associative'' versions of the equations for $s$ in the CYBE theory). There are evidently finitely many choices of the lift $\tilde T$ of $T$, and we define the BD triple to be ``associative'' if there exists at least one. 
We discover that such an associative $r$-matrix lifting a classical $r$-matrix $\tilde r$ is closely related to the Gerstenhaber-Giaquinto-Schack (GGS) quantization of $\tilde r$, i.e.~a special matrix $R_\mathrm{GGS}(u) = 1 + u\tilde r +O(u^2)$ which satisfies the QYBE, \begin{equation} R^{12} R^{13} R^{23} = R^{23} R^{13} R^{12}, \end{equation} and the Hecke condition, \begin{equation} (PR - q)(PR + q^{-1}) = 0, \end{equation} where $q = e^{u/2}$ and $P = \sum_{i,j} e_{ij} \otimes e_{ji}$ is the permutation matrix (see \cite{GGS},\cite{S}). Namely, $R_{GGS}(u)=(e^{u/2} - e^{-u/2}) \lim_{v\to -\infty}r(u,v) = (e^{u/2} - e^{-u/2}) [r(u,v) - \frac{e^v}{1-e^v} P]$, where in the limit $v$ is taken to be real. In fact, we can make the connection between the AYBE solution ``lifting'' a classical $r$-matrix (a solution of \eqref{cybe} and \eqref{nu}) and the QYBE solution ``quantizing'' the classical $r$-matrix more apparent by adding the spectral parameter $v$ back into the quantum $R$-matrix. That is, for any matrix $R = 1 + u r + O(u^2)$ satisfying the QYBE and the Hecke condition which is a function only of the parameter $q = e^{u/2}$, one can consider its ``Baxterization,'' \begin{equation}\label{rwsp} R_{\mathrm{B}}(q,v) = \frac{e^v}{1 - e^v} (q - q^{-1}) P + R(q), \end{equation} which is a solution of the QYBE and the Hecke condition which quantizes the CYBE solution with spectral parameter $\frac{r + e^v r^{21}}{1 - e^v}$ \cite{Mu}. Now, letting $R_{\mathrm{BGGS}}(q,v)$ be given by \eqref{rwsp} from $R_\mathrm{GGS}(q)$, we find that \begin{equation} \label{rRc} r(u,v) = \frac{R_{\mathrm{BGGS}}(q,v)}{q - q^{-1}}, \quad \mathrm{where\ } q = e^{u/2}. \end{equation} In particular, this implies that the matrix $r(u,v)$ satisfies not only the AYBE but also the QYBE. 
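As a quick consistency check (not part of the original argument), \eqref{rRc} reproduces the Laurent expansion \eqref{laur}: writing $q=e^{u/2}$, so that $q-q^{-1}=u+O(u^3)$, we get

```latex
\begin{align*}
r(u,v) \;=\; \frac{R_{\mathrm{BGGS}}(q,v)}{q-q^{-1}}
\;&=\; \frac{e^v}{1-e^v}\,P \;+\; \frac{1\otimes 1 + u\,\tilde r + O(u^2)}{u+O(u^3)}\\
\;&=\; \frac{1\otimes 1}{u} \;+\; \frac{e^v}{1-e^v}\,P + \tilde r \;+\; O(u),
\end{align*}
```

and $\frac{e^v}{1-e^v}P+\tilde r = \frac{\tilde r + e^v \tilde r^{21}}{1-e^v}$ by $\tilde r+\tilde r^{21}=P$, so $r_0(v)$ is exactly the trigonometric classical $r$-matrix considered above.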
\begin{rem} The fact that the ``associative r-matrix'' $r(u,v)$ specializes to both classical and quantum r-matrices is in good agreement with the remark in \cite{Ag1} (p.2) that infinitesimal bialgebras have nontrivial analogies and connections with both classical (Lie) and quantum bialgebras. At the same time, we must admit that we don't have a conceptual explanation for the validity of \eqref{rRc}. To find such an explanation seems to be an interesting problem. \end{rem} \begin{rem} In \cite{Mu} (p.9) Mudrov quantizes certain Belavin-Drinfeld triples that obey a slightly more restrictive version of the associative conditions than those considered in this paper. To do this, Mudrov uses the language of associative Manin triples. It appears that the theory of \cite{Mu} is parallel to \cite{Ag1,Ag2} and closely related to the content of this paper. \end{rem} \begin{rem} We expect that the results of this paper can be generalized to the case of all trigonometric solutions of the CYBE with spectral parameter (not just those obtained from constant CYBE solutions). In this case, we expect again that the classical $r$-matrices with spectral parameter can be lifted provided they satisfy the BD associativity conditions and the classical $r$-matrix for the triple is chosen correctly (in an analogous way to the case of constant $r$-matrices). Furthermore, for any given $r_0(v)$, the associative lift $r(u,v)$ should again be related to the quantum lift $R(q,v)$ by \begin{equation} R(q,v) = (q - q^{-1})r(u,v), \quad q=e^{u/2}. \end{equation} The matrix $R(q,v)$ should be given explicitly by a generalization of the GGS formula (there is already a different kind of explicit formula for $R(q,v)$ given in \cite{ESS} and \cite{ES}). \end{rem} \section{Background} \begin{ov} We formally introduce Belavin-Drinfeld triples, the AYBE as presented in \cite{P}, and the GGS Conjecture \cite{GGS}, proved in \cite{S}. 
\end{ov} \subsection{Belavin-Drinfeld triples} \label{bd} Let $(e_i), 1 \leq i \leq n,$ be the standard basis for $\mathbb C^n$. Set $\Gamma = \{e_i - e_{i+1}: 1 \leq i \leq n-1\}$. We will use the notation $\alpha_i := e_i - e_{i+1}$. Let $( , )$ denote the inner product on $\mathbb C^n$ having $(e_i)$ as an orthonormal basis. \begin{defe} \cite{BD} A Belavin-Drinfeld triple of type $A_{n-1}$ is given by $(T, \Gamma_1, \Gamma_2)$ where $\Gamma_1, \Gamma_2 \subset \Gamma$ and $T: \Gamma_1 \rightarrow \Gamma_2$ is a bijection, satisfying (a) $T$ preserves the inner product: $\forall \alpha, \beta \in \Gamma_1$, $(T \alpha,T \beta) = (\alpha, \beta)$. (b) $T$ is nilpotent: $\forall \alpha \in \Gamma_1, \exists k \in \mathbb N$ such that $T^k \alpha \notin \Gamma_1$. \end{defe} Let $\mathfrak g = \mathfrak{gl}(n)$ be the Lie algebra of complex $n \times n$ matrices. Define $\mathfrak h \subset \mathfrak g$ to be the abelian subalgebra of diagonal matrices and $\mathfrak{g}' \subset {\mathfrak g}$ to be the simple subalgebra of traceless matrices (i.e.~$\mathfrak{sl}(n)$). Elements of $\mathbb C^n$ define linear functions on $\mathfrak h$ by $\bigl ( \sum_i \lambda_i e_i \bigr) \bigl( \sum_i a_i\: e_{ii} \bigr)= \sum_i \lambda_i a_i$. Under this identification, we use $\Gamma$ as the set of simple roots of $\mathfrak{g}'$ with respect to the Cartan subalgebra $\mathfrak{h} \cap \mathfrak{g}'$. Let $P = \sum_{1 \leq i,j \leq n} e_{ij} \otimes e_{ji}$ be the Casimir element inverse to the standard form, $(B,C) = \mathrm{tr}(BC)$, on $\mathfrak g$. It is easy to see that $P (w \otimes v) = v \otimes w$, for any $v,w \in \mathbb C^n$, so we also call $P$ the permutation matrix. Let $P^0=\sum_i e_{ii} \otimes e_{ii}$ be the projection of $P$ to $\mathfrak h \otimes \mathfrak h$. 
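To illustrate the definition concretely (a minimal sketch, not part of the text; the particular triple, of type $A_2$ with $\Gamma_1 = \{\alpha_1\}$, $\Gamma_2 = \{\alpha_2\}$, $T(\alpha_1) = \alpha_2$, is our own choice of example), conditions (a) and (b) can be checked mechanically:

```python
import numpy as np

n = 3  # working in type A_{n-1} = A_2
E = np.eye(n)
alpha = {i: E[i - 1] - E[i] for i in range(1, n)}  # simple roots alpha_1, alpha_2

# Example triple: Gamma1 = {alpha_1}, Gamma2 = {alpha_2}, T(alpha_1) = alpha_2
Gamma1 = {1}
Gamma2 = {2}
T = {1: 2}  # T as a map on indices of simple roots

# (a) T preserves the inner product ( , )
for a in Gamma1:
    for b in Gamma1:
        assert np.isclose(alpha[T[a]] @ alpha[T[b]], alpha[a] @ alpha[b])

# (b) T is nilpotent: iterating T eventually leaves Gamma1
for a in Gamma1:
    cur, steps = a, 0
    while cur in Gamma1:
        cur = T[cur]
        steps += 1
        assert steps <= n  # must escape Gamma1 in finitely many steps
print("conditions (a) and (b) hold")
```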
For any Belavin-Drinfeld triple, consider the following equations for $s \in \mathfrak h \wedge \mathfrak h$: \begin{gather} \label{tr02} \forall \alpha \in \Gamma_1, \bigl[(\alpha - T \alpha) \otimes 1 \bigr] s = \frac{1}{2} [(\alpha + T \alpha) \otimes 1] P^0. \end{gather} Belavin and Drinfeld showed that solutions $r \in \mathfrak{g} \otimes \mathfrak g$ of the constant CYBE, \begin{equation} \label{cybe} [r^{12}, r^{13}] + [r^{12}, r^{23}] + [r^{13}, r^{23}] = 0, \end{equation} satisfying \begin{equation} \label{nu} r + r^{21} = P = \sum_{i,j} e_{ij} \otimes e_{ji}, \end{equation} are given, up to inner isomorphism, by a discrete datum (the Belavin-Drinfeld triple) and a continuous datum (a solution $s \in {\mathfrak h} \wedge {\mathfrak h}$ of \eqref{tr02}). We now describe this classification. For $\alpha = e_i - e_j$, set $e_\alpha := e_{ij}$. Define $|\alpha| = |j - i|$. For any $Y \subset \Gamma$, set $\tilde Y = \{\alpha \in \text{Span}(Y) \mid \alpha = e_i - e_j, i < j\}$ (the set of positive roots of the semisimple subalgebra of $\mathfrak{g}'$ having $Y$ as its set of simple roots). In particular we will use the notation $\tilde \Gamma, \tilde \Gamma_1$, and $\tilde \Gamma_2$. We extend $T$ additively to a map $\tilde \Gamma_1 \rightarrow \tilde \Gamma_2$, i.e.~by $T(\alpha+\beta)=T \alpha +T \beta$. Whenever $T^k \alpha = \beta$ for $k \geq 1$, we say $\alpha \prec \beta$ and $O(\alpha,\beta) = k$, while $O(\beta, \alpha) = -k$. Clearly $\prec$ is a partial ordering on $\tilde \Gamma$. We will also use $\alpha \underline{\prec} \beta$ to denote that either $\alpha \prec \beta$ or $\alpha = \beta$. Suppose $T^k \alpha = \beta$ for $\alpha = e_i - e_j$ and $\beta = e_l - e_m$. Then there are two possibilities on how $T^k$ sends $\alpha$ to $\beta$, since $T^k$ induces an isomorphism of the segment of the Dynkin diagram corresponding to $\alpha$ onto the segment corresponding to $\beta$. 
Namely, either $T^k(\alpha_i) = \alpha_l$ and $T^k(\alpha_{j-1})=\alpha_{m-1}$, or $T^k(\alpha_i) = \alpha_{m-1}$ and $T^k(\alpha_{j-1}) = \alpha_l$. In the former case, call $T^k$ {\sl orientation-preserving on $\alpha$}, and in the latter, {\sl orientation-reversing on $\alpha$}. Let \begin{equation} C_{\alpha,\beta} = \begin{cases} 1, & \text{if $T^k$ reverses orientation on $\alpha$,} \\ 0, & \text{if $T^k$ preserves orientation on $\alpha$.} \end{cases} \end{equation} Now, we define \begin{gather} \label{adef} a = \sum_{\alpha \prec \beta} (-1)^{C_{\alpha,\beta} (|\alpha|-1)} (e_{-\alpha} \otimes e_\beta - e_{\beta} \otimes e_{-\alpha}), \\ \label{rdef} r_{st} = \frac{1}{2} \sum_i e_{ii} \otimes e_{ii} + \sum_{\alpha \in \tilde \Gamma} e_{-\alpha} \otimes e_{\alpha}, \quad r_{T, s} = s + a + r_{st}. \end{gather} Here $r_{st} \in \mathfrak{g} \otimes \mathfrak{g}$ is the standard solution of the CYBE satisfying $r_{st} + r_{st}^{21} = P$, and $r_{T, s}$ is the solution of the CYBE corresponding to the data $((\Gamma_1,\Gamma_2,T), s)$ ($r_{st}$ corresponds to the trivial BD triple with $s = 0$). It follows from \cite{BD} that \begin{prop}\cite{BD} Any solution $\tilde r \in \mathfrak{g} \otimes \mathfrak{g}$ of \eqref{cybe} and \eqref{nu} is equivalent to a solution $r_{T,s}$ given in \eqref{rdef} for some Belavin-Drinfeld triple and continuous parameter $s$, under an inner automorphism of $\mathfrak{g}$. \end{prop} \begin{defe} Solutions of \eqref{cybe} and \eqref{nu} will be called {\sl classical $r$-matrices}. \end{defe} \begin{exam} \label{gcge} For a given $n$, there are exactly $\phi(n)$ BD triples ($\phi$ is the Euler $\phi$-function) in which $|\Gamma_1| + 1 = |\Gamma|$ \cite{GG}. These are called {\sl generalized Cremmer-Gervais} triples (the usual Cremmer-Gervais triple is the special case $m=1$ in the following classification).
These are indexed by $\{m \in \{1, \ldots, n\} \mid \text{gcd}(n,m) = 1 \}$, and given by $\Gamma_1 = \Gamma \setminus \{\alpha_{n-m}\}$, $\Gamma_2 = \Gamma \setminus \{\alpha_m\}$, and $T(\alpha_i) = \alpha_{\text{Res}(i+m)}$, where $\text{Res}$ gives the residue modulo $n$ in $\{1,\ldots,n\}$. For these triples, there is a unique $s$ taken to lie in $\mathfrak{g}' \wedge \mathfrak{g}'$, given by $s^{ii}_{ii} = 0, \forall i$, and $s_{ij}^{ij} = \frac{1}{2} - \frac{1}{n}\text{Res}(\frac{j-i}{m}), i \neq j$ (this is easy to verify directly and is also given in \cite{GG}). We will see that this formula for $s$ generalizes to formula \eqref{stp} in the associative case. \end{exam} \subsection{The CYBE and AYBE with parameters} The CYBE takes the following form ``with spectral parameter'' over a Lie algebra $\mathfrak{a}$: \begin{equation} \label{cybep} [r^{12}(x), r^{13}(x+y)] + [r^{12}(x), r^{23}(y)] + [r^{13}(x+y), r^{23}(y)] = 0. \end{equation} Here $r(v)$ is a meromorphic function of $v$ with values in $\mathfrak{a} \otimes \mathfrak{a}$. A solution $r$ is called {\sl unitary} if \begin{equation} \label{cun} r(v) = -r^{21}(-v). \end{equation} \begin{lemmadef} If $r$ is a constant solution of the CYBE, then $\frac{r + e^v r^{21}}{1 - e^v}$ is a unitary solution of the CYBE with spectral parameter $v$. For any constant solution $r$, define \begin{equation} \label{adsp} \hat r(v) = \frac{r + e^v r^{21}}{1 - e^v}. \end{equation} \end{lemmadef} \begin{proof} This follows immediately. \end{proof} The version of the AYBE we consider has the form \begin{multline} r^{12}(-u',v) r^{13}(u+u',v+v') - r^{23}(u+u',v') r^{12}(u,v) \\ + r^{13}(u,v+v') r^{23}(u',v') = 0. \end{multline} The {\sl unitarity} condition is \begin{equation} r^{21}(-u,-v) = -r(u,v). 
\end{equation} Unitary AYBE solutions give rise to CYBE solutions in the following way: \begin{prop} \cite{P}\label{ac} Let $A = \mathfrak{g}$ and $\mathrm{pr}: \mathfrak{g} \rightarrow \mathfrak{g}'$ the orthogonal projection with respect to the standard form, $(B,C) = \mathrm{tr}(BC)$. If $r(u,v)$ is a unitary solution of the AYBE, and the limit $\overline{r}(v) = [(\mathrm{pr} \otimes \mathrm{pr}) r(u,v)]|_{u=0}$ exists, then $\overline{r}(v)$ is a unitary solution of the CYBE with spectral parameter. \end{prop} \begin{proof} We repeat the proof of \cite{P} (since it is short and we will use \eqref{acy} later). First note that the unitarity of $\overline{r}$ follows immediately from the unitarity of $r$. Substituting $r^{21}(-u,-v) = -r^{12}(u,v)$, we rewrite the AYBE as \begin{multline} -r^{21}(u', -v) r^{13}(u+u', v+v') + r^{23}(u+u', v') r^{21}(-u,-v) \\ + r^{13}(u,v+v') r^{23}(u', v') = 0. \end{multline} We permute the first two components, yielding \begin{multline} -r^{12}(u', -v) r^{23}(u+u', v+v') + r^{13}(u+u', v') r^{12}(-u, -v) \\ + r^{23}(u, v+v') r^{13}(u', v') = 0. \end{multline} This resembles the AYBE with the order of each product reversed (which we seek). To obtain it, we make the linear change of variables given by $u \mapsto u', u' \mapsto u, v \mapsto -v$, and $v' \mapsto v + v'$: \begin{multline} r^{13}(u+u', v+v') r^{12}(-u', v) - r^{12}(u, v)r^{23}(u+u', v') \\ + r^{23}(u', v') r^{13}(u, v+v') = 0. \end{multline} Subtracting this from the AYBE, we get \begin{multline} \label{acy} [r^{12}(-u', v), r^{13}(u+u', v+v')] + [r^{12}(u,v), r^{23}(u+u', v')] \\ + [r^{13}(u, v+v'), r^{23}(u', v')] = 0. \end{multline} Applying $\mathrm{pr} \otimes \mathrm{pr} \otimes \mathrm{pr}$, we get the same equation with $(\mathrm{pr} \otimes \mathrm{pr}) r$ replacing $r$, and then we may take the limit $u \rightarrow 0$ to find that $\overline{r}(v)$ satisfies the CYBE with spectral parameter. 
\end{proof} This warrants the \begin{defe} Solutions of \eqref{aybe} and \eqref{aun} are called {\sl associative $r$-matrices}. \end{defe} In the case we consider, $r(u,v)$ has a Laurent expansion at $u=0$ of the form \eqref{laur}, and this result can be strengthened: \begin{lemma} If $r(u,v)$ has a Laurent expansion at $u=0$ of the form \begin{equation*} r(u,v) = \frac{1 \otimes 1}{u} + r_0(v) + u r_1(v) + O(u^2) \end{equation*} and is an associative $r$-matrix, then $r_0(v)$ is a solution of the CYBE with spectral parameter. \end{lemma} \begin{proof} This follows from \eqref{acy}, since $1$ commutes with anything. \end{proof} \subsection{The GGS quantization} Given any Belavin-Drinfeld triple $(\Gamma, \Gamma', T)$ and any matrix $s \in {\mathfrak h} \wedge {\mathfrak h}$ satisfying \eqref{tr02}, the CYBE solution $r_{T, s}$ is one-half the linear term in $\hbar$ of a quantum $R$-matrix $R_\mathrm{GGS} = 1 + 2 r_{T, s} \hbar + O(\hbar^2)$, which satisfies the quantum Yang-Baxter equation, \begin{equation} \label{qybe} R^{12} R^{13} R^{23} = R^{23} R^{13} R^{12}, \end{equation} and the Hecke relation, \begin{equation}\label{hecke} (PR - q)(PR + q^{-1}) = 0, \quad q = e^\hbar. \end{equation} The matrix $R_\mathrm{GGS}$ is given by a simple (yet not fully understood) formula proposed by Gerstenhaber, Giaquinto, and Schack \cite{GGS} in 1993: \begin{gather} R_{st} = q \sum_{i} e_{ii} \otimes e_{ii} + \sum_{i \neq j} e_{ii} \otimes e_{jj} + (q - q^{-1}) \sum_{\alpha > 0} e_{-\alpha} \otimes e_{\alpha}, \\ R_\mathrm{GGS} = q^s \biggl(R_{st} + (q - q^{-1}) \sum_{\alpha \prec \beta} (-1)^{C_{\alpha, \beta} (|\alpha| - 1)} [q^{-C_{\alpha, \beta}(|\alpha| - 1) - \mathrm{PS}(\alpha, \beta)} e_{-\alpha} \otimes e_{\beta} \nonumber \\ - q^{C_{\alpha, \beta}(|\alpha| - 1) + \mathrm{PS}(\alpha, \beta)} e_{\beta} \otimes e_{-\alpha}]\biggr) q^s, \label{ggsf} \end{gather} where $\mathrm{PS}(\alpha, \beta)$ is defined as follows. 
First, we define the relation $\alpha \lessdot \beta$ for $\alpha > 0, \beta > 0$ to mean that, writing $\alpha = e_i - e_j$ and $\beta = e_k - e_l$, we have $j = k$. In other words, considering $\alpha$ to be the line segment with endpoints $i$ and $j$ and $\beta$ the line segment with endpoints $k$ and $l$ on the real line, we have that $\alpha$ lies adjacent to $\beta$ on the left. Now, let $[\mathrm{statement}] = 1$ if ``statement'' is true, and $[\mathrm{statement}] = 0$ otherwise. Then, $\mathrm{PS}$ is given by \begin{multline} \mathrm{PS}(\alpha, \beta) = \frac{1}{2} \bigl([\alpha \lessdot \beta] + [\beta \lessdot \alpha]\bigr) + [\exists \gamma \mid \alpha \prec \gamma \prec \beta, \alpha \lessdot \gamma] \\ + [\exists \gamma \mid \alpha \prec \gamma \prec \beta, \gamma \lessdot \alpha]. \end{multline} \begin{thm}[The GGS Conjecture] \cite{GGS}, \cite{S} The element $R_\mathrm{GGS}$ satisfies the QYBE \eqref{qybe} and the Hecke condition \eqref{hecke}. \end{thm} \section{Statement of the main theorem} \begin{ov} In this section, we state the main theorem, which gives (1) the associativity conditions under which a classical $r$-matrix can be lifted to an associative $r$-matrix, (2) the formula relating the associative $r$-matrix to the GGS quantum $R$-matrix, and (3) a new, explicit formula for the GGS $R$-matrix in this case (which is a generalization of Giaquinto's formula for the GGS $R$-matrix in the case of generalized Belavin-Drinfeld triples). \end{ov} \begin{defe} \label{atd} Call a triple an {\sl associative triple} if (i) the triple preserves orientation, and (ii) there exists a cyclic permutation $\tilde T$ of $\{1,\ldots,n\}$ such that $T(\alpha_i) = \alpha_j$ implies $\tilde T(i) = j$ and $\tilde T(i+1) = j+1$. Such permutations are called {\sl compatible permutations}. The structure $(\Gamma_1, \Gamma_2, T, \tilde T)$ is called an {\sl associative structure}. 
Given such a structure, we define for each $i,j \in \{1, \ldots, n\}$ the function $O(i,j)$ to be the least nonnegative integer such that $\tilde T^{O(i,j)}(i) = j$. \end{defe} \begin{ntn} We will use the notation $s_0 = (\mathrm{pr} \otimes \mathrm{pr}) s$ in the future. \end{ntn} \begin{thm}\label{mt} (1a) A classical $r$-matrix $\hat r_{T,s}$ is the zero-degree term $r_0(v)$ of the Laurent expansion \eqref{laur} of an associative $r$-matrix $r(u,v)$ iff $(\Gamma_1, \Gamma_2, T)$ is associative, and $s_0 = (\mathrm{pr} \otimes \mathrm{pr}) s$ is given by the formula \begin{equation} \label{stp} s_0 = \sum_{i \neq j} \bigl( \frac{1}{2} - \frac{O(i,j)}{n} \bigr) e_{ii} \otimes e_{jj}, \end{equation} (1b) or equivalently satisfies \begin{equation} \label{tra} [(e_i - e_{\tilde T(i)}) \otimes 1] s_0 = \frac{1}{2} [(e_i + e_{\tilde T(i)}) \otimes 1] [(\mathrm{pr} \otimes \mathrm{pr}) P^0]. \end{equation} (2a) In this case, there is a unique associative $r$-matrix having a Laurent expansion of the form \begin{equation} \label{tle} \frac{1 \otimes 1}{u} + \hat r_{T,s}(v) + O(u), \end{equation} and it is given by \begin{equation} \label{afgq} r(u,v) = \frac{e^v}{1 - e^v} P + \frac{R_{\mathrm{GGS}}(e^{u/2})}{e^{u/2} - e^{-u/2}}, \end{equation} where $R_{\mathrm{GGS}}(e^{u/2})$ is the GGS matrix for the same $T$ and $s$ as $r_{T,s}$, replacing $q$ by $e^{u/2}$. (2b) Using the Baxterization $R_{\text{BGGS}}(q,v)$, we get \begin{equation} \label{bafgq} r(u,v) = \frac{R_{\mathrm{BGGS}}(e^{u/2}, v)}{e^{u/2} - e^{-u/2}}. 
\end{equation} (3) The matrix $R_\mathrm{GGS}(q)$ occurring in \eqref{afgq} is given by \begin{multline} \label{gqf} R_{\mathrm{GGS}}(q) = q^{s - s_0} \biggl[ \sum_{i,j} q^{1 - 2 O(i,j)/n} e_{ii} \otimes e_{jj} \\ + (q - q^{-1}) \biggl( \sum_{\alpha > 0} e_{-\alpha} \otimes e_\alpha + \sum_{\alpha \prec \beta} \bigl( q^{-2O(\alpha,\beta)/n} e_{-\alpha} \otimes e_\beta - q^{2O(\alpha, \beta)/n} e_{\beta} \otimes e_{-\alpha} \bigr) \biggr) \biggr] q^{s-s_0} \end{multline} for any associative structure $(\Gamma_1, \Gamma_2, T, \tilde T)$, where $s_0 = (\mathrm{pr} \otimes \mathrm{pr}) s$ is determined by \eqref{stp}. \end{thm} \begin{rem} \label{rbarrem} One can also classify associative $r$-matrices where we require only that the limit $\overline{r}(v) = (\mathrm{pr} \otimes \mathrm{pr})(r(u,v)) \bigl|_{u=0}$ exist and satisfy $\overline{r}(v) = \hat {\tilde r}$ for some classical $r$-matrix $\tilde r$ over $\mathfrak{g}'$. When the Laurent condition \eqref{laur} holds, all such lifts of $\overline{r}$ (without fixing $r_0$) are equal to $e^{cuv} r'(u,v)$, for $r'(u,v)$ an associative $r$-matrix classified in Theorem \ref{mt} and $c \in \mathbb C$. To see this, first note that the BD associativity and $s_0$ conditions must still be satisfied, because our proof of this part only uses the projection of the AYBE away from scalars. 
(This observation answers negatively the question asked in Remark 1 of Section 5 in \cite{P}: whether, for any unitary nondegenerate $\mathfrak{g}' \otimes \mathfrak{g}'$-valued CYBE solution $\overline{r}(v)$ with spectral parameter, there exists a unitary AYBE solution $r(u,v)$ having a Laurent expansion at $u=0$ of the form $r(u,v) = \frac{1\otimes 1}{u} + r_0(v) + O(u)$, such that $(\mathrm{pr} \otimes \mathrm{pr}) r_0(v) = \overline{r}(v).$) Then, the result follows from the fact (using Remark 2 in Section 5 of \cite{P}) that any two associative $r$-matrices $r(u,v), r'(u,v)$ with Laurent expansions of the form \eqref{laur} such that $\overline{r}(v) = \overline{r'}(v)$ are related by $r_0(v) - r'_0(v) = (1 \otimes 1)cv + \Phi^1 - \Phi^2$ where $c \in \mathbb C$ and $\Phi \in {\mathfrak h}$ satisfies $(\alpha, \Phi) = (T \alpha, \Phi), \forall \alpha \in \Gamma_1$. In this paper, we focus on lifts of $r_0$ when it is a classical $r$-matrix, rather than lifting just $\overline{r}$, since the result is cleaner. \end{rem} \begin{rem} Equation \eqref{tra} can be thought of as the ``associative'' version of \eqref{tr02} classifying classical $s$; it just so happens in the associative case that these equations completely determine $s_0$ by the choice of $\tilde T$. \end{rem} \begin{rem} In the case of generalized Cremmer-Gervais triples (see Example \ref{gcge}), \eqref{gqf} is the formula found by Giaquinto \cite{S}. Indeed, a generalized Cremmer-Gervais triple has a unique associative structure, under which \eqref{stp} becomes the formula given in Example \ref{gcge}. \end{rem} \begin{rem} Note that, given any associative choice of $T$, there are finitely many possible compatible choices of $\tilde T$ (depending on $T$, and up to $(n-1)!$ for the case of $T$ trivial). Hence, the space of associative $r$-matrices for each associative triple is parameterized by a finite parameter ($\tilde T$) and a continuous parameter (the choice of $s - s_0$).
The matrix $s-s_0$ can be any element in $\Lambda^2 {\mathfrak h} \cap (1 \otimes {\mathfrak h} + {\mathfrak h} \otimes 1)$ satisfying $[(\alpha - T \alpha) \otimes 1] (s-s_0) = 0, \forall \alpha \in \Gamma_1$. In other words, $s - s_0 = \Phi \otimes 1 - 1 \otimes \Phi$ for $\Phi \in {\mathfrak h}$ any element satisfying $(\alpha, \Phi) = (T \alpha, \Phi), \forall \alpha \in \Gamma_1$. \end{rem} \section{Proof of the main theorem (Theorem \ref{mt})} \begin{ov} We prove the parts of Theorem \ref{mt} in the reverse order. Thus, in the first subsection, we prove part (3), namely the explicit formula for $R_\mathrm{GGS}$ for associative BD triples where $s_0 = (\mathrm{pr} \otimes \mathrm{pr}) s$ is given by \eqref{stp} for a choice of a compatible permutation $\tilde T$. Then, in the second subsection, we prove parts (2a) and (2b) of Theorem \ref{mt}, namely verifying that $r(u,v) = \frac{R_\mathrm{GGS}(e^{u/2})}{e^{u/2} - e^{-u/2}} + \frac{e^v}{1 - e^v} P$ in fact satisfies the AYBE and unitarity conditions and lifts the classical $r$-matrix, and is the unique such element. Finally, in the third subsection, we prove part (1) of Theorem \ref{mt}, that the BD associativity and $s_0$-compatibility conditions are necessary and sufficient for the lift to exist (necessity is all that will remain). \end{ov} \subsection{Proof of Theorem \ref{mt}, part (3): the generalization of Giaquinto's formula} \begin{ov} We prove the generalization of Giaquinto's formula \eqref{gqf} via a straightforward computation. \end{ov} First, we prove a lemma which gives a new formula for the combinatorial constant $\mathrm{PS}(\alpha, \beta)$: \begin{lemma} For any $\alpha \prec \beta$, the number $\mathrm{PS}(\alpha, \beta) = 1 - (\alpha \otimes \beta) s$.
\end{lemma} \begin{proof} Note that, for $\beta = T^k \alpha$ ($k \geq 1$), \begin{multline} (\alpha \otimes \beta) s = \sum_{i = 0}^{k-1} [(T^i \alpha - T^{i+1} \alpha) \otimes \beta] s = \sum_{i = 0}^{k-1} \frac{1}{2} (T^i \alpha + T^{i+1} \alpha, \beta) \\ = \frac{1}{2} (\alpha, \beta) + \frac{1}{2} (\beta, \beta) + \sum_{i = 1}^{k-1} (T^i \alpha, \beta) \\ = 1 - \frac{1}{2} \bigl([\alpha \lessdot \beta] + [\beta \lessdot \alpha] \bigr) - \bigl([\exists \gamma \mid \alpha \prec \gamma \prec \beta, \gamma \lessdot \beta] + [\exists \gamma \mid \alpha \prec \gamma \prec \beta, \beta \lessdot \gamma]\bigr), \end{multline} where we used $(\beta, \beta) = 2$; this proves the desired result. \end{proof} \begin{cor} The matrix $R_\mathrm{GGS}$ can be written as \begin{multline} \label{nggs} (q - q^{-1}) \biggl[\sum_{\alpha} e_{-\alpha} \otimes e_\alpha + \sum_{\alpha = e_i - e_j \prec \beta = e_k - e_l} (-1)^{C_{\alpha, \beta}(|\alpha| - 1)} \\ \bigl(q^{-C_{\alpha, \beta}(|\alpha| - 1) + s_{ik}^{ik} + s_{jl}^{jl} - 1} e_{-\alpha} \otimes e_{\beta} - q^{C_{\alpha, \beta}(|\alpha| - 1) + 1 - s_{ik}^{ik} - s_{jl}^{jl}} e_{\beta} \otimes e_{-\alpha}\bigr)\biggr] + q^{\sum_i e_{ii} \otimes e_{ii} + 2s}. \end{multline} \end{cor} \begin{proof} This follows immediately by expanding $(\alpha \otimes \beta) s = s_{ik}^{ik} + s_{jl}^{jl} - s_{il}^{il} - s_{jk}^{jk}$ for $\alpha = e_i - e_j$ and $\beta = e_k - e_l$, and noticing that $q^s (e_{-\alpha} \otimes e_\beta) q^s = q^{s_{jk}^{jk} + s_{il}^{il}} e_{-\alpha} \otimes e_\beta$ in this case. \end{proof} \begin{proof}[Proof of Theorem \ref{mt}, part (3).] In the associative case where $s_0 = (\mathrm{pr} \otimes \mathrm{pr})s$ is given by \eqref{stp} for a compatible permutation $\tilde T$, we can simplify \eqref{nggs}. Let us assume first that $s_0 = s \in \Lambda^2 \mathfrak{g}'$. Then, for each $\alpha = e_i - e_j \prec \beta = e_k - e_l$, we have $C_{\alpha, \beta} = 0$ and $s_{ik}^{ik} = s_{jl}^{jl} = \frac{1}{2} - \frac{O(i,k)}{n}$.
So, we rewrite \eqref{nggs} as follows: \begin{multline} R_\mathrm{GGS} = \sum_{i,j} q^{1-2O(i,j)/n} e_{ii} \otimes e_{jj} \\ + (q-q^{-1}) \biggl[\sum_{\alpha > 0} e_{-\alpha} \otimes e_\alpha + \sum_{\alpha \prec \beta} \bigl(q^{-2O(\alpha, \beta)/n} e_{-\alpha} \otimes e_\beta - q^{2O(\alpha, \beta)/n} e_\beta \otimes e_{-\alpha}\bigr) \biggr]. \end{multline} In the general case where $s$ is not necessarily equal to $s_0$, the result follows from the fact, evident in \eqref{ggsf}, that $R_\mathrm{GGS} = q^{s-s'} R' q^{s-s'}$, where $R'$ is the GGS matrix for the same triple as $R_\mathrm{GGS}$, but replacing $s$ with $s'$. \end{proof} \subsection{Proof of Theorem \ref{mt}, parts (2a) and (2b): the GGS $R$-matrix satisfies the AYBE with slight modifications} \begin{ov} We verify that the $r(u,v)$ given by \eqref{afgq} and \eqref{gqf} satisfies the AYBE and the unitarity condition by a direct computation using BD combinatorics. A lemma from \cite{P} proves that $r(u,v)$ is uniquely determined by $r_0$ in \eqref{laur}, and it is easy to check that $r(u,v)$ lifts $r_{T,s}$ (i.e.~that $r_0 = \hat r_{T,s}$). These results prove part (2a) of Theorem \ref{mt}, from which (2b) immediately follows. As in the previous subsection, most of the work reduces to the case where $s = s_0 \in \Lambda^2 \mathfrak{g}'$. \end{ov} \begin{lemma} \label{ruvc} Fix some associative structure $(\Gamma_1, \Gamma_2, T, \tilde T)$ and choice of $s$ such that $s_0$ is given by \eqref{stp}. Let $r(u,v)$ be given by \eqref{afgq}. Let $r_0(v)$ be the classical $r$-matrix which is the degree-zero term in the Laurent expansion of $r(u,v)$ in $u$ at $u=0$. Then $r_0(v) = \hat r_{T,s}(v)$. \end{lemma} \begin{proof} This follows from a simple computation using Lemma \ref{gruvl} below. Alternatively, it follows from the connection between $R_\mathrm{GGS}$ and $r_{T,s}$.
\end{proof} \begin{lemma} \label{gruvl} Set $s - s_0 = \Phi^1 - \Phi^2$ where $\Phi \in {\mathfrak h}$ satisfies $(\alpha, \Phi) = (T \alpha, \Phi)$ for any $\alpha \in \Gamma_1$. Using \eqref{gqf}, we can write the matrix $r(u,v)$ given by \eqref{afgq} as follows: \begin{multline} r(u,v) = \frac{e^v}{1 - e^v} P + e^{-\Phi^2 u} \biggl[ \frac{1}{1 - e^{-u}} \sum_{i,j} e^{-O(i,j)u/n} e_{ii} \otimes e_{jj} \\ + \sum_{\alpha > 0} e_{-\alpha} \otimes e_\alpha + \sum_{\alpha \prec \beta} \bigl( e^{-O(\alpha,\beta) u/n} e_{-\alpha} \otimes e_\beta - e^{O(\alpha,\beta) u/n} e_{\beta} \otimes e_{-\alpha} \bigr) \biggr] e^{\Phi^1 u}. \label{gruv} \end{multline} \end{lemma} \begin{proof} By the definition of $s_0$, we may write $s - s_0 = \Phi^1 - \Phi^2$. The fact that $\Phi \in {\mathfrak h}$ satisfies $(\alpha, \Phi) = (T \alpha, \Phi)$ for any $\alpha \in \Gamma_1$ follows directly from the fact that $[(\alpha - T \alpha) \otimes 1] (s - s_0) = 0$. Now, it follows that $e^{(\Phi^1 + \Phi^2) u}$, or simply $\Phi^1 + \Phi^2$, commutes with $R_\mathrm{GGS}$. Together with the fact that $e^t P e^t = P$ for $t$ any skew-symmetric matrix, we find that \begin{equation} r(u,v) = \frac{e^v}{1 - e^v} P + e^{-\Phi^2 u} \frac{R^0_\mathrm{GGS}(e^{u/2})}{e^{u/2} - e^{-u/2}} e^{\Phi^1 u}, \end{equation} where $R^0_\mathrm{GGS}$ is the GGS matrix quantizing $r_{T,s_0}$. Now \eqref{gruv} follows from \eqref{gqf} with a small amount of manipulation. \end{proof} \begin{ntn} For any $A \otimes A$-valued function $t$ of $u$ and $v$ (possibly constant in one or both variables), we will denote by $AYBE(t)$ the LHS of \eqref{aybe}. \end{ntn} \begin{lemma} \label{p12l} Suppose that $r(u,v)$ is a solution of the AYBE and $\Phi \in \mathfrak{h}$ is any diagonal matrix such that $\Phi^1 + \Phi^2$ commutes with $r(u,v)$. The element \begin{equation}\label{ors} r'(u,v) = e^{-\Phi^2 u} r(u,v) e^{\Phi^1 u} \end{equation} also satisfies the AYBE.
If, in addition, $r(u,v)$ is unitary, then so is $r'(u,v)$. \end{lemma} \begin{proof} It is clear that $r'(u,v)$ satisfies the unitarity condition iff $r(u,v)$ does. So, we show that $r'(u,v)$ satisfies the AYBE if $r(u,v)$ does. Since $[\Phi^1 + \Phi^2, r] = 0$, it follows that $e^{(\Phi^1 + \Phi^2) z}$ commutes with $r$ for any complex variable $z$. We make use of this fact in the following computation, setting $t(u,v) = e^{-\Phi^2 u} r(u,v) e^{\Phi^1 u}$: \begin{multline} t^{12}(-u', v)t^{13}(u+u', v+v') \\ = e^{\Phi^2 u'} r^{12}(-u', v) e^{-\Phi^1 u' - \Phi^3(u+u')} r^{13}(u+u', v+v') e^{\Phi^1 (u+u')} \\ = e^{\Phi^2 u' - \Phi^3 u} r^{12}(-u', v) r^{13}(u+u', v+v') e^{\Phi^1 u - \Phi^3 u'}, \end{multline} and similarly, \begin{gather} t^{23}(u+u', v')t^{12}(u, v) = e^{\Phi^2 u' - \Phi^3 u} r^{23}(u+u', v') r^{12}(u, v) e^{\Phi^1 u - \Phi^3 u'}, \mathrm{and}\\ t^{13}(u, v+v') t^{23}(u', v') = e^{\Phi^2 u' - \Phi^3 u} r^{13}(u, v+v') r^{23}(u', v') e^{\Phi^1 u - \Phi^3 u'}. \end{gather} Hence, it follows that $AYBE(t) = e^{\Phi^2 u' - \Phi^3 u} AYBE(r) e^{\Phi^1 u - \Phi^3 u'}$. Hence, $t$ satisfies the AYBE iff $r$ does, proving the desired result. \end{proof} \begin{lemma} \label{nypl} The element $r(u,v) = y(u) + \frac{e^v}{1 - e^v} P$ satisfies the AYBE and the unitarity condition, where $y(u)$ is any solution of the AYBE such that $y(-u) + y^{21}(u) = P$. \end{lemma} \begin{proof} Using facts of the form $P^{12} t^{13} = t^{23} P^{12}$ which follow because $P$ is the permutation matrix, we compute \begin{multline} AYBE(y + f(v) P) = AYBE(y) + f(v+v') [y^{12}(-u') + y^{21}(u')] P^{13} \\ + [f(v)f(v+v') - f(v')f(v) + f(v+v')f(v')]P^{12} P^{13} \\ = [f(v+v') + f(v)f(v+v') - f(v') f(v) + f(v+v')f(v')] P^{12} P^{13}. \end{multline} So, the AYBE is satisfied for $y + f(v) P$, where $f$ is any function satisfying the relation \begin{equation} f(v+v') = \frac{f(v)f(v')}{1 + f(v) + f(v')}. 
\end{equation} We can rewrite this as \begin{equation} f(v+v')^{-1} = f(v)^{-1} + f(v')^{-1} + f(v)^{-1} f(v')^{-1}, \end{equation} which is the same as the condition that $g(v) = f(v)^{-1} + 1$ satisfies $g(v+v') = g(v) g(v')$. So the solutions are $g(v) = e^{Kv}$ for $K \in \mathbb C$, and in particular, when $K = -1$, we find $f(v) = \frac{e^v}{1 - e^v}$. Furthermore, provided $K \neq 0$, we evidently have $\frac{1}{e^{Kv} - 1} + \frac{1}{e^{-Kv} - 1} = -1$, so that $y(u) + \frac{e^{-Kv}}{1 - e^{-Kv}} P$ satisfies the unitarity condition. This concludes the proof. \end{proof} \begin{rem} This lemma essentially shows how to ``Baxterize'' AYBE solutions. As mentioned in the introduction, we know that the same procedure works for QYBE solutions using a result from \cite{Mu}. \end{rem} \begin{lemma}\label{ypl} The element \begin{multline} y(u) = \frac{1}{1 - e^{-u}} \sum_{i,j} e^{-O(i,j)u/n} e_{ii} \otimes e_{jj} \\ + \sum_{\alpha > 0} e_{-\alpha} \otimes e_\alpha + \sum_{\alpha \prec \beta} \bigl( e^{-O(\alpha,\beta) u/n} e_{-\alpha} \otimes e_\beta - e^{O(\alpha,\beta) u/n} e_{\beta} \otimes e_{-\alpha} \bigr) \label{yeq} \end{multline} satisfies the AYBE. \end{lemma} \begin{proof} We will compute the coefficients $AYBE(y)_{ikm}^{jlp}$ and see that they are all zero, so that $y$ satisfies the AYBE. Note that we need only check those indices for which $i+k+m = j+l+p$, because all nonzero coefficients in the formula for $AYBE(y)$ obey this relation, and the product or sum of matrices whose nonzero coefficients obey this relation yields another matrix of the same form. First, let us compute the coefficient $AYBE(y)_{ikm}^{jlp}$ for $i \neq j, k \neq l,$ and $m \neq p$, subject to the relation $i+k+m = j+l+p$.
We have \begin{multline} \label{aby} AYBE(y)_{ikm}^{jlp} = y_{ik}^{\_l}(-u') y_{\_m}^{jp}(u+u') \\ - y_{km}^{\_p}(u) y_{i\_}^{jl}(u+u') + y_{im}^{j\_}(u) y_{k\_}^{lp}(u'), \end{multline} where the underscore means that the index is deduced from the other three by setting equal the sums of the upper and lower indices. In each product of two coefficients, the two underscores are equal. We claim that either two or none of the three terms on the right-hand side are nonzero, and that when there are two nonzero terms, they cancel. To see this, set $\alpha = e_i - e_j, \beta = e_k - e_l$, and $\gamma = e_m - e_p$. Suppose that $|\alpha| = |i-j| > |\beta| = |k - l|$ and $|\alpha| > |\gamma|$. Then if the first term in the RHS of \eqref{aby} is nonzero, it follows that $-\alpha = T^{c} \beta + T^{d} \gamma$, for some $c, d \in \mathbb Z$, and furthermore that exactly one of the other two terms is nonzero: the second term if $|c| < |d|$, the third term if $|d| < |c|$, or if $c = d$ then the second term is nonzero iff $\alpha < 0$ (and the third term iff $\alpha > 0$). Conversely, if the second or third term is nonzero, then the first term must be nonzero with the given conditions holding. Hence either two or zero terms are nonzero. Furthermore, two nonzero terms have values $\pm e^{du + (d-c) u'}$, with the positive sign for the first term and the negative for the second or third term, so they cancel. In cases where $|\beta|$ or $|\gamma|$ is the largest among $|\alpha|$, $|\beta|$, and $|\gamma|$, the same argument applies, and the right hand side is zero. Next, let us check that $AYBE(y)_{ikm}^{jlm} = 0$ for any $i \neq j, k \neq l,$ with $i+k = j+l$. We use \eqref{aby}, setting $p = m$. Set $\alpha = e_i - e_j$ and $\beta = e_k - e_l$. It is evident that the first two terms are each nonzero iff either $-\alpha \underline{\prec} \beta$ with $\beta > 0$, or $-\beta \prec \alpha$ with $\alpha > 0$. 
On the other hand, the last term is nonzero iff one of these two conditions is true, with the additional condition that, setting the underscores equal to $t$, either $-\alpha \underline{\prec} e_m - e_t \prec \beta$, or $-\beta \underline{\prec} e_t - e_m \prec \alpha$. Assuming that all three terms are nonzero, and using the notational abuse $O(\alpha, \beta) = O(\text{sign}(\alpha)\alpha, \text{sign}(\beta)\beta)$, we have \begin{multline} AYBE(y)_{ikm}^{jlm} = \frac{\text{sign}(\beta)}{1 - e^{-u-u'}} \bigl( e^{O(\alpha,\beta)u'/n - O(j,m)(u+u')/n} \\ - e^{-O(\alpha,\beta) u/n - O(k,m)(u+u')/n} \bigr) - e^{-O(-\alpha, e_m - e_t) u/n + O(e_m - e_t, \beta) u'/n}. \end{multline} Further assuming that $\alpha < 0$ and $-\alpha \prec \beta$, we write the first two terms of the RHS of \eqref{aby} as \begin{multline} \frac{e^{O(j,k) u'/n - O(j,m) (u+u')/n} - e^{-O(j,k) u/n - (O(j,m)/n + 1 - O(j,k)/n)(u+u')}}{1 - e^{-u-u'}} \\ = e^{O(m,k) u'/n - O(j,m) u/n} = e^{-O(-\alpha, e_m - e_t) u/n + O(e_m - e_t, \beta) u'/n}, \end{multline} so $AYBE(y)_{ikm}^{jlm} = 0$. On the other hand, if the third term of the RHS of \eqref{aby} is zero, and still assuming $\alpha < 0$, then we can write the first two terms (if nonzero) as \begin{equation} \frac{e^{O(j,k)u'/n - [O(j,k) + O(k,m)](u+u')/n} - e^{-O(j,k) u/n - O(k,m)(u+u')/n}}{1 - e^{-u-u'}} = 0, \end{equation} so again $AYBE(y)_{ikm}^{jlm} = 0$. Almost the same thing happens when $\alpha > 0, -\beta \prec \alpha$. So, in any case, we find that $AYBE(y)_{ikm}^{jlm} = 0$. By the same reasoning, we can see that $AYBE(y)_{ikm}^{jlp} = 0$ whenever either 1) $i = j, k \neq l$, and $m \neq p$ or 2) $k = l, i \neq j$, and $m \neq p$. Finally, we check that $AYBE(y)_{ikm}^{ikm} = 0$ for all $i, k,$ and $m$. 
We compute: \begin{multline} AYBE(y)_{ikm}^{ikm} = \frac{e^{O(i,k)u'/n - O(i,m)(u+u')/n}}{(1 - e^{u'})(1 - e^{-u-u'})} \\ - \frac{e^{-O(k,m)(u+u')/n - O(i,k)u/n}}{(1 - e^{-u-u'})(1 - e^{-u})} + \frac{e^{-O(i,m)u/n - O(k,m)u'/n}}{(1-e^{-u})(1-e^{-u'})} \\ = \frac{-e^{-u'+O(i,k)u'/n - O(i,m)(u+u')/n}(1 - e^{-u}) -e^{-O(k,m)(u+u')/n - O(i,k)u/n}(1 - e^{-u'})}{(1 - e^{-u})(1 - e^{-u'})(1 - e^{-u-u'})} \\+ \frac{e^{-O(i,m)u/n - O(k,m)u'/n}(1 - e^{-u-u'})}{(1 - e^{-u})(1 - e^{-u'})(1 - e^{-u-u'})}. \end{multline} Let $\delta = 1$ if $i \underline{\prec} k \underline{\prec} m$ in the $\tilde T$-ordering---that is, if $k$ lies between $i$ and $m$ under iteration of the cyclic permutation $\tilde T$ (or $k = i$ or $m$). Otherwise, set $\delta=0$. Let $\bar \delta$ denote the opposite of $\delta$, i.e.~$\bar \delta = 1 - \delta$. Now, we simplify this to: \begin{multline} AYBE(y)_{ikm}^{ikm}[(1-e^{-u})(1-e^{-u'})(1-e^{-u-u'})] \\ = -e^{-O(i,m)u/n - O(k,m) u'/n - u'\delta}(1 - e^{-u}) \\ -e^{-O(i,m)u/n - O(k,m)u'/n - u \bar \delta}(1 - e^{-u'}) + e^{-O(i,m) u/n - O(k,m) u'/n}(1 - e^{-u-u'}) \\ = e^{-O(i,m)u/n - O(k,m) u'/n} [-e^{-u' \delta} + e^{-u' \delta - u} - e^{-u \bar \delta} + e^{-u \bar \delta - u'} + 1 - e^{-u-u'}] = 0, \end{multline} so $AYBE(y)^{ikm}_{ikm} = 0$, independently of $\delta$. Hence, $y$ satisfies the AYBE. \end{proof} \begin{rem} In the preceding proof, the cancellation of terms in the first two parts of the proof (the ones involving some non-diagonal matrices) is actually a very special case of the pairing of so-called $T$-quadruples in \cite{S}. In \cite{S} these tools are developed much more extensively to expand the twist from \cite{ESS}, which is an arduous computation. \end{rem} \begin{lemma} \label{lau}\cite{P} Let $r$ be a solution of the AYBE with a Laurent expansion of the form \eqref{laur}. Then $r$ is uniquely determined by $r_0$. \end{lemma} \begin{proof} We repeat the computations of \cite{P}. 
First, note that, since the polynomials $u^k, (u')^k,$ and $(u+u')^k$ are linearly independent, $r_k$ is uniquely determined by $r_0$ and $r_1$ for all $k \geq 2$. Now, from the AYBE for $r$ we obtain the equation \begin{multline} \label{r01eq} r_0^{12}(v) r_0^{13}(v+v') - r_0^{23}(v') r_0^{12}(v) + r_0^{13}(v+v') r_0^{23}(v') \\ = r_1^{12}(v) + r_1^{23}(v') + r_1^{13}(v+v'). \end{multline} All we have to show is that this equation uniquely determines $r_1$. Suppose that $r'(u,v)$ is another AYBE solution with $r'(u,v) = \frac{1 \otimes 1}{u} + r_0(v) + u r_1'(v) + O(u^2)$. Then $t = r_1' - r_1$ satisfies \begin{equation} t^{12}(v) + t^{13}(v+v') + t^{23}(v') = 0. \end{equation} Now, applying $\mathrm{pr} \otimes id \otimes id$ to this equation, we obtain $(\mathrm{pr} \otimes id) t(v) = 0$ and similarly we obtain $(id \otimes \mathrm{pr}) t(v) = 0$. Hence, $t(v)$ is a scalar meromorphic function satisfying $t(v) + t(v') + t(v+v') = 0$. Now, for any $k \in \mathbb Z \setminus \{0, 1\}$, the elements $v^k$, $(v')^k$, and $(v+v')^k$ are linearly independent, so when we write $t$ in terms of its Laurent expansion, we see that the identity can only be satisfied if $t = a + bv$ for some $a, b \in \mathbb C$. Substituting $t = a + bv$ into the identity gives $3a + 2b(v+v') = 0$, so the identity holds iff $a = b = 0$. Hence, $t(v) = 0$ identically so that $r_1$ is uniquely given by $r_0$. \end{proof} Now, we can complete the \begin{proof}[Proof of Theorem \ref{mt}, part (2a)] Uniqueness is a consequence of Lemma \ref{lau}. By Lemma \ref{ruvc}, $r(u,v)$ indeed has the Laurent expansion \eqref{tle}. Then, Lemma \ref{gruvl}, which uses part (3) of the Theorem, reduces our task to verifying that \eqref{gruv} satisfies the AYBE and the unitarity condition. By Lemma \ref{p12l}, we can assume that $\Phi = 0$, since the proof of Lemma \ref{gruvl} points out that $\Phi^1 + \Phi^2$ commutes with $r(u,v)$. By Lemma \ref{nypl}, it suffices only to show that $y(u)$ given by \eqref{yeq} satisfies the AYBE. This is proved in Lemma \ref{ypl}. 
Hence, the element $r(u,v)$ given by \eqref{afgq} is a unitary AYBE solution lifting $r_0(v)$, proving part (2a) of Theorem \ref{mt}. \end{proof} \begin{proof}[Proof of Theorem \ref{mt}, part (2b)] This follows directly from part (2a) and \eqref{rwsp}. \end{proof} \subsection{Proof of Theorem \ref{mt}, parts (1a) and (1b)} \begin{ov} In this section, we present and exploit condition \eqref{cab}, which follows from \eqref{r01eq} in Lemma \ref{lau}, in order to prove the necessity of the associative BD conditions and formula \eqref{stp} for $s_0$, which is all of (1a) that remains to be proved. The equivalence of \eqref{stp} and \eqref{tra} is an easy computation, proving part (1b) and hence the Theorem. \end{ov} \begin{lemma} Suppose that $r(u,v)$ is a solution of the AYBE having a Laurent expansion of the form \eqref{laur}, where $r_0(v)$ is the classical $r$-matrix with spectral parameter $r_0(v) = \hat r_{T,s}$ for the BD triple $(\Gamma_1, \Gamma_2, T)$ and matrix $s$. Then \begin{equation} \label{cab} (\mathrm{pr} \otimes \mathrm{pr} \otimes \mathrm{pr}) [r_{T,s}^{12} r_{T,s}^{13} - r_{T,s}^{23} r_{T,s}^{12} + r_{T,s}^{13} r_{T,s}^{23}] = 0. \end{equation} \end{lemma} \begin{proof} This follows from \eqref{r01eq} in Lemma \ref{lau}, using Lemma \ref{ovl} below. \end{proof} \begin{lemma} \label{ovl} Let $r_0(v) = \hat {\tilde r}$ where $\tilde r$ satisfies $\tilde r + \tilde r^{21} = P$. Then \begin{multline} r_0^{12}(v) r_0^{13}(v+v') - r_0^{23}(v') r_0^{12}(v) + r_0^{13}(v+v') r_0^{23}(v')\\ = \tilde r^{12} \tilde r^{13} - \tilde r^{23} \tilde r^{12} + \tilde r^{13} \tilde r^{23}. \end{multline} \end{lemma} \begin{proof} Note that $P^{12} \tilde r^{13} = \tilde r^{23} P^{12}$, and similar relations are all derived from $P t P = t^{21}$. 
Substituting $\tilde r^{21} = P - \tilde r^{12}$ six times, we get $\tilde r^{21} \tilde r^{31} - \tilde r^{32} \tilde r^{21} + \tilde r^{31} \tilde r^{32} = \tilde r^{12} \tilde r^{13} - \tilde r^{23} \tilde r^{12} + \tilde r^{13} \tilde r^{23}$. Similarly, we can deduce $\tilde r^{21} \tilde r^{13} - \tilde r^{23} \tilde r^{21} - \tilde r^{13} \tilde r^{23} = -(\tilde r^{12} \tilde r^{13} - \tilde r^{23} \tilde r^{12} + \tilde r^{13} \tilde r^{23})$ and a handful of similar identities to expand \begin{multline}\label{ovrae} [(1-e^v)(1-e^{v'}) (1-e^{v+v'})][r_0^{12}(v) r_0^{13}(v+v') \\ - r_0^{23}(v') r_0^{12}(v) + r_0^{13}(v+v') r_0^{23}(v')] \\ = (\tilde r^{12} + e^v \tilde r^{21})(\tilde r^{13} + e^{v+v'} \tilde r^{31})(1-e^{v'}) - (\tilde r^{23} + e^{v'} \tilde r^{32}) (\tilde r^{12} + e^{v} \tilde r^{21})(1 - e^{v+v'}) \\ + (\tilde r^{13} + e^{v+v'} \tilde r^{31}) (\tilde r^{23} + e^{v'} \tilde r^{32})(1 - e^v)\\ = (e^{2v+2v'} + e^{2v+v'}+e^{v+2v'}+2 e^{v+v'} + e^{v} + e^{v'} + 1)(\tilde r^{12} \tilde r^{13} - \tilde r^{23} \tilde r^{12} + \tilde r^{13} \tilde r^{23}) \\ =[\tilde r^{12} \tilde r^{13} - \tilde r^{23} \tilde r^{12} + \tilde r^{13} \tilde r^{23}][(1-e^v)(1-e^{v'}) (1-e^{v+v'})], \end{multline} proving the Lemma. \end{proof} Now, we are in a position to prove the necessity of the BD associativity conditions given in Definition \ref{atd}: \begin{lemma}\label{nc1} \label{orl} The first condition of Definition \ref{atd} is necessary for an AYBE solution limiting to the CYBE solution to exist. \end{lemma} \begin{proof} Suppose that we are given a Belavin-Drinfeld triple which does not preserve orientation. Hence, there exists $i$ and $j$ such that $T(\alpha_i) = \alpha_{j}$ and $T(\alpha_{i+1}) = \alpha_{j-1}$. Now, let $\tilde r$ be the constant solution of the CYBE corresponding to our Belavin-Drinfeld triple. Then, we find that $AYBE(\tilde r)_{i+2,j-1,j}^{i,j,j+1}$ $= 1 + 0 + 0 = 1$, so \eqref{cab} is not satisfied. 
\end{proof} \begin{lemma} \label{nc2} The second condition of Definition \ref{atd} is necessary for the triple to give rise to AYBE solutions. \end{lemma} \begin{proof} We consider the coefficients $AYBE(\tilde r)_{i+1,j,k}^{i,j+1,k}$, for $T(\alpha_i) = \alpha_j$. We find that \begin{multline} AYBE(\tilde r)_{i+1,j,k}^{i,j+1,k} = \tilde r_{i+1, j}^{i, j+1} \tilde r_{i,k}^{i,k} - \tilde r^{j,k}_{j, k} \tilde r_{i+1, j}^{i, j+1} + \tilde r_{i+1, k}^{i, k+1} \tilde r_{j, k+1}^{j+1, k} \\ = [(e_i - e_j) \otimes e_k] (s + \frac{1}{2} \sum_l e_{ll} \otimes e_{ll}) - \delta_{ik} = [(e_i - e_j) \otimes e_k] s - \frac{1}{2} (e_i + e_j, e_k). \end{multline} In order for $AYBE(\tilde r)$ to be zero modulo scalars, it is necessary that all of these coefficients are equal for all $k$. That is, we require \begin{equation} \label{ntrij} [(e_i - e_j) \otimes \alpha] s = \frac{1}{2} (e_i + e_j, \alpha) \end{equation} for all roots $\alpha \in \Gamma$. Applying the same argument to $AYBE(\tilde r)^{j+1, i, k}_{j, i+1, k}$, we deduce also that \begin{equation} \label{ntrij2} [(e_{i+1}- e_{j+1}) \otimes \alpha] s = \frac{1}{2} (e_{i+1} + e_{j+1}, \alpha) \end{equation} for all roots $\alpha$. Now, provided the first condition of Definition \ref{atd} is satisfied (which we now know is necessary), we can define a permutation $\tilde T$ of $\{1, \ldots, n\}$ such that $T(\alpha_i) = \alpha_j$ implies $\tilde T(i) = j$ and $\tilde T(i+1) = j+1$. This permutation is compatible precisely when it is cyclic; it can be chosen to be cyclic iff there is no cycle $(a_1, \ldots, a_k), 1 \leq k < n$, such that, for each $1 \leq i \leq k$, either $T(\alpha_{a_i}) = \alpha_{a_{i+1}}$, or $T(\alpha_{a_{i}-1}) = \alpha_{a_{i+1}-1}$ (subscripts of $a$ are given modulo $k$). Now, in the case that such a cycle exists, \eqref{ntrij} and \eqref{ntrij2} imply \begin{equation} 0 = \frac{1}{2} (e_{a_1} + \ldots + e_{a_{k}}, \alpha) \end{equation} for any $\alpha$. 
This implies that $e_{a_1} + \ldots + e_{a_{k}} = 1$, so the cycle contains all of $\{1, \ldots, n\}$, contradicting our assumption. \end{proof} \begin{lemma} \label{ays} Suppose $r(u,v)$ satisfies the AYBE and has a Laurent expansion of the form \eqref{laur} with $r_0(v) = \hat{\tilde r}$, where $\tilde r$ is a constant CYBE solution corresponding to the triple $(\Gamma_1, \Gamma_2, T)$ and $s$. Write $\tilde r = a + r_s + s$. Then, for some compatible permutation $\tilde T$, $s_0 = (\mathrm{pr} \otimes \mathrm{pr})s$ satisfies \eqref{tra}. \end{lemma} \begin{proof} Take \eqref{cab} and project to ${\mathfrak h} \otimes {\mathfrak h} \otimes {\mathfrak h}$. Let $t = s + \frac{1}{2} \sum_i e_{ii} \otimes e_{ii}$ be the projection of $\tilde r$ to ${\mathfrak h} \otimes {\mathfrak h}$. Define $t'_{ij} = t^{ij}_{ij} - t^{1j}_{1j} - t^{i1}_{i1}$. Now, \eqref{cab} is equivalent to the condition that \begin{equation} \label{ch1} [(e_1 - e_i) \otimes (e_1 - e_j) \otimes (e_1 - e_k)] (\tilde r^{12} \tilde r^{13} - \tilde r^{23} \tilde r^{12} + \tilde r^{13} \tilde r^{23}) = 0 \end{equation} for all $1 < i, j, k \leq n$. (The same is true if we replace $1$ with any fixed integer $p$ between $1$ and $n$ and allow $i, j,$ and $k$ to take on any value other than $p$.) Using the fact that $t^{11}_{11} = \frac{1}{2}$, we can simplify \eqref{ch1} to \begin{equation}\label{ch2} t'_{ij} t'_{ik} - t'_{jk} t'_{ij} + t'_{ik} t'_{jk} = \frac{1}{4}, \quad 1 < i,j,k \leq n. \end{equation} Specializing to the case $k = i, i \neq j$, we note that $t'_{ij} = - t'_{ji}$, and \eqref{ch2} yields \begin{equation} (t'_{ij})^2 = \frac{1}{4}. \end{equation} Hence, \begin{equation} t'_{ij} = \pm \frac{1}{2}, 1 < i,j \leq n, i \neq j. \end{equation} Given the fact that $t'_{ij} = \frac{1}{2}$ and $t'_{jk} = \frac{1}{2}$ for some distinct $i,j,$ and $k$, \eqref{ch2} implies that $t'_{ik} = \frac{1}{2}$. 
Also, for any distinct $i,j \in \{2,\ldots, n\}$, we have $\{t'_{ij}, t'_{ji}\} = \{\frac{1}{2}, -\frac{1}{2}\}$. Thus, we can obtain a unique total ordering of $\{2, \ldots, n\}$, say the ordered list $(a_2, \ldots, a_n)$, such that $t'_{a_i a_j} = \frac{1}{2}$ for all $i < j$. This is equivalent to a cyclic permutation of $\{1, \ldots, n\}$ given by $\sigma = (1,a_2,a_3 \ldots, a_n)$. That is, a cyclic permutation of $(1, \ldots, n)$ is associated with the ordering of $2, \ldots, n$ obtained by ``cutting off'' $1$. Evidently the values $t'_{ij}$ completely determine $t$ up to scalars. We rewrite this in a way which yields \eqref{tra}. Set $\tilde T = \sigma$. Note that $t_{ii} = \frac{1}{2}$ for $i \neq 1$ and $t_{11} = - \frac{1}{2}$. Using this, we find \begin{equation} \label{ntr1} [(e_i - e_{\tilde T(i)}) \otimes (e_1 - e_j)] t = t'_{\tilde T(i), j} - t'_{ij} = \delta_{i1} - \delta_{ij} = (e_i, e_1 - e_j). \end{equation} Furthermore, we evidently have \begin{equation} \label{ntr2} [(e_i - e_{\tilde T(i)}) \otimes (e_1 - e_j)] \sum_i e_{ii} \otimes e_{ii} = (e_i - e_{\tilde T(i)}, e_1 - e_j). \end{equation} Subtracting one-half of \eqref{ntr2} from \eqref{ntr1}, we get \begin{equation} \label{ntr} [(e_i - e_{\tilde T(i)}) \otimes \alpha] s = \frac{1}{2} (e_i + e_{\tilde T(i)}, \alpha) \end{equation} for any root $\alpha$. Letting $P' = P^0 - \frac{1}{n}(1 \otimes 1) = (\mathrm{pr} \otimes \mathrm{pr}) P^0$ denote the projection of $P^0 = \sum_i e_{ii} \otimes e_{ii}$ to $\mathfrak{g}' \otimes \mathfrak{g}'$ as in \eqref{tra}, and using $s_0 = (\mathrm{pr} \otimes \mathrm{pr})s$, we can also write \eqref{ntr} as \begin{equation} \label{ntr3} [(e_i - e_{\tilde T(i)}) \otimes 1] s_0 = \frac{1}{2} [(e_i + e_{\tilde T(i)}) \otimes 1] P', \end{equation} which is exactly \eqref{tra}. Now, it remains to see that $\tilde T$ is compatible with $T$, that is, $T(\alpha_i) = \alpha_j$ implies $\tilde T(i) = j$ and $\tilde T(i+1) = j+1$. 
To see this, we apply \eqref{ntrij} and \eqref{ntrij2}. Suppose $T(\alpha_i) = \alpha_j$. Using the previous work in this lemma, we know that there is a unique permutation $\tilde T$ such that $s$ satisfies \eqref{tra} for all roots $\alpha$. In particular, \eqref{tra} implies that \begin{equation} \label{ijtt} [(e_i - e_j) \otimes \alpha] s = \frac{1}{2}(e_i + e_j, \alpha) + \sum_{k : \ 0 < O(i,k) < O(i,j)} (e_k, \alpha), \forall \alpha \in \Gamma. \end{equation} Equating this with the right-hand side of \eqref{ntrij}, we conclude that $j = \tilde T(i)$. Also, \eqref{ijtt} continues to hold replacing $i$ and $j$ with $i+1$ and $j+1$, respectively. Comparing this with \eqref{ntrij2}, we also find that $\tilde T(i+1) = j+1$. This completes the proof. \end{proof} \begin{lemma} \label{stptra} The condition \eqref{stp} is equivalent to the condition \eqref{tra}. \end{lemma} \begin{proof} Let ${\mathfrak h}_0 = {\mathfrak h} \cap \mathfrak{g}'$ be the space of traceless diagonal matrices. Since \eqref{tra} uniquely determines $s_0 \in {\mathfrak h}_0 \wedge {\mathfrak h}_0$, it suffices to show that the element $s_0$ given by \eqref{stp} satisfies \eqref{tra}. This is verified as follows, letting $s_0$ be given by \eqref{stp}: \begin{multline} [(e_i - e_{\tilde T(i)}) \otimes 1] s_0 = \sum_{j \notin \{i, \tilde T(i)\}} - \frac{1}{n} e_{jj} + \frac{n-2}{2n} (e_{ii} + e_{\tilde T(i), \tilde T(i)}) \\ = \frac{1}{2} [(e_i + e_{\tilde T(i)}) \otimes 1] P', \end{multline} as desired. \end{proof} \begin{proof}[Proof of Theorem \ref{mt}, parts (1a) and (1b)] Part (2a) proves sufficiency of the conditions, since the $r(u,v)$ given by \eqref{afgq} lifts $r_0(v) = r_{T,s}$ and is a unitary AYBE solution (and it satisfies the BD associativity and $s_0$-conditions). 
For the ``only-if,'' or necessity, Lemma \ref{ays} proves that, given any AYBE solution $r(u,v)$ with a Laurent expansion as in \eqref{laur}, for $r_0(v) = r_{T,s}$, $s$ must satisfy \eqref{tra} for some unique compatible permutation $\tilde T$. In particular, this implies that the BD triple is associative, which alternatively follows from Lemmas \ref{nc1} and \ref{nc2}. This completes the proof of (1a). By Lemma \ref{stptra}, part (1b) follows. \end{proof} This completes the proof of Theorem \ref{mt}. \noindent Address: 45 rue d'Ulm; 75005 PARIS; France. \noindent Email: {\sl [email protected]} \end{document}
\begin{definition}[Definition:Transmitted Codeword] Let $M$ be a message sent over a transmission line $T$. Let $M$ consist of a number of codewords. Each of the codewords comprising $M$, as it is before it has been transmitted across $T$, is referred to as a '''transmitted codeword'''. \end{definition}
Packed storage matrix

A packed storage matrix, also known as a packed matrix, is a term used in programming for representing an $m\times n$ matrix. It is a more compact representation than the full m-by-n rectangular array, obtained by exploiting a special structure of the matrix. Typical examples of matrices that can take advantage of packed storage include:
• Symmetric or Hermitian matrix
• Triangular matrix
• Banded matrix.

Code examples (Fortran)

Both of the following storage schemes are used extensively in BLAS and LAPACK.

An example of packed storage for a hermitian matrix:

complex:: A(n,n) ! a hermitian matrix
complex:: AP(n*(n+1)/2) ! packed storage for A
! the upper triangle of A is stored column-by-column in AP;
! the lower triangle is recovered by conjugation.
! unpacking the matrix AP to A
do j=1,n
  k = j*(j-1)/2
  A(1:j,j) = AP(1+k:j+k)
  A(j,1:j-1) = conjg(AP(1+k:j-1+k))
end do

An example of packed storage for a banded matrix:

real:: A(m,n) ! a banded matrix with kl subdiagonals and ku superdiagonals
real:: AP(-ku:kl,n) ! packed storage for A
! the band of A is stored column-by-column in AP. Some elements of AP are unused.
! unpacking the matrix AP to A
do j=1,n
  forall(i=max(1,j-ku):min(m,j+kl)) A(i,j) = AP(i-j,j)
end do
print *,AP(0,:) ! the diagonal
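The same index arithmetic can be sketched in Python with NumPy for a real symmetric matrix (this is an illustrative sketch, not a BLAS/LAPACK interface): entry $(i, j)$ with $i \le j$ of the upper triangle goes to packed position $i + j(j+1)/2$, zero-based, mirroring the Fortran example above.

```python
import numpy as np

def pack_upper(A):
    # Store the upper triangle of a symmetric matrix column-by-column:
    # entry (i, j) with i <= j goes to position i + j*(j+1)//2 (zero-based).
    n = A.shape[0]
    AP = np.empty(n * (n + 1) // 2, dtype=A.dtype)
    for j in range(n):
        k = j * (j + 1) // 2
        AP[k:k + j + 1] = A[:j + 1, j]
    return AP

def unpack_upper(AP, n):
    # Rebuild the full symmetric matrix from its packed form.
    A = np.empty((n, n), dtype=AP.dtype)
    for j in range(n):
        k = j * (j + 1) // 2
        A[:j + 1, j] = AP[k:k + j + 1]
        A[j, :j] = AP[k:k + j]   # mirror into the lower triangle
    return A
```

A 5-by-5 symmetric matrix thus packs into 15 numbers instead of 25, and round-trips exactly.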
\begin{definition}[Definition:Truth Table] A '''truth table''' is a tabular array that represents the computation of a truth function, that is, a function of the form: :$f : \mathbb B^k \to \mathbb B$ where: :$k$ is a non-negative integer :$\mathbb B$ is a set of truth values, usually $\set {0, 1}$ or $\set {\T, \F}$. \end{definition}
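As an illustration (not part of the definition), the tabular array for a concrete truth function $f: \mathbb B^2 \to \mathbb B$ with $\mathbb B = \set {0, 1}$ can be enumerated mechanically; conjunction is chosen here as an arbitrary example.

```python
from itertools import product

def truth_table(f, k):
    # Enumerate ((b_1, ..., b_k), f(b_1, ..., b_k)) over all 2^k inputs,
    # with B = {0, 1} as the set of truth values.
    return [(bits, f(*bits)) for bits in product((0, 1), repeat=k)]

conj = lambda p, q: p & q   # conjunction, an example truth function f : B^2 -> B

table = truth_table(conj, 2)
# table == [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
```

Each row of the resulting list corresponds to one row of the truth table: the input tuple followed by the value of $f$.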
\begin{document} \title[Regularity of Chemotaxis-Navier-Stokes]{Low Modes Regularity criterion for a chemotaxis-Navier-Stokes system} \author [Mimi Dai]{Mimi Dai} \address{Department of Mathematics, Stat. and Comp. Sci., University of Illinois Chicago, Chicago, IL 60607,USA} \email{[email protected]} \author [Han Liu]{Han Liu} \address{Department of Mathematics, Stat. and Comp. Sci., University of Illinois Chicago, Chicago, IL 60607,USA} \email{[email protected]} \begin{abstract} In this paper we study the regularity problem of a three dimensional chemotaxis-Navier-Stokes system on a periodic domain. A new regularity criterion in terms of only low modes of the oxygen concentration and the fluid velocity is obtained via a wavenumber splitting approach. The result improves many existing criteria in the literature. KEY WORDS: Chemotaxis model; Navier-Stokes equations; regularity. \hspace{0.02cm}CLASSIFICATION CODE: 76D03, 35Q35, 35Q92, 92C17. \end{abstract} \maketitle \section{Introduction} We are interested in the following chemotaxis-Navier-Stokes system on a periodic domain \begin{equation}\label{cmtns} \begin{cases} n_t+u\cdot \nabla n=\Delta n-\nabla \cdot(n\chi(c)\nabla c)\\ c_t+u \cdot \nabla c=\Delta c-nf(c)\\ u_t+(u\cdot \nabla)u+\nabla P =\Delta u +n \nabla \Phi\\ \nabla \cdot u=0, \ \ \ \ (t,x) \in \mathbb{R}^+ \times \mathbb{T}^3. \end{cases} \end{equation} This coupled system arises from modelling aerobic bacteria, e.g. $\textit{Bacillus subtilis},$ suspended into sessile drops of water. It describes a scenario in which both the bacteria, whose population density is denoted by $n=n(t,x),$ and oxygen, whose concentration is denoted by $c=c(t,x),$ are transported by the fluid and diffuse randomly. In addition, the bacteria, which have chemotactic sensitivity $\chi(c),$ tend to swim towards their nutrient oxygen and consume it at a per-capita rate $f(c)$. 
At the same time, since the bacteria are heavier than water, their chemotactic swimming induces buoyant forces which affect the fluid motion. This buoyancy-driven effect is reflected in the third equation in system (\ref{cmtns}), represented by an extra term $n\nabla \Phi$ added to the Navier-Stokes equation. In this extra term, $\Phi$ denotes the gravitational potential, whereas the Navier-Stokes equation is conventionally written with $u=u(t,x)$ denoting the fluid velocity, and $P=P(t,x)$ the pressure. In this paper, we consider a simple yet prototypical case in which \begin{equation}\label{phchf} \nabla\Phi \equiv \text{const.},\ \ \ \ \chi(c) \equiv \text{const.},\ \ \ \ f(c) \equiv c. \end{equation} We note that in this case, solutions to system (\ref{cmtns}) satisfy the following scaling property: \[n_\lambda(t, x)=\lambda^2 n(\lambda^2t, \lambda x),\ \ c_\lambda(t, x)= c(\lambda^2t, \lambda x), \] \[u_\lambda(t, x)=\lambda u(\lambda^2t, \lambda x),\ \ p_\lambda(t, x)=\lambda^2 p(\lambda^2t, \lambda x)\] solve (\ref{cmtns}) with initial data $$n_{\lambda, 0}=\lambda^2n(\lambda x),\ \ c_{\lambda,0}= c(\lambda x),\ \ u_{\lambda, 0}=\lambda u(\lambda x),$$ provided that $$(n(t,x), c(t,x), u(t,x))$$ solves (\ref{cmtns}) with initial data $(n_0(x), c_0(x), u_0(x)).$ It is obvious that the Sobolev space $\dot H^{-\frac12}\times \dot H^{\frac12}\times\dot H^{\frac32}$ is scaling invariant (also called critical) for $(n,u,c)$ under the above natural scaling of the system. Experiments showed that under the chemotaxis-fluid interaction of system (\ref{cmtns}), even almost homogeneous initial bacteria distributions can evolve and exhibit quite intricate spatial patterns, see \cite{DCCGK}, \cite{TCDWKG} and \cite{L1}. In \cite{L1} Lorz proved the existence of a local weak solution to the $3$D chemotaxis-Navier-Stokes system on bounded domains. 
In a recent work by Winkler (\cite{W2}), the existence of global weak solutions was proved under more general assumptions via entropy-energy estimates. We refer readers to the work of Winkler (\cite{W, W1, W2}), Liu and Lorz (\cite{LL}), Duan, Lorz and Markowich (\cite{DLM1}), Chae, Kang and Lee (\cite{CKL, CKL1}), as well as Jiang, Wu and Zheng (\cite{JWZ}) for more details about the well-posedness results for the chemotaxis-Navier-Stokes system. We are aware of several regularity criteria concerning the three-dimensional chemotaxis-Navier-Stokes system. In \cite{CKL}, Chae, Kang and Lee obtained local-in-time classical solutions and Prodi-Serrin type regularity criteria. In particular, if \begin{equation}\label{rc1} \begin{split} &\|u\|_{L^q(0,T; L^p)}+\|\nabla c\|_{L^2(0,T;L^\infty)}<\infty, \ \ \ \frac{3}{p}+\frac{2}{q}=1, \ \ 3 < p \leq \infty, \end{split} \end{equation} then the corresponding classical solution can be extended beyond time $T$. In \cite{CKL1}, Chae, Kang and Lee also obtained regularity criteria in terms of the $L^p$ norms of $u$ and $n.$ Jiang, Wu and Zheng, in their recent paper \cite{JWZ1}, proved that a classical solution to the initial boundary value problem of the Keller-Segel model, i.e., the fluid-free version of system (\ref{cmtns}), exists beyond time $T$ if \begin{equation}\notag \begin{split} &\|\nabla c\|_{L^2(0,T;L^\infty)}<\infty,\\ \text{or }\ \ &\|n\|_{L^q(0,T; L^p)}<\infty, \ \ \ \frac{3}{p}+\frac{2}{q} \leq 2,\ \ \frac{3}{2}<p \leq \infty. \end{split} \end{equation} In this paper, we aim to establish a regularity condition weaker than (\ref{rc1}). Our result is stated in the following theorem, where $u_{\leq Q_u}$ and $c_{\leq Q_c}$ denote certain low modes of the velocity and oxygen concentration, which we shall explain later. \begin{Theorem}\label{Rgc} Let $(n(t), c(t), u(t))$ be a weak solution to (\ref{cmtns}) on $[0,T]$ on the torus $\mathbb T^3$. 
Assume that $(n(t), c(t), u(t))$ is regular on $[0, T)$ and \begin{equation}\label{c1u1q} \int^{T}_0 \|\nabla c_{\leq Q_c(t)}(t)\|^2_{L^\infty}+\|u_{\leq Q_u(t)}(t)\|_{B^1_{\infty, \infty}} \mathrm{d}t < \infty, \end{equation} then $(n(t), c(t), u(t))$ is regular on $[0, T].$ \end{Theorem} We note that the quantity in (\ref{c1u1q}) is invariant with respect to the scaling of system (\ref{cmtns}). It is obvious that the condition on the oxygen concentration $c$ in (\ref{c1u1q}) is weaker than that of (\ref{rc1}). It was also shown that the condition on the velocity $u$ in (\ref{c1u1q}) is weaker than that of (\ref{rc1}), see \cite{CD1}. Our main devices are the frequency localization technique and the wavenumber splitting method, which have been extensively applied to study the regularity problem and related problems (such as determining modes, see \cite{CD} and \cite{CDK}) for various supercritical dissipative equations. In particular, in \cite{CS2} it was proved that a solution to the Navier-Stokes or Euler equation does not blow up at time $T$ if \begin{gather*} \int_0^T \|\nabla \times u_{\leq Q}\|_{B^0_{\infty, \infty}}\mathrm{d}t < \infty \end{gather*} for some wavenumber $\Lambda(t)=2^{Q(t)}.$ This result improved previous regularity criteria of Beale-Kato-Majda \cite{BKM} as well as of Prodi-Serrin-Ladyzhenskaya. The idea of separating high and low frequency modes by a critical wavenumber originates in Kolmogorov's theory of turbulence, which predicts the existence of a critical wavenumber above which the dissipation term dominates. Later, the wavenumber splitting method was applied to a liquid crystal model with Q-tensor configuration \cite{D-Qtensor}, where it was shown that a condition solely on the low modes of the velocity (no condition on the Q-tensor) can guarantee regularity. 
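To make the low-mode quantities in the criterion concrete, the following sketch computes sharp-cutoff analogues of the dyadic pieces $u_q$ and of the Besov-type size $\sup_{q\leq Q}\lambda_q\|u_q\|_\infty$ of $u_{\leq Q}$ for a one-dimensional periodic signal via the FFT. This is only an illustration: sharp Fourier cutoffs replace the smooth partition of unity recalled in the next section, and the signal, resolution and $Q$ are arbitrary.

```python
import numpy as np

def lp_piece(u, q):
    # Sharp-cutoff analogue of the dyadic block u_q: keep Fourier modes with
    # 2^(q-1) < |k| <= 2^q (for q = -1, keep only the mean mode k = 0).
    N = len(u)
    k = np.abs(np.fft.fftfreq(N, d=1.0 / N))   # integer frequencies |k|
    uh = np.fft.fft(u)
    if q == -1:
        mask = k < 1.0
    else:
        mask = (k > 2.0 ** (q - 1)) & (k <= 2.0 ** q)
    return np.real(np.fft.ifft(uh * mask))

def besov_low_mode(u, Q):
    # sup_{q <= Q} lambda_q ||u_q||_inf with lambda_q = 2^q, i.e. the
    # B^1_{infty,infty}-type size of the low-mode part u_{<= Q}.
    return max(2.0 ** q * np.max(np.abs(lp_piece(u, q))) for q in range(-1, Q + 1))

# example: u(x) = cos(3x) lives in the single block q = 2 (since 2 < 3 <= 4)
x = 2.0 * np.pi * np.arange(64) / 64
u = np.cos(3.0 * x)
```

For $u(x) = \cos(3x)$ the only nonzero block is $q = 2$, so the low-mode quantity equals $\lambda_2 \|u\|_\infty = 4$.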
\section{Preliminaries} \label{sec:pre} \subsection{Notation} \label{sec:notation} The symbol $A\lesssim B$ denotes an estimate of the form $A\leq C B$ with some absolute constant $C$, and $A\sim B$ denotes an estimate of the form $C_1 B\leq A\leq C_2 B$ with absolute constants $C_1$, $C_2$. The Lebesgue norm $\|\cdot\|_{L^p}$ is shortened to $\|\cdot\|_p$ when no confusion arises. The symbols $W^{k,p}$ and $H^{s}$ represent the standard Sobolev spaces and $L^2$-based Sobolev spaces, respectively. \subsection{Littlewood-Paley decomposition} \label{sec:LPD} The main analysis tools are the frequency localization method and a wavenumber splitting approach based on the Littlewood-Paley theory, which we briefly recall here. For a complete description of the theory and applications, the readers are referred to the books \cite{BCD} and \cite{G}. We denote the Fourier transform and its inverse by $\mathcal F$ and $\mathcal F^{-1},$ respectively. We construct a family of smooth functions $\{\varphi_q \}_{q=-1}^\infty$ with annular support that forms a dyadic partition of unity in the frequency space, defined as \begin{equation}\notag \varphi_q(\xi)= \begin{cases} \varphi(\lambda_q^{-1}\xi) \ \ \ \mbox { for } q\geq 0,\\ \chi(\xi) \ \ \ \mbox { for } q=-1, \end{cases} \end{equation} where $\varphi(\xi)=\chi(\xi/2)-\chi(\xi)$ and $\chi\in C_0^\infty(\mathbf{R}^n)$ is a nonnegative radial function chosen in a way such that \begin{equation}\notag \chi(\xi)= \begin{cases} 1, \ \ \mbox { for } |\xi|\leq\frac{3}{4}\\ 0, \ \ \mbox { for } |\xi|\geq 1. 
\end{cases} \end{equation} Introducing $\tilde h:=\mathcal F^{-1}\chi$ and $h:=\mathcal F^{-1}\varphi,$ we define the Littlewood-Paley projections for a function $u \in \mathcal{S}'$ as \begin{equation}\notag \begin{cases} & u_{-1}=\mathcal F^{-1}(\chi(\xi)\mathcal Fu)=\displaystyle\int \tilde h(y)u(x-y)dy,\\ &u_q:=\Delta_qu=\mathcal F^{-1}(\varphi(\lambda_q^{-1}\xi)\mathcal Fu)=\lambda_q^n\displaystyle\int h(\lambda_qy)u(x-y)dy, \ q\geq 0. \end{cases} \end{equation} Then the identity \begin{equation}\notag u=\sum_{q=-1}^\infty u_q \end{equation} holds in the sense of distributions. To simplify the notation, we denote \begin{equation}\notag \tilde u_q=u_{q-1}+u_q+u_{q+1}, \qquad u_{\leq Q}=\sum_{q=-1}^Qu_q, \qquad u_{(P,Q]}=\sum_{q=P+1}^Qu_q. \end{equation} We note that $ \|u\|_{H^s} \sim \left(\sum_{q=-1}^\infty\lambda_q^{2s}\|u_q\|_2^2\right)^{\frac{1}{2}},$ for each $u \in H^s$ and $s\in\mathbf{R}$. Utilizing the Littlewood-Paley theory, we give an equivalent definition of the norm of the Besov space $B_{p,\infty}^{s}$ as follows. \begin{Definition} Let $s\in \mathbb R$, and $1\leq p\leq \infty$. The Besov space $B_{p,\infty}^{s}$ is the space of tempered distributions $u$ whose Besov norm $\|u\|_{B_{p, \infty}^{s}} < \infty,$ where $$\|u\|_{B_{p, \infty}^{s}}:=\sup_{q\geq -1}\lambda_q^s\|u_q\|_p. $$ \end{Definition} Moreover, we recall Bernstein's inequality, see \cite{L}. \begin{Lemma}\label{brnst} Let $n$ be the space dimension and $1\leq s\leq r\leq \infty$. Then for all tempered distributions $u$, \begin{equation}\notag \|u_q\|_{r}\lesssim \lambda_q^{n(\frac{1}{s}-\frac{1}{r})}\|u_q\|_{s}. 
\end{equation} \end{Lemma} Throughout the paper, we will also utilize Bony's paraproduct decomposition \begin{equation}\notag \begin{split} \Delta_q(u\cdot v)=&\sum_{|q-p|\leq 2}\Delta_q(u_{\leq{p-2}}\cdot v_p)+ \sum_{|q-p|\leq 2}\Delta_q(u_{p}\cdot v_{\leq{p-2}})+\sum_{p\geq q-2}\Delta_q(u_p\cdot\tilde v_p), \end{split} \end{equation} as well as the commutator notation $$ [\Delta_q, u_{\leq{p-2}}\cdot\nabla]v_p=\Delta_q(u_{\leq{p-2}}\cdot\nabla v_p)-u_{\leq{p-2}}\cdot\nabla \Delta_qv_p. $$ An estimate for the commutator is given by the following lemma (c.f. \cite{CD}). \begin{Lemma}\label{comtr} Let $\frac1{r_2}+\frac1{r_3}=\frac1{r_1}$. Then we have the estimate \begin{equation}\notag \|[\Delta_q,u_{\leq{p-2}}\cdot\nabla] v_q\|_{r_1}\lesssim \|v_q\|_{r_3}\sum_{p' \leq p-2} \lambda_{p'} \|u_{p'}\|_{r_2}. \end{equation} \end{Lemma} \subsection{Weak solution and regular solution to system (\ref{cmtns})} \label{sec:sol} From \cite{W2}, we know that on a bounded, smooth and convex domain $\Omega$ in three dimensions, system (\ref{cmtns}) has a global weak solution $(n,c,u)$ which satisfies the equations in (\ref{cmtns}) in the distributional sense, provided that the initial data $(n_0, c_0, u_0)$ satisfy $$n_0 \in L \log L(\Omega), \ \ c_0 \in L^\infty(\Omega) \text{ and } \sqrt{c_0} \in H^1(\Omega), \ \ u_0 \in L_\sigma^2(\Omega),$$ where $L_\sigma^2(\Omega)$ is the space of solenoidal vector fields in $L^2(\Omega).$ We highlight the following properties of the weak solution $(n,c,u)$ in particular \begin{align*} &n \in L^\infty(0,\infty; L^1(\Omega)),\ \ c \in L^\infty(0,\infty; L^\infty(\Omega)),\\ &u \in L^\infty_{loc}(0,\infty; L^2(\Omega))\cap L^2_{loc}(0,\infty; H^1_0(\Omega)). \end{align*} A regular solution of (\ref{cmtns}) is understood to be one with enough regularity to satisfy the equations of the system pointwise. Typically, a solution in a space with higher regularity than its critical space can be shown regular via bootstrapping arguments. 
The local existence of regular solutions to (\ref{cmtns}) was shown in \cite{CKL}. \subsection{Parabolic regularity theory} \label{sec:par} We consider the heat equation on $\mathbb{T}^d$ with $d\geq2$ \begin{equation}\label{eq-heat} u_t-\Delta u=f \end{equation} with initial data $u_0$. We shall see that the solution $u$ turns out to be smoother than the source term $f.$ \begin{Lemma}\label{le-heat} Let $u$ be a solution to (\ref{eq-heat}) with $u_0\in H^{\alpha+1}$ and $f\in L^2(0,T; H^\alpha)$ for $\alpha\in \mathbb R$. Then we have $u\in L^2(0,T; H^{\alpha+2})\cap H^1(0,T; H^\alpha)$. \end{Lemma} \textbf{Proof:\ } Projecting equation (\ref{eq-heat}) by $\Delta_q$ and taking the inner product of the resulting equation with $\lambda_q^{2\alpha+4}u_q$ gives \begin{equation}\notag \frac12\frac{d}{dt}\lambda_q^{2\alpha+4}\|u_q\|_2^2+\lambda_q^{2\alpha+4}\|\nabla u_q\|_2^2=\lambda_q^{2\alpha+4}\int f_qu_q\, \mathrm{d}x. \end{equation} Applying H\"older's and Young's inequalities to the right hand side yields \begin{equation}\notag \frac{d}{dt}\lambda_q^{2\alpha+4}\|u_q\|_2^2+\lambda_q^{2\alpha+4}\|\nabla u_q\|_2^2\leq 4\lambda_q^{2\alpha+2}\|f_q\|_2^2. \end{equation} Applying Duhamel's formula, summing in $q$, and integrating over $[0,T]$, we obtain \begin{equation}\notag \begin{split} \int_0^T\sum_{q \geq -1}\lambda_q^{2\alpha+4}\|u_q(t)\|_2^2\, \mathrm{d}t \leq &\int_0^T\sum_{q \geq -1} \lambda_q^{2\alpha+4}\|u_q(0)\|_2^2e^{-\lambda_q^2t}\, \mathrm{d}t\\ &+4\int_0^T\sum_{q \geq -1}\lambda_q^{2\alpha+2}\int_0^te^{-\lambda_q^2(t-s)}\|f_q(s)\|_2^2\mathrm{d}s \, \mathrm{d}t. \end{split} \end{equation} The first integral on the right hand side is handled as \begin{equation}\notag \int_0^T\sum_{q \geq -1} \lambda_q^{2\alpha+4}\|u_q(0)\|_2^2e^{-\lambda_q^2t}\, \mathrm{d}t \leq \sum_{q \geq -1} \lambda_q^{2\alpha+2}\|u_q(0)\|_2^2\left(1-e^{-\lambda_q^2T}\right) \lesssim \|u_0\|_{H^{\alpha+1}}^2. 
\end{equation} In order to estimate the second integral, we exchange the order of integration to obtain \begin{equation}\notag \begin{split} &\int_0^T\sum_{q \geq -1}\lambda_q^{2\alpha+2}\int_0^te^{-\lambda_q^2(t-s)}\|f_q(s)\|_2^2\mathrm{d}s \, \mathrm{d}t\\ \leq &\int_0^T\int_s^T\sum_{q \geq -1}\lambda_q^{2\alpha+2}e^{-\lambda_q^2(t-s)}\|f_q(s)\|_2^2\mathrm{d}t \, \mathrm{d}s\\ \leq &\int_0^T\sum_{q \geq -1}\lambda_q^{2\alpha}\|f_q(s)\|_2^2\left(1-e^{-\lambda_q^2(T-s)}\right)\, \mathrm{d}s\\ \lesssim &\|f\|_{L^2(0,T;H^\alpha)}^2. \end{split} \end{equation} Combining the estimates above, we conclude that $u\in L^2(0,T; H^{\alpha+2})$ for $\alpha\in \mathbb R$. To prove $u\in H^1(0,T; H^{\alpha})$, we first project equation (\ref{eq-heat}) onto the $q$-th dyadic shell \[(u_t)_q=\Delta u_q+f_q.\] It follows that \[\|(u_t)_q\|_2^2\leq 2\|\Delta u_q\|_2^2+2\|f_q\|_2^2.\] Thus we deduce that \begin{equation}\notag \begin{split} \int_0^T\sum_{q\geq -1}\lambda_q^{2\alpha} \|(u_t)_q\|_2^2\,\mathrm{d}t\lesssim & \int_0^T\sum_{q\geq -1}\lambda_q^{2\alpha} \|\Delta u_q\|_2^2\,\mathrm{d}t +\int_0^T\sum_{q\geq -1}\lambda_q^{2\alpha} \|f_q\|_2^2\,\mathrm{d}t\\ \lesssim &\|u\|_{L^2(0,T; H^{\alpha+2})}^2+\|f\|_{L^2(0,T;H^\alpha)}^2. \end{split} \end{equation} It is then clear that $u\in H^1(0,T; H^\alpha)$, which completes the proof of the lemma. \par{\raggedleft$\Box$\par} \section{Proof of Theorem \ref{Rgc}} \label{sec:reg} This section is devoted to the proof of the main result.
We start by introducing the dissipation wavenumber $\Lambda_u(t)$ for $u$ and $\Lambda_c(t)$ for $c$, \begin{equation}\label{wave-uc} \begin{split} \Lambda_u(t)=&\min\left\{\lambda_q: \lambda_p^{-1}\|u_p(t)\|_\infty<C_0, \forall p >q, q \in \mathbb{N}\right\},\\ \Lambda_c(t)=&\min\left\{\lambda_q: \lambda_p^{\frac{3}{r}}\|c_p(t)\|_r<C_0, \forall p >q, q \in \mathbb{N}\right\}, \ \ r\in\left(3,\frac{3}{1-\varepsilon}\right), \end{split} \end{equation} where $\varepsilon>0$ is a fixed arbitrarily small constant, and $C_0$ is a small constant to be determined later. Throughout this section, we use $C$ for various absolute constants which can be different from line to line. In addition, we let $Q_u(t)$ and $Q_c(t)$ be integers such that $\lambda_{Q_u(t)}=\Lambda_u(t)$ and $\lambda_{Q_c(t)}=\Lambda_c(t)$. Then the constraint on the low modes is defined as \[f(t):= \| \nabla c_{\leq Q_c(t)}(t)\|^2_{L^{\infty}}+\|u_{\leq Q_u(t)}(t)\|_{B^1_{\infty, \infty}}.\] Notice that the wavenumber $\Lambda_u$ separates the inertial range from the dissipation range where the viscous term $\Delta u$ dominates, and $\Lambda_c$ plays the same role for $c$. Precisely, we have \[\|u_{Q_u}(t)\|_{\infty}\geq C_0 \Lambda_u(t), \ \ \Lambda_c^{\frac3r}(t)\|c_{Q_c}(t)\|_{r}\geq C_0;\] \[\lambda_q^{-1}\|u_q(t)\|_{\infty}< C_0, \ \forall q>Q_u; \ \ \lambda_q^{\frac3r}\|c_q(t)\|_{r}< C_0, \ \forall q>Q_c.\] The crucial part is to establish a uniform (in time) bound for each of the unknowns $n, u$ and $c$ in a space with higher regularity than the critical Sobolev space. In fact, it is sufficient to prove that $(n, u, c)\in L^\infty(0,T; \dot H^{s_1})\times L^\infty(0,T; \dot H^{s_2})\times L^\infty(0,T; \dot H^{s_3})$ for some $s_1>-\frac12, s_2>\frac12$ and $s_3>\frac32$. Due to the complicated interactions among the three equations in (\ref{cmtns}), the aforementioned goal will be achieved in two steps.
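As a purely illustrative aside, the definition (\ref{wave-uc}) is elementary to evaluate on sample data. The following minimal sketch (not part of the analysis; the helper name and the Kolmogorov-type sample spectrum are our own, and we adopt the convention that $q\in\mathbb{N}$ starts at $q=0$) computes $Q_u$ from given dyadic magnitudes $\|u_q\|_\infty$ with $\lambda_q=2^q$:

```python
import numpy as np

def dissipation_index(q_vals, u_inf, C0):
    """Smallest admissible q such that lambda_p^{-1} * ||u_p||_inf < C0 for all p > q.

    q_vals : array of shell indices q = -1, 0, 1, ...
    u_inf  : array of the same length with u_inf[i] ~ ||u_{q_vals[i]}||_inf
    """
    ratios = u_inf / 2.0 ** q_vals              # lambda_q^{-1} ||u_q||_inf per shell
    bad = q_vals[ratios >= C0]                  # shells where the smallness condition fails
    # Every failing shell must lie at or below Q_u; if none fails, take the bottom q = 0.
    return int(bad.max()) if bad.size else 0

q_vals = np.arange(-1, 15)
u_inf = 2.0 ** (q_vals / 3.0)   # sample spectrum ||u_q||_inf ~ lambda_q^{1/3}
Q_u = dissipation_index(q_vals, u_inf, 0.1)
print(Q_u)  # 4: here lambda_q^{-1}||u_q||_inf = 2^{-2q/3} drops below 0.1 only for q >= 5
```

For this sample spectrum the ratio $\lambda_q^{-1}\|u_q\|_\infty=2^{-2q/3}$ is decreasing, so $Q_u$ is simply the last shell violating the smallness condition; for a general (non-monotone) spectrum the same `max` over violating shells still realizes the definition.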
The first step is to show that $(n, u, c)\in L^\infty(0,T; \dot H^{s})\times L^\infty(0,T; \dot H^{s+1})\times L^\infty(0,T; \dot H^{s+1})$ for some $s\in (-\frac12,0)$. The second step is to apply bootstrapping arguments, the $L^p$-$L^q$ theory for parabolic equations and a mixed derivative theorem to the equation of the oxygen concentration $c$, and hence improve the regularity of $c$. In the first step, we multiply the equations in (\ref{cmtns}) by $\lambda_q^{2s}\Delta_q^2n,$ $\lambda_q^{2s+2}\Delta_q^2c$ and $\lambda_q^{2s+2}\Delta_q^2u,$ respectively. Integrating and summing lead to \begin{equation}\label{eqtnn} \begin{split} \frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\sum_{q \geq -1} \lambda^{2s}_q \|n_q\|_2^2 \leq & -\sum_{q \geq -1}\lambda^{2s}_q\|\nabla n_q\|_2^2 -\sum_{q \geq -1}\lambda^{2s}_q\int_{\mathbb{R}^3} \Delta_q (u \cdot \nabla n)n_q \mathrm{d}x\\ &-\sum_{q \geq -1}\lambda^{2s}_q\int_{\mathbb{R}^3} \Delta_q(\nabla \cdot (n \chi(c)\nabla c))n_q \mathrm{d}x; \end{split} \end{equation} \begin{equation}\label{eqtnc} \begin{split} &\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\sum_{q \geq -1} \lambda^{2s+2}_q \|c_q\|_2^2 \leq -\sum_{q \geq -1}\lambda^{2s+2}_q\|\nabla c_q\|_2^2 \\ &-\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3} \Delta_q (u \cdot \nabla c)c_q \mathrm{d}x -\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3} \Delta_q(nf(c))c_q \mathrm{d}x; \end{split} \end{equation} \begin{equation}\label{eqtnu} \begin{split} &\frac{1}{2}\frac{\mathrm{d}}{\mathrm{d}t}\sum_{q \geq -1} \lambda^{2s+2}_q \|u_q\|_2^2 \leq -\sum_{q \geq -1}\lambda^{2s+2}_q\|\nabla u_q\|_2^2 \\ &-\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3} \Delta_q (u \cdot \nabla u)u_q \mathrm{d}x -\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3} \Delta_q(n\nabla\Phi)u_q \mathrm{d}x.
\end{split} \end{equation} For simplicity we label the terms \begin{gather*} I:=-\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q (u \cdot \nabla u)u_q \mathrm{d}x, \ \ \ \ II:=-\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3} \Delta_q (u \cdot \nabla c)c_q \mathrm{d}x,\\ III:= -\sum_{q \geq -1}\lambda^{2s}_q\int_{\mathbb{R}^3} \Delta_q (u \cdot \nabla n)n_q \mathrm{d}x,\\ IV:=-\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3} \Delta_q(n\nabla\Phi)u_q \mathrm{d}x, \ \ \ \ V:=-\sum_{q \geq -1}\lambda^{2s+2}_q\int_{\mathbb{R}^3} \Delta_q(nf(c))c_q \mathrm{d}x,\\ VI:=-\sum_{q \geq -1}\lambda^{2s}_q\int_{\mathbb{R}^3} \Delta_q(\nabla \cdot (n \chi(c)\nabla c))n_q \mathrm{d}x. \end{gather*} \subsection{Estimate of $I$} We estimate the term $I$ using the wavenumber splitting method. As we shall see, the commutator reveals certain cancellation within the nonlinear interactions. \begin{Lemma}\label{ppsuu} Let $s>-\frac{1}{2}.$ We have \begin{equation}\notag |I| \lesssim C_0\sum_{q> -1}\lambda_q^{2s+4}\|u_q\|_2^2+Q_uf(t)\sum_{q\geq -1}\lambda_q^{2s+2}\|u_q\|_2^2. \end{equation} \end{Lemma} \textbf{Proof:\ } Applying Bony's paraproduct decomposition to $I$ leads to \begin{equation}\notag \begin{split} I=&-\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\Delta_q(u_{\leq p-2}\cdot\nabla u_{p})u_q\, \mathrm {d}x\\ &-\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\Delta_q(u_{p}\cdot\nabla u_{\leq{p-2}})u_q\, \mathrm {d}x\\ &-\sum_{q\geq -1}\sum_{p\geq q-2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\Delta_q(u_p\cdot\nabla\tilde u_p)u_q\, \mathrm {d}x\\ =:&I_{1}+I_{2}+I_{3}. 
\end{split} \end{equation} Using the fact $\sum_{|q-p|\leq 2}\Delta_q u_p=u_q$ and the commutator notation, we have \begin{equation}\notag \begin{split} I_{1}=&-\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}[\Delta_q, u_{\leq{p-2}}\cdot\nabla] u_pu_q\, \mathrm {d}x\\ &-\sum_{q\geq -1}\lambda_q^{2s+2}\int_{\mathbf{R}^3}u_{\leq{q-2}}\cdot\nabla u_q u_q\, \mathrm {d}x\\ &-\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qu_p u_q\, \mathrm {d}x\\ =:&I_{11}+I_{12}+I_{13}. \end{split} \end{equation} Moreover, $I_{12}=0$ since $\mbox{div}\, u_{\leq q-2}=0.$ We then split $I_{11}$ based on the definition of $\Lambda_u(t)$ \begin{equation}\notag \begin{split} |I_{11}|\leq &\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{\leq{p-2}}\cdot\nabla] u_pu_q\right|\, \mathrm {d}x\\ \leq & \sum_{p\leq Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{\leq{p-2}}\cdot\nabla] u_pu_q\right|\, \mathrm {d}x\\ &+\sum_{p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{\leq Q_u}\cdot\nabla] u_pu_q\right|\, \mathrm {d}x\\ &+\sum_{p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{(Q_u,p-2]}\cdot\nabla] u_pu_q\right|\, \mathrm {d}x\\ =: &I_{111}+I_{112}+I_{113}.
\end{split} \end{equation} Using Lemma \ref{comtr}, H\"older's inequality, and the definition of $f(t)$, we obtain \begin{equation}\notag \begin{split} I_{111}\leq &\sum_{1\leq p\leq Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|\nabla u_{\leq p-2}\|_\infty\|u_p\|_2\|u_q\|_2\\ \lesssim & f(t) \sum_{1\leq p\leq Q_u+2}\|u_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|u_q\|_2\sum_{p'\leq p-2}1\\ \lesssim &Q_u f(t) \sum_{1\leq p\leq Q_u+2}\lambda_p^{s+1}\|u_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{s+1}\|u_q\|_2\\ \lesssim &Q_u f(t) \sum_{q\geq -1}\lambda_q^{2s+2}\|u_q\|_2^2; \end{split} \end{equation} and similarly \begin{equation}\notag \begin{split} I_{112}\leq &\sum_{ p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|\nabla u_{\leq Q_u}\|_\infty\|u_p\|_2\|u_q\|_2\\ \lesssim & Q_uf(t) \sum_{ p> Q_u+2}\|u_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|u_q\|_2\\ \lesssim & Q_uf(t) \sum_{ p> Q_u+2}\lambda_p^{s+1}\|u_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{s+1}\|u_q\|_2\\ \lesssim &Q_u f(t) \sum_{q> Q_u}\lambda_q^{2s+2}\|u_q\|_2^2. \end{split} \end{equation} We estimate $I_{113}$ with the help of H\"older's inequality and Lemma \ref{comtr} \begin{equation}\notag \begin{split} I_{113} \leq &\sum_{ p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|[\Delta_q,u_{(Q_u,p-2]}\cdot\nabla]u_p\|_2\|u_q\|_2\\ \leq &\sum_{ p> Q_u+2}\|u_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|u_q\|_2\sum_{Q_u<p'\leq p-2}\lambda_{p'}\|u_{p'}\|_\infty\\ \lesssim & C_0 \sum_{ p> Q_u+2}\|u_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|u_q\|_2\sum_{Q_u<p'\leq p-2}\lambda_{p'}^{2}\\ \lesssim & C_0 \sum_{ p> Q_u+2}\lambda_p^{2s+4}\|u_p\|_2^2\sum_{Q_u<p'\leq p-2}\lambda_{p'-p}^{2}\\ \lesssim & C_0 \sum_{ p> Q_u+2}\lambda_p^{2s+4}\|u_p\|_2^2.
\end{split} \end{equation} We split $I_{13}$ according to the definition of $\Lambda_u(t)$ \begin{equation}\notag \begin{split} |I_{13}|\leq & \sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\big|(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qu_p u_q\big|\, \mathrm {d}x\\ \leq & \sum_{-1\leq q\leq Q_u}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\big|(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qu_p u_q\big|\, \mathrm {d}x\\ & +\sum_{q>Q_u}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\big|(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qu_p u_q\big|\, \mathrm {d}x\\ =:& I_{131}+I_{132}. \end{split} \end{equation} Using H\"older's inequality and the definition of $f(t)$, we bound $I_{131}$ as follows. \begin{equation}\notag \begin{split} |I_{131}|\leq & \sum_{-1\leq q\leq Q_u}\lambda_q^{2s+3} \|u_q\|_{\infty}\sum_{|q-p|\leq 2}\|u_{\leq{p-2}}-u_{\leq{q-2}}\|_2\|u_p\|_2\\ \lesssim & f(t)\sum_{-1\leq q\leq Q_u}\lambda_q^{2s+2} \sum_{|q-p|\leq 2}\|u_{\leq{p-2}}-u_{\leq{q-2}}\|_2\|u_p\|_2\\ \lesssim & f(t)\sum_{-1\leq q\leq Q_u}\lambda_q^{2s+2} \|u_q\|_2^2.\\ \end{split} \end{equation} We estimate $I_{132}$ using H\"older's inequality and the definition of $\Lambda_u(t)$, \begin{equation}\notag \begin{split} |I_{132}|\leq & \sum_{q> Q_u}\lambda_q^{2s+3} \|u_q\|_\infty\sum_{|q-p|\leq 2}\|u_{\leq{p-2}}-u_{\leq{q-2}}\|_2\|u_p\|_{2}\\ \lesssim & C_0 \sum_{q> Q_u}\lambda_q^{2s+4}\sum_{|q-p|\leq 2}\|u_{\leq{p-2}}-u_{\leq{q-2}}\|_2\|u_p\|_2\\ \lesssim & C_0 \sum_{q> Q_u}\lambda_q^{2s+4}\|u_q\|_2^2. \end{split} \end{equation} We omit the detailed estimation of $I_{2}$ as it is similar to that of $I_{11}$.
Meanwhile, after an integration by parts, we have for $I_{3}$ and $s>-\frac12$ \begin{equation}\notag \begin{split} |I_{3}|\leq&\sum_{q\geq -1}\sum_{p\geq q-2}\lambda_q^{2s+2}\int_{\mathbb R^3}|\Delta_q(u_p\otimes\tilde u_p)\nabla u_q|\, \mathrm {d}x\\ \leq &\sum_{q> Q_u}\lambda_q^{2s+3}\|u_q\|_\infty\sum_{p\geq q-2}\|u_p\|_2^2 +\sum_{-1\leq q\leq Q_u}\lambda_q^{2s+3}\|u_q\|_\infty\sum_{p\geq q-2}\|u_p\|_2^2\\ \leq &C_0\sum_{q> Q_u}\lambda_q^{2s+4}\sum_{p\geq q-2}\|u_p\|_2^2 +f(t)\sum_{-1\leq q\leq Q_u}\lambda_q^{2s+2}\sum_{p\geq q-2}\|u_p\|_2^2\\ \lesssim & C_0 \sum_{p> Q_u}\lambda_p^{2s+4}\|u_p\|_2^2\sum_{Q_u< q\leq p+2}\lambda_{q-p}^{2s+4} +f(t)\sum_{p\geq -1}\lambda_p^{2s+2}\|u_p\|_2^2\sum_{q\leq p+2}\lambda_{q-p}^{2s+2}\\ \lesssim & C_0 \sum_{q> Q_u}\lambda_q^{2s+4}\|u_q\|_2^2+f(t)\sum_{q\geq -1}\lambda_q^{2s+2}\|u_q\|_2^2. \end{split} \end{equation} We combine the above estimates to conclude that \begin{equation}\notag |I|\lesssim C_0\sum_{q> -1}\lambda_q^{2s+4}\|u_q\|_2^2+Q_uf(t)\sum_{q\geq -1}\lambda_q^{2s+2}\|u_q\|_2^2. \end{equation} \par{\raggedleft$\Box$\par} \subsection{Estimate of $II$} \begin{Lemma}\label{ppscv} Let $s>-\frac12$. We have \begin{equation}\notag \begin{split} |II|\leq & \left(CC_0+\frac1{32}\right) \sum_{q>-1}\left(\lambda_q^{2s+4}\|c_q\|_2^2+\lambda_q^{2s+4}\|u_q\|_2^2\right)+CQ_uf(t)\sum_{q\geq-1}\lambda_q^{2s+2}\|c_q\|_2^2. \end{split} \end{equation} \end{Lemma} \textbf{Proof:\ } We start with Bony's paraproduct decomposition for $II$, \begin{equation}\notag \begin{split} II=&\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\Delta_q(u_{\leq p-2}\cdot\nabla c_p)c_q\, \mathrm{d}x\\ &+\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\Delta_q(u_{p}\cdot\nabla c_{\leq{p-2}})c_q\, \mathrm{d}x\\ &+\sum_{q\geq -1}\sum_{p\geq q-2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\Delta_q(u_p\cdot\nabla\tilde c_p)c_q\, \mathrm{d}x\\ =:&II_{1}+II_{2}+II_{3}.
\end{split} \end{equation} Using the commutator notation, we rewrite $II_1$ as \begin{equation}\notag \begin{split} II_{1}=&\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}[\Delta_q,u_{\leq{p-2}}\cdot\nabla] c_pc_q\, \mathrm{d}x\\ &+\sum_{q\geq -1}\lambda_q^{2s+2}\int_{\mathbf{R}^3}u_{\leq{q-2}}\cdot\nabla c_q c_q\, \mathrm{d}x\\ &+\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qc_p c_q\, \mathrm{d}x\\ =:&II_{11}+II_{12}+II_{13}. \end{split} \end{equation} Here, just as in Lemma \ref{ppsuu}, we used $\sum_{|q-p|\leq 2}\Delta_qc_p=c_q$ to obtain $II_{12},$ which vanishes since $\mbox{div} \, u_{\leq q-2}=0$. One can see that $II_{11}$ enjoys the same estimates that $I_{11}$ does. Splitting the summation by $Q_u(t)$ yields \begin{equation}\notag \begin{split} |II_{11}|\leq &\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{\leq{p-2}}\cdot\nabla] c_pc_q\right|\, \mathrm {d}x\\ \leq & \sum_{p\leq Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{\leq{p-2}}\cdot\nabla] c_pc_q\right|\, \mathrm {d}x\\ &+\sum_{p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{\leq Q_u}\cdot\nabla] c_pc_q\right|\, \mathrm {d}x\\ &+\sum_{p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\left|[\Delta_q, u_{(Q_u,p-2]}\cdot\nabla] c_pc_q\right|\, \mathrm {d}x\\ =: &II_{111}+II_{112}+II_{113}.
\end{split} \end{equation} We estimate the first two of the above three terms by H\"older's inequality, Lemma \ref{comtr} and the definition of $f(t).$ \begin{equation}\notag \begin{split} II_{111}\leq &\sum_{1\leq p\leq Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|\nabla u_{\leq p-2}\|_\infty\|c_p\|_2\|c_q\|_2\\ \lesssim & Q_uf(t) \sum_{1\leq p\leq Q_u+2}\lambda_p^{s+1}\|c_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{s+1}\|c_q\|_2\\ \lesssim &Q_u f(t) \sum_{q\geq -1}\lambda_q^{2s+2}\|c_q\|_2^2, \end{split} \end{equation} \begin{equation}\notag \begin{split} II_{112}\leq &\sum_{ p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|\nabla u_{\leq Q_u}\|_\infty\|c_p\|_2\|c_q\|_2\\ \lesssim &Q_u f(t) \sum_{ p> Q_u+2}\lambda_p^{s+1}\|c_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{s+1}\|c_q\|_2\\ \lesssim &Q_u f(t) \sum_{q> Q_u}\lambda_q^{2s+2}\|c_q\|_2^2. \end{split} \end{equation} And with the help of H\"older's inequality, Lemma \ref{comtr} and the definition of $\Lambda_u(t)$, we estimate $II_{113}$ as \begin{equation}\notag \begin{split} II_{113} \leq &\sum_{ p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|[\Delta_q,u_{(Q_u,p-2]}\cdot\nabla]c_p\|_2\|c_q\|_2\\ \leq &\sum_{ p> Q_u+2}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|c_q\|_2\sum_{Q_u<p'\leq p-2}\lambda_{p'}\|u_{p'}\|_\infty\|c_p\|_2\\ \leq& \sum_{ p> Q_u+2}\|c_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|c_q\|_2\sum_{Q_u<p'\leq p-2}\lambda_{p'}\|u_{p'}\|_\infty\\ \lesssim & C_0 \sum_{ p> Q_u+2}\|c_p\|_2\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\|c_q\|_2\sum_{Q_u<p'\leq p-2}\lambda_{p'}^{2}\\ \lesssim & C_0 \sum_{ p> Q_u+2}\lambda_p^{2s+4}\|c_p\|_2^2\sum_{Q_u<p'\leq p-2}\lambda_{p'-p}^{2}\\ \lesssim & C_0 \sum_{ p> Q_u+2}\lambda_p^{2s+4}\|c_p\|_2^2.
\end{split} \end{equation} $II_{13}$ can be estimated in the same fashion as $I_{13}.$ Splitting the sum by the wavenumber $Q_u(t),$ we have \begin{equation}\notag \begin{split} |II_{13}|\leq & \sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\big|(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qc_p c_q\big|\, \mathrm {d}x\\ \leq & \sum_{-1\leq q\leq Q_u}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\big|(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qc_p c_q\big|\, \mathrm {d}x\\ & +\sum_{q>Q_u}\sum_{|q-p|\leq 2}\lambda_q^{2s+2}\int_{\mathbf{R}^3}\big|(u_{\leq{p-2}}-u_{\leq{q-2}})\cdot\nabla\Delta_qc_p c_q\big|\, \mathrm {d}x\\ =:&II_{131}+II_{132}. \end{split} \end{equation} Using H\"older's inequality along with the definitions of $f(t)$ and $\Lambda_u(t)$, we estimate these terms as \begin{equation}\notag \begin{split} |II_{131}|\leq & \sum_{-1\leq q\leq Q_u}\lambda_q^{2s+3} \|c_q\|_2\sum_{|q-p|\leq 2}\|u_{\leq{p-2}}-u_{\leq{q-2}}\|_\infty\|c_p\|_2\\ \lesssim & f(t)\sum_{-1\leq q\leq Q_u}\lambda_q^{2s+2}\|c_q\|_2 \sum_{|q-p|\leq 2}\|c_p\|_2\\ \lesssim & f(t)\sum_{-1\leq q\leq Q_u}\lambda_q^{2s+2} \|c_q\|_2^2,\\ \end{split} \end{equation} \begin{equation}\notag \begin{split} |II_{132}|\leq & \sum_{q> Q_u}\lambda_q^{2s+3} \|c_q\|_2\sum_{|q-p|\leq 2}\|u_{\leq{p-2}}-u_{\leq{q-2}}\|_\infty\|c_p\|_2\\ \lesssim & C_0 \sum_{q> Q_u}\lambda_q^{2s+3}\|c_q\|_2\sum_{|q-p|\leq 2}\lambda_p\|c_p\|_2\\ \lesssim & C_0 \sum_{q> Q_u}\lambda_q^{2s+4}\|c_q\|_2^2. \end{split} \end{equation} To estimate $II_2$, we make use of the wavenumber $Q_c(t)$ instead of $Q_u(t)$.
Splitting the summation by $Q_c(t)$ yields \begin{equation}\notag \begin{split} II_2=& \sum_{q \geq -1}\sum_{|q-p| \leq 2}\lambda^{2s+2}_q \int_{\mathbb{R}^3}\Delta_q (u_p\cdot \nabla c_{\leq p-2})c_q\mathrm{d}x\\ =& \sum_{q \geq -1}\sum_{|q-p| \leq 2, p\leq Q_c+2}\lambda^{2s+2}_q \int_{\mathbb{R}^3}\Delta_q (u_p\cdot \nabla c_{\leq p-2})c_q\mathrm{d}x\\ &+ \sum_{q \geq -1}\sum_{|q-p| \leq 2, p> Q_c+2}\lambda^{2s+2}_q \int_{\mathbb{R}^3}\Delta_q (u_p\cdot \nabla c_{\leq Q_c})c_q\mathrm{d}x\\ &+ \sum_{q \geq -1}\sum_{|q-p| \leq 2, p> Q_c+2}\lambda^{2s+2}_q \int_{\mathbb{R}^3}\Delta_q (u_p\cdot \nabla c_{(Q_c, p-2]})c_q\mathrm{d}x\\ =: &II_{21}+II_{22}+II_{23}. \end{split} \end{equation} It follows from H\"older's and Young's inequalities that \begin{equation}\notag \begin{split} |II_{21}|\leq & \sum_{q \geq -1}\sum_{|q-p| \leq 2,p\leq Q_c+2}\lambda^{2s+2}_q \int_{\mathbb{R}^3}\big|\Delta_q (u_p\cdot \nabla c_{\leq p-2})c_q\big|\mathrm{d}x\\ \leq & \sum_{q \geq -1}\lambda^{2s+2}_q \|c_q\|_2\sum_{|q-p| \leq 2,p\leq Q_c+2}\| u_p\|_2\| \nabla c_{\leq p-2}\|_\infty\\ \leq &\| \nabla c_{\leq Q_c}\|_\infty \sum_{q \geq -1}\lambda^{s+1}_q \|c_q\|_2 \sum_{|q-p| \leq 2,p\leq Q_c+2}\lambda_p^{s+2}\| u_p\|_2\lambda_p^{-1}\\ \leq & \frac{1}{32} \sum_{q \geq -1}\lambda_q^{2s+4}\|u_q\|_2^2+Cf(t)\sum_{q \geq -1}\lambda_q^{2s+2}\|c_q\|_2^2. 
\end{split} \end{equation} While the term $II_{22}$ can be treated the same way as $II_{21}$, the third term is estimated as follows by utilizing the definition of $\Lambda_c(t)$ \begin{equation}\notag \begin{split} |II_{23}|\leq &\sum_{q \geq -1}\sum_{|q-p| \leq 2, p> Q_c+2}\lambda^{2s+2}_q \|u_p\|_2\|c_q\|_2 \|\nabla c_{(Q_c, p-2]}\|_\infty\\ \lesssim &\sum_{q >Q_c}\lambda^{2s+2}_q \|u_q\|_2\|c_q\|_2 \|\nabla c_{(Q_c, q]}\|_\infty\\ \lesssim &\sum_{q >Q_c}\lambda^{2s+2}_q \|u_q\|_2\|c_q\|_2 \sum_{Q_c<p\leq q}\lambda_p^{1+\frac3r}\|c_p\|_r\\ \lesssim & C_0\sum_{q >Q_c}\lambda^{2s+2}_q \|u_q\|_2\|c_q\|_2 \sum_{Q_c<p\leq q}\lambda_p\\ \lesssim & C_0\sum_{q >Q_c}\lambda^{s+2}_q \|u_q\|_2\lambda^{s+1}_q\|c_q\|_2 \sum_{Q_c<p\leq q}\lambda_{p-q}\\ \lesssim & C_0\sum_{q >Q_c}\lambda^{2s+4}_q \|u_q\|_2^2+C_0\sum_{q >Q_c}\lambda^{2s+2}_q \|c_q\|_2^2. \end{split} \end{equation} As for $II_3,$ we first integrate by parts, then split by the wavenumber $Q_u(t)$, \begin{equation}\notag \begin{split} |II_{3}|\leq&\left|\sum_{p\geq -1}\sum_{-1\leq q\leq p+2}\lambda_q^{2s+2}\int_{\mathbb R^3}\Delta_q(u_p\otimes\tilde c_p)\nabla c_q \, \mathrm{d}x\right|\\ \leq&\sum_{p>Q_u}\sum_{-1\leq q\leq p+2}\lambda_q^{2s+2}\int_{\mathbb R^3}\left|\Delta_q(u_p\otimes\tilde c_p)\nabla c_q\right| \, \mathrm{d}x\\ &+\sum_{-1\leq p\leq Q_u}\sum_{-1\leq q\leq p+2}\lambda_q^{2s+2}\int_{\mathbb R^3}\left|\Delta_q(u_p\otimes\tilde c_p)\nabla c_q\right| \, \mathrm{d}x\\ =:& II_{31}+II_{32}.
\end{split} \end{equation} Proceeding with the help of H\"older's, Young's and Jensen's inequalities, we have for $s>-\frac12$ \begin{equation}\notag \begin{split} |II_{31}| \lesssim &\sum_{p> Q_u}\|u_p\|_\infty\|c_p\|_2\sum_{-1\leq q\leq p+2}\lambda_q^{2s+3}\|c_q\|_2\\ \lesssim & C_0 \sum_{p> Q_u}\lambda_p\|c_p\|_2\sum_{-1\leq q\leq p+2}\lambda_q^{2s+3}\|c_q\|_2\\ \lesssim & C_0 \sum_{p> Q_u}\lambda_p^{s+2}\|c_p\|_2\sum_{-1\leq q\leq p+2}\lambda_q^{s+2}\|c_q\|_2\lambda_{q-p}^{s+1}\\ \lesssim & C_0 \sum_{p> Q_u}\left(\lambda_p^{2s+4}\|c_p\|_2^2+\left(\sum_{-1\leq q\leq p+2}\lambda_q^{s+2}\|c_q\|_2\lambda_{q-p}^{s+1}\right)^2\right)\\ \lesssim & C_0 \sum_{q\geq -1}\lambda_q^{2s+4}\|c_q\|_2^2, \end{split} \end{equation} and \begin{equation}\notag \begin{split} |II_{32}| \lesssim &\sum_{-1\leq p\leq Q_u}\|u_p\|_\infty\|c_p\|_2\sum_{-1\leq q\leq p+2}\lambda_q^{2s+3}\|c_q\|_2\\ \lesssim &f(t)\sum_{-1\leq p\leq Q_u}\lambda_p^{-1}\|c_p\|_2\sum_{-1\leq q\leq p+2}\lambda_q^{2s+3}\|c_q\|_2\\ \lesssim &f(t)\sum_{-1\leq p\leq Q_u}\lambda_p^{s+1}\|c_p\|_2\sum_{-1\leq q\leq p+2}\lambda_q^{s+1}\|c_q\|_2\lambda_{q-p}^{s+2}\\ \lesssim &f(t)\sum_{ q \geq -1}\lambda_q^{2s+2}\|c_q\|_2^2. \end{split} \end{equation} We combine the above estimates to conclude that \begin{equation}\notag |II|\leq \left(CC_0+\frac1{32}\right)\sum_{q> -1}\left(\lambda_q^{2s+4}\|c_q\|_2^2+\lambda_q^{2s+4}\|u_q\|_2^2\right)+CQ_uf(t)\sum_{q\geq-1}\lambda_q^{2s+2}\|c_q\|_2^2. \end{equation} \par{\raggedleft$\Box$\par} \subsection{Estimate of $III$} \begin{Lemma}\label{est-3} Let $s<0$. We have \begin{equation}\notag \begin{split} |III|\lesssim & C_0 \sum_{q\geq-1}\lambda_q^{2s+2}\|n_q\|_2^2+Q_uf(t)\sum_{q\geq-1}\lambda_q^{2s}\|n_q\|_2^2. \end{split} \end{equation} \end{Lemma} \textbf{Proof:\ } We start with Bony's paraproduct decomposition of $III$.
\begin{equation}\notag \begin{split} III=&\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s}\int_{\mathbf{R}^3}\Delta_q(u_{\leq p-2}\cdot\nabla n_p)n_q\, \mathrm{d}x\\ &+\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s}\int_{\mathbf{R}^3}\Delta_q(u_{p}\cdot\nabla n_{\leq{p-2}})n_q\, \mathrm{d}x\\ &+\sum_{q\geq -1}\sum_{p\geq q-2}\lambda_q^{2s}\int_{\mathbf{R}^3}\Delta_q(u_p\cdot\nabla\tilde n_p)n_q\, \mathrm{d}x\\ =:&III_{1}+III_{2}+III_{3}. \end{split} \end{equation} Since we can estimate $III_1$ and $III_3$ in the same manner as $II_1$ and $II_3,$ respectively, the details of the computation are omitted for simplicity. We claim \begin{equation}\notag |III_1|+|III_3|\lesssim C_0 \sum_{q\geq-1}\lambda_q^{2s+2}\|n_q\|_2^2+Q_uf(t)\sum_{q\geq-1}\lambda_q^{2s}\|n_q\|_2^2. \end{equation} We are then left with the estimation of $III_2$ to complete the proof. Splitting the term by the wavenumber $Q_u(t),$ we have \begin{equation}\notag \begin{split} III_{2}=& \sum_{ -1 \leq p \leq Q_u}\sum_{|q-p| \leq 2}\lambda^{2s}_q \int_{\mathbb{R}^3}\Delta_q (u_p\cdot \nabla n_{\leq p-2})n_q\mathrm{d}x\\ &+\sum_{ p > Q_u}\sum_{|q-p| \leq 2}\lambda^{2s}_q \int_{\mathbb{R}^3}\Delta_q (u_p\cdot \nabla n_{\leq p-2})n_q\mathrm{d}x\\ =:& III_{21}+III_{22}.
\end{split} \end{equation} Applying H\"older's, Young's and Jensen's inequalities, we have for $s<1$ \begin{equation}\notag \begin{split} |III_{21}| \leq & \sum_{ -1 \leq p \leq Q_u}\|u_p\|_\infty \sum_{|q-p| \leq 2}\lambda^{2s}_q \|n_q\|_2 \sum_{p' \leq p-2}\lambda_{p'}\|n_{p'}\|_2 \\ \leq & \sum_{ -1 \leq p \leq Q_u}\lambda_p\|u_p\|_\infty \sum_{|q-p| \leq 2}\lambda^{2s-1}_q \|n_q\|_2 \sum_{p' \leq p-2}\lambda_{p'}\|n_{p'}\|_2 \\ \leq & \sum_{ -1 \leq p \leq Q_u}\lambda_p\|u_p\|_\infty \sum_{|q-p| \leq 2}\lambda^{s}_q \|n_q\|_2 \sum_{p' \leq p-2}\lambda_{p'}^{s}\|n_{p'}\|_2\lambda_{q-p'}^{s-1} \\ \lesssim & f(t)\sum_{-1 \leq p \leq Q_u}\Big(\lambda_p^{2s}\|n_q\|_2^2 + \big(\sum_{p' \leq p-2}\lambda_{p'}^{s}\|n_{p'}\|_2\lambda_{p-p'}^{s-1}\big)^2\Big)\\ \lesssim & f(t) \sum_{q \geq -1}\lambda_q^{2s}\|n_q\|_2^2, \end{split} \end{equation} and for $s<0$ \begin{equation}\notag \begin{split} |III_{22}| \leq & \sum_{p > Q_u}\|u_p\|_\infty \sum_{|q-p| \leq 2}\lambda^{2s}_q \|n_q\|_2 \sum_{p' \leq p-2}\lambda_{p'}\|n_{p'}\|_2 \\ \leq & \sum_{p > Q_u}\lambda_p^{-1}\|u_p\|_\infty \sum_{|q-p| \leq 2}\lambda^{2s+1}_q \|n_q\|_2 \sum_{p' \leq p-2}\lambda_{p'}\|n_{p'}\|_2 \\ \leq & C_0 \sum_{q>Q_u-2}\lambda^{s+1}_q \|n_q\|_2 \sum_{p' \leq q}\lambda_{p'}^{s+1}\|n_{p'}\|_2\lambda_{q-p'}^{s} \\ \lesssim & C_0 \sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2. \end{split} \end{equation} \par{\raggedleft$\Box$\par} \subsection{Estimate of $IV$} Here we use H\"older's and Young's inequalities to obtain \begin{equation}\label{estnn} \begin{split} |IV| \leq & \sum_{q \geq -1}\lambda_q^{s+1}\|n_q\|_2\lambda_q^{s+1}\|u_q\|_2\\ \leq & \frac1{32} \sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+ C \sum_{q \geq -1}\lambda_q^{2s+2}\|u_q\|_2^2 \end{split} \end{equation} for an absolute constant $C$. 
\subsection{Estimate of $V$} \begin{Lemma}\label{ppsnc} If $r> 3$ and $s<0$, we have \begin{equation}\notag \begin{split} |V| \leq &\left(\frac1{32}+CC_0\right)\sum_{q\geq -1}\left(\lambda_q^{2s+4}\|c_q\|_2^2+\lambda_q^{2s+2}\|n_q\|_2^2\right)\\ &+C(f(t)+1)\sum_{q\geq-1}\left(\lambda_q^{2s+2}\|c_q\|_2^2+\lambda_q^{2s}\|n_q\|_2^2\right). \end{split} \end{equation} \end{Lemma} \textbf{Proof:\ } Bony's paraproduct decomposition leads to \begin{equation}\notag \begin{split} V=& \sum_{q \geq -1} \sum_{|q-p| \leq 2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_{\leq p-2}c_p)c_q\mathrm{d}x\\ &+ \sum_{q \geq -1} \sum_{|q-p| \leq 2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_pc_{\leq p-2})c_q\mathrm{d}x\\ &+ \sum_{q \geq -1} \sum_{p \geq q-2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(\tilde n_pc_p)c_q\mathrm{d}x\\ =: & V_1+V_2+V_3. \end{split} \end{equation} We further split $V_1$ by the wavenumber $\Lambda_c(t)$ \begin{equation}\notag \begin{split} V_1=& \sum_{q \leq Q_c} \sum_{|q-p| \leq 2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_{\leq p-2}c_p)c_q\mathrm{d}x\\ & +\sum_{q >Q_c} \sum_{|q-p| \leq 2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_{\leq p-2}c_p)c_q\mathrm{d}x\\ =:&V_{11}+V_{12}. 
\end{split} \end{equation} To estimate $V_{11}$, H\"older's inequality gives \begin{equation}\notag \begin{split} |V_{11}| \leq & \sum_{q \leq Q_c} \lambda^{2s+2}_q\|c_q\|_\infty\sum_{|q-p| \leq 2}\|c_p\|_2\sum_{p' \leq p-2}\|n_{p'}\|_2 \\ \lesssim & \sum_{q \leq Q_c}\lambda_q\|c_q\|_\infty\sum_{|q-p| \leq 2}\lambda_p^{s+2}\|c_p\|_2\sum_{p' \leq p-2}\lambda_{p'}^{s} \|n_{p'}\|_2\lambda_{p-p'}^{s-1}\lambda_{p'}^{-1}.\\ \end{split} \end{equation} Since $s-1<0$ for $s<0$, we apply Young's and Jensen's inequalities to obtain \begin{equation}\notag \begin{split} |V_{11}| \leq & \frac1{32} \sum_{q \leq Q_c+2}\lambda_q^{2s+4}\|c_q\|_2^2+C\sum_{q \leq Q_c+2}\lambda_q^{2}\|c_q\|_\infty^2\left(\sum_{p' \leq q}\lambda_{p'}^{s}\|n_{p'}\|_2\lambda_{q-p'}^{s-1}\right)^2\\ \leq & \frac1{32} \sum_{q \leq Q_c+2}\lambda_q^{2s+4}\|c_q\|_2^2+Cf(t)\sum_{q \leq Q_c+2}\sum_{p' \leq q}\lambda_{p'}^{2s}\|n_{p'}\|_2^2\lambda_{q-p'}^{s-1}\\ \leq & \frac1{32} \sum_{q \leq Q_c+2} \lambda_q^{2s+4}\|c_q\|_2^2+Cf(t)\sum_{q \geq -1}\lambda_q^{2s}\|n_q\|_2^2. \end{split} \end{equation} Regarding $V_{12}$, we have \begin{equation}\notag \begin{split} |V_{12}| \leq & \sum_{q > Q_c} \lambda^{2s+2}_q\|c_q\|_\infty\sum_{|q-p| \leq 2}\|c_p\|_2\sum_{p' \leq p-2}\|n_{p'}\|_2 \\ \lesssim & \sum_{q > Q_c} \lambda^{2s+2+\frac3r}_q\|c_q\|_r\sum_{|q-p| \leq 2}\|c_p\|_2\sum_{p' \leq p-2}\|n_{p'}\|_2 \\ \lesssim &C_0 \sum_{q > Q_c-2} \lambda^{2s+2}_q\|c_q\|_2\sum_{p' \leq q}\|n_{p'}\|_2 \\ \lesssim &C_0 \sum_{q > Q_c-2} \lambda^{s+2}_q\|c_q\|_2\sum_{p' \leq q}\lambda_{p'}^s\|n_{p'}\|_2 \lambda_{q-p'}^{s}. \end{split} \end{equation} Again, since $s<0$, we can apply Young's and Jensen's inequalities \begin{equation}\notag \begin{split} |V_{12}| \lesssim &C_0 \sum_{q > Q_c-2} \lambda^{2s+4}_q\|c_q\|_2^2+\sum_{q>Q_c}\left(\sum_{p' \leq q}\lambda_{p'}^s\|n_{p'}\|_2 \lambda_{q-p'}^{s}\right)^2\\ \lesssim &C_0 \sum_{q > Q_c-2} \lambda^{2s+4}_q\|c_q\|_2^2+\sum_{q\geq-1}\lambda_{q}^{2s}\|n_{q}\|_2^2.
\end{split} \end{equation} We also split $V_2$ by the wavenumber $\Lambda_c(t)$ as \begin{equation}\notag \begin{split} V_2 =&\sum_{q \geq -1} \sum_{|q-p| \leq 2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_pc_{\leq p-2})c_q\mathrm{d}x\\ =&\sum_{q \geq -1} \sum_{|q-p| \leq 2,p\leq Q_c+2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_pc_{\leq p-2})c_q\mathrm{d}x\\ &+\sum_{q \geq -1} \sum_{|q-p| \leq 2,p> Q_c+2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_pc_{\leq Q_c})c_q\mathrm{d}x\\ &+\sum_{q \geq -1} \sum_{|q-p| \leq 2,p> Q_c+2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(n_pc_{(Q_c,p-2]})c_q\mathrm{d}x\\ =:&V_{21}+V_{22}+V_{23}. \end{split} \end{equation} H\"older's and Young's inequalities yield the following estimate on $V_{21}$, \begin{equation}\notag \begin{split} |V_{21}| \lesssim & \sum_{q \geq -1} \lambda^{2s+2}_q\|c_q\|_2 \sum_{|q-p| \leq 2}\|n_p\|_2\sum_{p' \leq p-2\leq Q_c}\|c_{p'}\|_\infty \\ \lesssim & \sum_{q \geq -1}\lambda_q^{s+1}\|c_q\|_2 \sum_{|q-p| \leq 2}\lambda_p^{s+1} \|n_p\|_2\sum_{p' \leq p-2\leq Q_c}\lambda_{p'}\|c_{p'}\|_\infty\lambda_{p'}^{-1}\\ \leq & \frac1{32}\sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+C\sum_{p\geq-1}\lambda_p^{2s+2}\|c_p\|_2^2\left(\sum_{p'\leq p-2\leq Q_c}\lambda_{p'}\|c_{p'}\|_\infty\lambda_{p'}^{-1}\right)^2\\ \leq & \frac1{32}\sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+C\sum_{p\geq-1}\lambda_p^{2s+2}\|c_p\|_2^2\sum_{p'\leq p-2\leq Q_c}\lambda_{p'}^2\|c_{p'}\|_\infty^2\lambda_{p'}^{-1}\\ \leq & \frac1{32}\sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+Cf(t)\sum_{p\geq-1}\lambda_p^{2s+2}\|c_p\|_2^2\sum_{p'\leq p-2\leq Q_c}\lambda_{p'}^{-1}\\ \leq & \frac1{32}\sum_{q \geq -1} \lambda_q^{2s+2}\|n_q\|_2^2+Cf(t)\sum_{q \geq -1}\lambda_q^{2s+2}\|c_q\|_2^2. \end{split} \end{equation} Note that $V_{22}$ can be estimated in the same way. 
Since $r>3$ ensures $\frac3r< 1$, the last term $V_{23}$ is estimated as follows, \begin{equation}\notag \begin{split} |V_{23}|\leq & \sum_{q \geq -1} \sum_{|q-p| \leq 2,p> Q_c+2}\lambda^{2s+2}_q \|n_p\|_2\|c_q\|_{\frac{2r}{r-2}}\|c_{(Q_c,p-2]}\|_r\\ \lesssim & \sum_{q \geq -1} \lambda^{2s+2+\frac3r}_q \|n_q\|_2\|c_q\|_2\sum_{q> Q_c,Q_c<p'\leq q}\|c_{p'}\|_r\\ \lesssim & C_0\sum_{q \geq -1} \lambda^{2s+2+\frac3r}_q \|n_q\|_2\|c_q\|_2\sum_{q> Q_c,Q_c<p'\leq q}\lambda_{p'}^{-\frac3r}\\ \lesssim & C_0\sum_{q \geq -1} \lambda^{s+1}_q \|n_q\|_2\lambda^{s+2}_q\|c_q\|_2\lambda_q^{\frac3r-1}\sum_{q> Q_c,Q_c<p'\leq q}\lambda_{p'}^{-\frac3r}\\ \lesssim & C_0\sum_{q \geq -1} \lambda^{s+1}_q \|n_q\|_2\lambda^{s+2}_q\|c_q\|_2\\ \lesssim & C_0\sum_{q \geq -1} \lambda^{2s+2}_q \|n_q\|_2^2+C_0\sum_{q \geq -1}\lambda^{2s+4}_q\|c_q\|_2^2.\\ \end{split} \end{equation} The term $V_3$ can be dealt with in a similar way to $V_1$. We first split the sum, \begin{equation}\notag \begin{split} V_3=&\sum_{q \leq Q_c} \sum_{p \geq q-2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(\tilde n_pc_p)c_q\mathrm{d}x\\ &+\sum_{q > Q_c} \sum_{p \geq q-2}\lambda^{2s+2}_q\int_{\mathbb{R}^3}\Delta_q(\tilde n_pc_p)c_q\mathrm{d}x.\\ \end{split} \end{equation} Without giving details, we claim that \begin{equation}\notag |V_3|\leq \left(\frac1{32}+CC_0\right)\sum_{q\geq -1}\lambda_q^{2s+4}\|c_q\|_2^2+C(f(t)+1)\sum_{q\geq-1}\lambda_q^{2s}\|n_q\|_2^2. \end{equation} \par{\raggedleft$\Box$\par} \subsection{Estimate of $VI$} \begin{Lemma}\label{ppscc} Let $-\frac12<s<0$ and $3<r<\frac 3{1+s}$. We have \begin{equation}\notag |VI| \leq \left(\frac1{32}+CC_0\right)\sum_{q\geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+Cf(t)\sum_{q\geq -1}\lambda_q^{2s}\|n_q\|_2^2.
\end{equation} \end{Lemma} \textbf{Proof:\ } Utilizing Bony's paraproduct, $VI$ can be decomposed as \begin{equation}\notag \begin{split} VI= &-\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(n_{\leq p-2}\nabla c_p)\nabla n_q\, \mathrm{d}x\\ &-\sum_{q\geq -1}\sum_{|q-p|\leq 2}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(n_{p}\nabla c_{\leq{p-2}})\nabla n_q\, \mathrm{d}x\\ &-\sum_{q\geq -1}\sum_{p\geq q-2}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(\tilde n_p\nabla c_p)\nabla n_q\, \mathrm{d}x\\ =:& VI_{1}+VI_{2}+VI_{3}. \end{split} \end{equation} We continue to decompose $VI_1$ by $Q_c$, \begin{equation}\notag \begin{split} VI_1=&-\sum_{q\geq -1}\sum_{|q-p|\leq 2, p\leq Q_c}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(n_{\leq p-2}\nabla c_p)\nabla n_q\, \mathrm{d}x\\ &-\sum_{q\geq -1}\sum_{|q-p|\leq 2, p> Q_c}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(n_{\leq p-2}\nabla c_p)\nabla n_q\, \mathrm{d}x\\ =:&VI_{11}+VI_{12}. \end{split} \end{equation} To estimate $VI_{11}$, we apply H\"older's inequality first, \begin{equation}\notag \begin{split} |VI_{11}| \lesssim& \sum_{q \geq -1}\lambda_q^{2s+1}\|n_q\|_2 \sum_{|q-p| \leq 2,p\leq Q_c}\|\nabla c_p\|_\infty \sum_{p' \leq p-2}\|n_{p'}\|_2\\ \lesssim& \sum_{q \geq -1}\lambda_q^{s+1}\|n_q\|_2 \sum_{|q-p| \leq 2,p\leq Q_c}\|\nabla c_p\|_\infty\sum_{p' \leq p-2}\lambda_{p'}^{s}\|n_{p'}\|_2\lambda_{p-p'}^{s}.\\ \end{split} \end{equation} We then apply Young's and Jensen's inequalities, \begin{equation}\notag \begin{split} |VI_{11}| \leq & \frac1{32} \sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+ Cf(t)\sum_{q \geq -1}\big(\sum_{p' \leq q}\lambda_{p'}^{s}\|n_{p'}\|_2\lambda_{q-p'}^{s}\big)^2\\ \leq & \frac1{32} \sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+Cf(t)\sum_{q \geq -1}\lambda_q^{2s}\|n_q\|_2^2, \end{split} \end{equation} where we require $s <0$ to ensure $\sum_{p'\leq p-2} \lambda^{s}_{p-p'} <\infty$.
Again, applying H\"older's inequality first to $VI_{12}$ yields \begin{equation}\notag \begin{split} |VI_{12}|\leq &\sum_{q\geq -1}\sum_{|q-p|\leq 2, p> Q_c}\lambda_q^{2s}\|\nabla n_q\|_2\|\nabla c_p\|_r\|n_{\leq p-2}\|_{\frac{2r}{r-2}}\\ \lesssim& \sum_{q\geq -1}\sum_{|q-p|\leq 2, p> Q_c}\lambda_q^{2s+2-\frac3r}\|n_q\|_2\lambda_p^{\frac3r}\|c_p\|_r\sum_{p'\leq p-2}\lambda_{p'}^{\frac3r}\|n_{p'}\|_2\\ \lesssim& C_0\sum_{q\geq -1}\lambda_q^{2s+2-\frac3r}\|n_q\|_2\sum_{p'\leq q}\lambda_{p'}^{\frac3r}\|n_{p'}\|_2\\ \lesssim& C_0\sum_{q\geq -1}\lambda_q^{s+1}\|n_q\|_2\sum_{p'\leq q}\lambda_{p'}^{s+1}\|n_{p'}\|_2\lambda_{q-p'}^{s+1-\frac3r}.\\ \end{split} \end{equation} Following Young's and Jensen's inequalities, we obtain for $s+1<\frac3r$ \begin{equation}\notag \begin{split} |VI_{12}|\leq &C C_0\sum_{q\geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+CC_0\sum_{q\geq-1}\left(\sum_{p'\leq q}\lambda_{p'}^{s+1}\|n_{p'}\|_2\lambda_{q-p'}^{s+1-\frac3r}\right)^2\\ \leq &C C_0\sum_{q\geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+CC_0\sum_{q\geq-1}\sum_{p'\leq q}\lambda_{p'}^{2s+2}\|n_{p'}\|_2^2\lambda_{q-p'}^{s+1-\frac3r}\\ \leq &C C_0\sum_{q\geq -1}\lambda_q^{2s+2}\|n_q\|_2^2. \end{split} \end{equation} The first step of dealing with $VI_2$ is to split it by $Q_c$ as well, \begin{equation}\notag \begin{split} VI_2=&-\sum_{q\geq -1}\sum_{|q-p|\leq 2,p\leq Q_c+2}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(n_{p}\nabla c_{\leq{p-2}})\nabla n_q\, \mathrm{d}x\\ &-\sum_{q\geq -1}\sum_{|q-p|\leq 2,p> Q_c+2}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(n_{p}\nabla c_{\leq Q_c})\nabla n_q\, \mathrm{d}x\\ &-\sum_{q\geq -1}\sum_{|q-p|\leq 2,p> Q_c+2}\lambda_q^{2s}\int_{\mathbb{R}^3}\Delta_q(n_{p}\nabla c_{(Q_c,p-2]})\nabla n_q\, \mathrm{d}x\\ =:&VI_{21}+VI_{22}+VI_{23}.
\end{split} \end{equation} Using H\"older's and Young's inequalities, and the fact that $\lambda_{q-p}^{s} \sim C$ since $|q-p|\leq 2$, we have \begin{equation}\notag \begin{split} |VI_{21}| \leq& \sum_{q \geq -1}\lambda_q^{2s+1}\|n_q\|_2 \sum_{|q-p| \leq 2,p\leq Q_c+2}\| n_p\|_2 \|\nabla c_{\leq p-2}\|_\infty\\ \leq& \sum_{q \geq -1}\lambda_q^{s+1}\|n_q\|_2 \sum_{|q-p| \leq 2,p\leq Q_c+2}\lambda_p^{s}\| n_p\|_2 \|\nabla c_{\leq p-2}\|_\infty\lambda_{q-p}^{s}\\ \leq & \frac1{32} \sum_{q \geq -1}\lambda_q^{2s+2}\|n_q\|_2^2+Cf(t)\sum_{q \geq -1}\lambda_q^{2s}\|n_q\|_2^2. \end{split} \end{equation} We skip the computation for $VI_{22}$, which can be estimated in the same way as $VI_{21}$. We proceed to estimate $VI_{23}$; provided $r>3$, \begin{equation}\notag \begin{split} |VI_{23}|\leq &\sum_{q\geq -1}\sum_{|q-p|\leq 2,p> Q_c+2}\lambda_q^{2s}\|n_p\|_2\|\nabla c_{(Q_c,p-2]}\|_{r}\|\nabla n_q\|_{\frac{2r}{r-2}}\\ \lesssim &\sum_{q\geq -1}\lambda_q^{2s+1+\frac3r}\|n_q\|_2^2\sum_{ Q_c<p'\leq q}\lambda_{p'}\|c_{p'}\|_{r}\\ \lesssim &C_0\sum_{q\geq -1}\lambda_q^{2s+1+\frac3r}\|n_q\|_2^2\sum_{ Q_c<p'\leq q}\lambda_{p'}^{1-\frac3r}\\ \lesssim &C_0\sum_{q\geq -1}\lambda_q^{2s+2}\|n_q\|_2^2\sum_{ Q_c<p'\leq q}\lambda_{p'-q}^{1-\frac3r}\\ \lesssim &C_0\sum_{q\geq -1}\lambda_q^{2s+2}\|n_q\|_2^2. \end{split} \end{equation} Finally, we notice that $VI_3$ can be handled in an analogous way to $VI_1$; thus the details of the computation are omitted. This completes the proof of the lemma.
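We remark that the convergence of the geometric series appearing in the estimates of $VI_{12}$ and $VI_{23}$ is exactly what the hypothesis $3<r<\frac{3}{1+s}$ of the lemma provides:
\begin{equation}\notag
r<\frac{3}{1+s}\ \Longrightarrow\ s+1-\frac3r<0, \qquad r>3\ \Longrightarrow\ 1-\frac3r>0,
\end{equation}
so that the sums $\sum_{p'\leq q}\lambda_{q-p'}^{s+1-\frac3r}$ and $\sum_{Q_c<p'\leq q}\lambda_{p'-q}^{1-\frac3r}$ are bounded uniformly in $q$.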
\par{\raggedleft$\Box$\par} Summing the inequalities in Lemmas \ref{ppsuu}--\ref{ppscc} and (\ref{eqtnn})--(\ref{estnn}) produces the following Gr\"onwall-type inequality, \begin{equation}\notag \begin{split} &\frac{\mathrm{d}}{\mathrm{d}t}\sum_{q \geq -1}(\lambda_q^{2s}\|n_q\|_2^2+\lambda_q^{2s+2}\|c_q\|_2^2+\lambda_q^{2s+2}\|u_q\|_2^2)\\ \leq & (-2+CC_0) \sum_{q\geq -1}(\lambda_q^{2s+2}\|n_q\|_2^2+\lambda_q^{2s+4}\|c_q\|_2^2+\lambda_q^{2s+4}\|u_q\|_2^2)\\ &+CQ_uf(t)\sum_{q\geq -1}(\lambda_q^{2s}\|n_q\|_2^2+\lambda_q^{2s+2}\|c_q\|_2^2+\lambda_q^{2s+2}\|u_q\|_2^2). \end{split} \end{equation} We now choose $C_0$ small enough that $CC_0<\frac14$. On the other hand, combining the definition of $\Lambda_u(t)$ and Bernstein's inequality, one can deduce \[1\leq C_0^{-1}\Lambda_u^{-1}\|u_{Q_u}\|_\infty\lesssim C_0^{-1}\Lambda_u^{\frac12}\|u_{Q_u}\|_2\lesssim C_0^{-1}\Lambda_u^{-\frac12-s}\|u\|_{\dot H^{s+1}},\] which implies that for $s>-\frac12$, \[Q_u=\log \Lambda_u\lesssim 1+\log {\|u\|_{\dot H^{s+1}}}.\] Hence the energy inequality becomes \begin{equation}\notag \begin{split} &\frac{\mathrm{d}}{\mathrm{d}t}\sum_{q \geq -1}(\lambda_q^{2s}\|n_q\|_2^2+\lambda_q^{2s+2}\|c_q\|_2^2+\lambda_q^{2s+2}\|u_q\|_2^2)\\ \leq &-\sum_{q\geq -1}(\lambda_q^{2s+2}\|n_q\|_2^2+\lambda_q^{2s+4}\|c_q\|_2^2+\lambda_q^{2s+4}\|u_q\|_2^2)\\ &+C f(t)\left(1+\log {\|u\|_{\dot H^{s+1}}}\right)\sum_{q\geq -1}(\lambda_q^{2s}\|n_q\|_2^2+\lambda_q^{2s+2}\|c_q\|_2^2+\lambda_q^{2s+2}\|u_q\|_2^2). \end{split} \end{equation} By the hypothesis (\ref{c1u1q}) and Gr\"onwall's inequality, we can conclude that \begin{align*} n\in L^\infty(0,T; H^{s})\cap L^2(0,T; H^{s+1}),\\ c\in L^\infty(0,T; H^{s+1})\cap L^2(0,T; H^{s+2}),\\ u\in L^\infty(0,T; H^{s+1})\cap L^2(0,T; H^{s+2}). \end{align*} We consider the particular case $s=-\varepsilon$ for small enough $\varepsilon>0$, and show that the solution is regular via a bootstrapping argument.
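Before bootstrapping, let us sketch how the logarithmic factor in the Gr\"onwall step is absorbed, assuming (as the application of Gr\"onwall's inequality requires) that $f\in L^1(0,T)$. Denote by $E(t)$ the sum $\sum_{q\geq-1}(\lambda_q^{2s}\|n_q\|_2^2+\lambda_q^{2s+2}\|c_q\|_2^2+\lambda_q^{2s+2}\|u_q\|_2^2)$ and observe that $\log\|u\|_{\dot H^{s+1}}\lesssim 1+\log(1+E)$. Dropping the dissipative term, the energy inequality yields
\begin{equation}\notag
\frac{\mathrm{d}}{\mathrm{d}t}\bigl(1+\log(1+E)\bigr)=\frac{E'}{1+E}\leq Cf(t)\bigl(1+\log(1+E)\bigr),
\end{equation}
and Gr\"onwall's inequality applied to $1+\log(1+E)$ gives
\begin{equation}\notag
1+\log\bigl(1+E(t)\bigr)\leq\bigl(1+\log(1+E(0))\bigr)\exp\Bigl(C\int_0^t f(\tau)\,\mathrm{d}\tau\Bigr),
\end{equation}
so $E$ remains finite on $[0,T]$.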
In fact, we have \[n\in L^\infty(0,T; H^{-\varepsilon}), \ \ u\in L^\infty(0,T; H^{1-\varepsilon}), \ \ c\in L^\infty(0,T; H^{1-\varepsilon})\cap L^\infty(0,T; L^\infty).\] By scaling, it is known that $H^{-\frac 12}$, $H^{\frac 12}$, and $H^{\frac 32}$ are critical for $n, u$ and $c$, respectively. For small enough $\varepsilon>0$, $H^{-\varepsilon}$ is subcritical for $n$; so is $H^{1-\varepsilon}$ for $u$. Thus, it suffices to bootstrap the equation of $c$ to obtain higher regularity for $c$. We recall \[c_t-\Delta c=-u\cdot\nabla c-nc.\] Since $u\in L^\infty(0,T; H^{1-\varepsilon})$ and $\nabla c\in L^2(0,T; H^{1-\varepsilon})$, by the Sobolev embedding $H^{1-\varepsilon} \hookrightarrow L^{\frac{6}{1+2\varepsilon}}$ (note that $\frac12-\frac{1-\varepsilon}{3}=\frac{1+2\varepsilon}{6}$) and H\"older's inequality with $\frac{1+2\varepsilon}{6}+\frac{1+2\varepsilon}{6}=\frac{1+2\varepsilon}{3}$, we have $u\cdot\nabla c\in L^2(0,T; L^{\frac{3}{1+2\varepsilon}})$. Similarly, the facts $c\in L^\infty(0,T; H^{1-\varepsilon})$ and $n\in L^2(0,T; H^{1-\varepsilon})$ imply $nc\in L^2(0,T; L^{\frac{3}{1+2\varepsilon}})$. Then the standard maximal regularity theory of the heat equation yields \[c\in H^1(0,T; L^{\frac{3}{1+2\varepsilon}})\cap L^2(0,T; W^{2, \frac{3}{1+2\varepsilon}}).\] We need to bootstrap one more time. Now we have $\nabla c\in L^2(0,T; W^{1, \frac{3}{1+2\varepsilon}})$ which, along with $u\in L^\infty(0,T; H^{1-\varepsilon})$, implies $u\cdot\nabla c\in L^2(0,T; L^{\frac{6}{1+6\varepsilon}})$. On the other hand, we have $nc\in L^2(0,T; L^{\frac{6}{1+2\varepsilon}})$ from the estimate $n\in L^2(0,T; H^{1-\varepsilon})$ and the maximum principle $c\in L^\infty(0,T;L^\infty)$. Again, the maximal regularity theory of the heat equation produces \[c\in H^1(0,T; L^{\frac{6}{1+6\varepsilon}})\cap L^2(0,T; W^{2, \frac{6}{1+6\varepsilon}}).\] As a consequence of the mixed derivative theorem \cite{PS}, we have \[c\in W^{1-\theta, 2}(0,T; W^{2\theta, \frac{6}{1+6\varepsilon}})\] for any $\theta\in[0,1]$.
In fact, if we take $\theta\in (\frac{1+6\varepsilon}4,\frac12)$, the Sobolev embedding theorem shows that \[c\in W^{1-\theta, 2}(0,T; W^{2\theta, \frac{6}{1+6\varepsilon}})\hookrightarrow L^\infty(0,T; H^{\frac32+\varepsilon_0})\] for a small enough constant $\varepsilon_0>0$. Notice that $H^{\frac 32}$ is critical for $c$. Thus we can stop the bootstrapping for the $c$ equation. Regarding the density function $n$, although the obtained estimates are already in a subcritical space, we would like to further improve the estimates to reach spaces with even higher regularity (i.e.\ Sobolev spaces with positive smoothness index). Indeed, we look at the $n$ equation again, \[n_t-\Delta n=-u\cdot \nabla n-\nabla\cdot (n\nabla c).\] Since $\nabla c\in L^\infty(0,T; H^{\frac12+\varepsilon_0})$ and $n\in L^2(0,T; H^{1-\varepsilon})$, the Sobolev embedding theorem yields $n\nabla c\in L^2(0,T; L^2)$ and hence $\nabla \cdot (n\nabla c) \in L^2(0,T; H^{-1})$. Meanwhile, $u\in L^\infty(0,T; H^{1-\varepsilon})$ and $n\in L^2(0,T; H^{1-\varepsilon})$ together imply $un\in L^2(0,T; L^{\frac3{1+2\varepsilon}})$, which is embedded in $L^2(0,T; L^2)$. Thus $u\cdot \nabla n=\nabla\cdot (un)\in L^2(0,T; H^{-1})$. Applying Lemma \ref{le-heat} with $\alpha=-1$ to the $n$ equation, we obtain \[n\in H^1(0,T; H^{-1})\cap L^2(0,T; H^1).\] Summarizing the analysis above gives us \begin{equation}\notag \begin{split} &n\in L^\infty(0,T; H^{-\varepsilon})\cap L^2(0,T; H^{1})\cap H^1(0,T; H^{-1}),\\ &u\in L^\infty(0,T; H^{1-\varepsilon})\cap L^2(0,T; H^{2-\varepsilon}),\\ &c\in L^\infty(0,T; H^{\frac32+\varepsilon_0})\cap L^2(0,T; W^{2, \frac{6}{1+6\varepsilon}}). \end{split} \end{equation} Since each of the three functions $n,u$ and $c$ lies in a space of higher regularity than its critical Sobolev space, further bootstrapping procedures for parabolic equations and a standard argument of extending regularity can be applied to infer that the solution $(u,n,c)$ is regular up to time $T$. {\textbf{Acknowledgement.
}} The authors would like to thank Professor Gieri Simonett for pointing out a mistake in an early version of the manuscript. \end{document}
\begin{document} \title{A functional analytic approach to perturbations of \\the Lorentz gas} \author{Mark F. Demers \and Hong-Kun Zhang} \address{Mark F. Demers, Department of Mathematics and Computer Science, Fairfield University, Fairfield CT 06824, USA.} \email{[email protected]} \address{Hong-Kun Zhang, Department of Mathematics and Statistics, UMass Amherst, MA 01003, USA.} \email{[email protected]} \thanks{M.\ D.\ is partially supported by NSF Grant DMS-1101572. H.-K.\ Z.\ is partially supported by NSF CAREER Grant DMS-1151762.} \date{\today} \begin{abstract} We present a functional analytic framework based on the spectrum of the transfer operator to study billiard maps associated with perturbations of the periodic Lorentz gas. We show that recently constructed Banach spaces for the billiard map of the classical Lorentz gas are flexible enough to admit a wide variety of perturbations, including: movements and deformations of scatterers; billiards subject to external forces; nonelastic reflections with kicks and slips at the boundaries of the scatterers; and random perturbations comprised of these and possibly other classes of maps. The spectra and spectral projections of the transfer operators are shown to vary continuously with such perturbations so that the spectral gap enjoyed by the classical billiard persists and important limit theorems follow. \end{abstract} \maketitle \section{Introduction} \label{intro} The Lorentz gas is known to enjoy strong ergodic properties: both the continuous time dynamics and the billiard maps are completely hyperbolic, ergodic, K-mixing and Bernoulli (see \cite{Sin70, GO, SC87, CH96} and the references therein). 
Young \cite{Y98} proved exponential decay of correlations for billiard maps corresponding to the finite horizon periodic Lorentz gas using Markov extensions; this technique was subsequently extended to other dispersing billiards \cite{C99} and used to obtain important limit theorems such as local large deviation estimates and almost-sure invariance principles \cite{melbourne nicol, melbourne nicol 2, rey}. In this setting, it is natural to ask how the statistical properties of dispersing billiard maps vary with the shape and position of the scatterers. Alternatively, one may change the billiard dynamics by introducing an external force between collisions or by considering nonelastic reflections at the boundaries. Such perturbed dynamics lead to nonequilibrium billiards whose invariant measures are singular with respect to Lebesgue measure. One of the first nonequilibrium physical models that was studied rigorously is the periodic Lorentz gas with a small constant electrical field \cite{CELSa, CELSb} and the well-known Ohm's law was proved for that case. More general external forces were handled in \cite{Ch01, Ch08, CD09} and billiards with kicks at reflections have been studied in \cite{mark, Z09}. Recently, Chernov and Dolgopyat \cite{CD} used coupling methods to study the motion of a point particle colliding with a moving scatterer. Locally perturbed periodic rearrangements of scatterers have also been the subject of recent studies \cite{DSV}. Despite such successes, the study of perturbations of billiards has thus far been handled on a case by case basis, with methods adapted and developed for each specific type of perturbation considered. In this paper, we propose a unified framework in which to study a large class of perturbations of dispersing billiards. 
This framework is based on the spectral analysis of the transfer operator associated with the billiard map and uses the recent work \cite{demers zhang} which successfully constructed Banach spaces on which the transfer operator for the classical periodic Lorentz gas has a spectral gap. We first present abstract conditions under which we have uniform control of spectral data for a given class of perturbed maps. We then prove that four broad classes of perturbations of billiards fit within this framework, namely: \begin{itemize} \item[(i)] Tables with shifted, rotated or deformed scatterers; \item[(ii)] Billiards under small external forces which bend trajectories during flight; \item[(iii)] Billiards with kicks or twists at reflections, including slips along the disk; \item[(iv)] Random perturbations comprised of maps with uniform properties (including any of the above classes, or a combination of them). \end{itemize} In particular, the results on random perturbations are a version of time-dependent billiards, in which scatterers are allowed to change positions between collisions. The fact that our main theorems, \ref{thm:uniform} and \ref{thm:close}, are proved in an abstract setting will facilitate the application of this framework to other classes of perturbations as they arise in future works. The present functional analytic approach uses the Banach spaces constructed in \cite{demers zhang} as well as the perturbative framework of Keller and Liverani \cite{keller liverani} to prove that the spectral data and spectral projectors, including invariant measures, rates of decay of correlations, variance in the central limit theorem, etc., vary H\"older continuously for the classes of perturbations mentioned above (see \cite{baladi book, liv} for expositions of this approach).
In addition, this approach yields new results for the perturbed billiard maps in terms of local limit theorems, in particular giving new information about the evolution of noninvariant measures in the context of these limit theorems. For example, applying Corollary~\ref{cor:limit theorems} to billiards under external forces and kicks, we obtain a local large deviation estimate with a rate function that is the same for all probability measures in our Banach space. This implies in particular that Lebesgue measure and the singular SRB measure for the perturbed billiard have the same large deviation rate function. The paper is organized as follows. In Section 2, we describe our abstract framework, state precisely the applications which serve as our model perturbations and formulate our main results. In Section 3, we lay out our common approach under the general conditions {\bf (H1)}-{\bf (H5)} which guarantee the required uniform Lasota-Yorke inequalities for Theorem~\ref{thm:uniform}, proved in Section~\ref{uniform}; we also formulate conditions {\bf (C1)}-{\bf (C4)} to verify that a perturbation is small in the sense of our Banach spaces for Theorem~\ref{thm:close}, proved in Section~\ref{close}. The investigations of the concrete models are provided in Sections~\ref{perts} and \ref{kick}. \section{Setting and Results} \label{results} In this section, we describe the abstract framework into which we will place our perturbations and formulate precisely the classes of concrete deterministic perturbations to which our results apply. We also formulate a class of random perturbations with maps drawn from any mixture of the deterministic perturbations described below. We postpone until Section~\ref{common} a precise description of the Banach spaces and the formal requirements on the abstract class of maps $\mathcal{F}$. \subsection{Perturbative framework} \label{framework} We recall here the perturbative framework of Keller and Liverani \cite{keller liverani}. 
Suppose there exist two Banach spaces $(\B, \| \cdot \|_\B)$ and $(\B_w, | \cdot |_w )$ with the unit ball of $\B$ compactly embedded in $\B_w$, $| \cdot |_w \leq \| \cdot \|_\B$, and a family of bounded linear operators $\{ \mathcal{L}_\varepsilon \}_{\varepsilon \ge 0}$ defined on both $\B_w$ and $\B$ such that the following holds.\footnote{The results of \cite{keller liverani} hold in a more general setting, but we only state the version we need for our purposes.} There exist constants $C, \eta >0$ and $\sigma < 1$ such that for all $\varepsilon \ge 0$ and $n \geq 0$, \begin{equation} \label{eq:pert LY} \begin{split} | \mathcal{L}_\varepsilon^n h|_w & \leq C \eta^n |h|_w \qquad \mbox{for all $h \in \B_w$}, \\ \| \mathcal{L}_\varepsilon^n h \|_\B & \le C \sigma^n \| h \|_\B + C \eta^n |h|_w \qquad \mbox{for all $h \in \B$}. \end{split} \end{equation} If $\sigma < \eta$, the operators $\mathcal{L}_\varepsilon$ are quasi-compact with essential spectral radius bounded by $\sigma$ and spectral radius at most $\eta$ (see for example \cite{baladi book}). Suppose further that \begin{equation} \label{eq:pert small} ||| \mathcal{L}_\varepsilon - \mathcal{L}_0 ||| := \sup \{ |\mathcal{L}_\varepsilon h - \mathcal{L}_0 h |_w : h \in \B, \| h \|_\B \le 1 \} \le \rho(\varepsilon), \end{equation} where $\rho(\varepsilon)$ is a non-increasing upper semicontinuous function satisfying $\lim_{\varepsilon \to 0} \rho(\varepsilon) = 0$. The main result of \cite{keller liverani} is the following. Let sp$(\mathcal{L}_0)$ denote the spectrum of $\mathcal{L}_0$. For any $\sigma_1 > \sigma$, by quasi-compactness, sp$(\mathcal{L}_0) \cap \{ z \in \mathbb{C} : |z| \geq \sigma_1 \}$ consists of finitely many eigenvalues $\varrho_1, \ldots, \varrho_k$ of finite multiplicity. Thus there exists $t_* >0$ and we may choose $\sigma_1$ such that $| \varrho_i - \varrho_j | > t_*$ for $i \ne j$ and dist$(\mbox{sp}(\mathcal{L}_0), \{ |z| = \sigma_1 \}) > t_*$. 
For $t < t_*$ and $\varepsilon \geq 0$, define the spectral projections, \[ \begin{split} \Pi_\varepsilon^{(j)} & := \frac{1}{2\pi i} \int_{|z - \varrho_j|=t} (z- \mathcal{L}_\varepsilon)^{-1} \, dz \qquad \mbox{and} \\ \Pi_\varepsilon^{(\sigma_1)} & := \frac{1}{2\pi i} \int_{|z|=\sigma_1} (z- \mathcal{L}_\varepsilon)^{-1} \, dz . \end{split} \] \begin{theorem} (\cite{keller liverani}) \label{thm:kl} Assume that \eqref{eq:pert LY} and \eqref{eq:pert small} hold. Then for each $t \le t_*$ and $s < 1 - \frac{\log \sigma_1}{\log \sigma}$, there exist $\varepsilon_1, C >0$ such that for any $0 \le \varepsilon < \varepsilon_1$, the spectral projections $\Pi_\varepsilon^{(j)}$ and $\Pi_\varepsilon^{(\sigma_1)}$ are well defined and satisfy, for each $j = 1, \ldots k$, \begin{itemize} \item[(1)] $||| \Pi_\varepsilon^{(j)} - \Pi_0^{(j)} ||| \le C \rho(\varepsilon)^s$ and $||| \Pi_\varepsilon^{(\sigma_1)} - \Pi_0^{(\sigma_1)} ||| \le C \rho(\varepsilon)^s$ ; \item[(2)] rank$(\Pi_\varepsilon^{(j)}) = \mbox{rank}(\Pi_0^{(j)})$; \item[(3)] $\| \mathcal{L}_\varepsilon^n \Pi_\varepsilon^{(\sigma_1)} \| \le C \sigma_1^n$, for all $n\ge 0$. \end{itemize} \end{theorem} We say an operator $\mathcal{L}$ has a spectral gap if $\mathcal{L}$ has a simple eigenvalue of maximum modulus and all other eigenvalues have strictly smaller modulus. The above theorem implies in particular that if $\mathcal{L}_0$ has a spectral gap, then so does $\mathcal{L}_\varepsilon$ for $\varepsilon$ sufficiently small. In addition, the related statistical properties (for instance, invariant measures, rates of decay of correlations, variance of the Central Limit Theorem) are stable and vary H\"older continuously as a function of $\rho(\varepsilon)$. This is the framework into which we will place our perturbations of the Lorentz gas. 
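It is worth recording the standard consequence of this spectral decomposition. Since the Riesz projectors commute with $\mathcal{L}_\varepsilon$ and $\sum_{j}\Pi_\varepsilon^{(j)}+\Pi_\varepsilon^{(\sigma_1)}=I$, the iterates of the transfer operator split as
\[
\mathcal{L}_\varepsilon^n=\sum_{j=1}^{k}\mathcal{L}_\varepsilon^n\Pi_\varepsilon^{(j)}+\mathcal{L}_\varepsilon^n\Pi_\varepsilon^{(\sigma_1)},
\qquad \|\mathcal{L}_\varepsilon^n\Pi_\varepsilon^{(\sigma_1)}\|\leq C\sigma_1^n .
\]
In particular, when the only peripheral eigenvalue is a simple eigenvalue $1$ (a spectral gap), $\mathcal{L}_\varepsilon^n$ converges exponentially fast to the rank-one projector $\Pi_\varepsilon^{(1)}$, which is the mechanism behind the exponential decay of correlations exploited below.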
\subsection{An abstract result for a class of maps with uniform properties} \label{abstract} We begin by fixing the phase space $M$ of a billiard map associated with a periodic Lorentz gas. That is, we place finitely many (disjoint) scatterers $\Gamma_i$, $i=1, \ldots d$, on $\mathbb{T}^2$ which have $\mathcal{C}^3$ boundaries with strictly positive curvature. The classical billiard flow on the table $\mathbb{T}^2 \setminus \cup_i \{ \mbox{interior } \Gamma_i \}$ is induced by a particle traveling at unit speed and undergoing elastic collisions at the boundaries. In what follows, we also consider particles whose motion between collisions follows slightly curved trajectories (due to external forces) as well as certain types of collisions which do not obey the usual law of reflection. In all cases, the billiard map associated with the flow is the Poincar\'e map corresponding to collisions with the scatterers. Its phase space is $M = \cup_{i=1}^d I_i \times [-\pi/2, \pi/2]$, where $\ell(I_i) = |\partial \Gamma_i|$, i.e.\ the length of $I_i$ equals the arclength of $\partial \Gamma_i$, $i=1, \ldots d$. $M$ is parametrized by the canonical coordinates $(r, \varphi)$ where $r$ represents the arclength parameter on the boundaries of the scatterers (oriented clockwise) and $\varphi$ represents the angle an outgoing (postcollisional) trajectory makes with the unit normal to the boundary at the point of collision. The phase space $M$ and coordinates so defined are fixed for all classes of perturbations we consider; however, the configuration space (the billiard table on which the particles flow) and the laws which govern the motion of the particles may vary as long as all variations give rise to the same phase space $M$, i.e.\ the number of $\Gamma_i$ and the arclengths of their boundaries do not change. See Remark~\ref{rem:arclength} for a way to relax this requirement on the arclength. 
For any $x = (r,\varphi) \in M$, we define $\tau(x)$ to be the first collision of the trajectory starting at $x$ under the billiard flow. The billiard map is defined wherever $\tau(x) < \infty$. We say that the billiard has finite horizon if there is an upper bound on the function $\tau$. Otherwise, we say the billiard has infinite horizon. Notice that the function $\tau$ depends on the placement of the scatterers in $\mathbb{T}^2$, while $M$ is independent of their placement. We assume there exists a class of maps $\mathcal{F}$ on $M$ satisfying properties {\bf (H1)}-{\bf (H5)} of Section~\ref{class of maps} with uniform constants. For each $T \in \mathcal{F}$, in Section~\ref{transfer} we define the transfer operator $\mathcal{L}_T$ associated with $T$ on an appropriate class of distributions $h$ by \[ \mathcal{L}_Th(\psi) = h(\psi \circ T), \;\; \; \mbox{for suitable test functions } \psi. \] In Section~\ref{norms}, we define Banach spaces of distributions $( \B, \| \cdot \|_\B )$ and $(\B_w, | \cdot |_w )$, preserved under the action of $\mathcal{L}_T$, $T \in \mathcal{F}$, such that the unit ball of $\B$ is compactly embedded in $\B_w$. \begin{theorem} \label{thm:uniform} Fix $M$ as above and suppose there exists a class of maps $\mathcal{F}$ satisfying {\bf (H1)}-{\bf (H5)} of Section~\ref{class of maps}. Then $\mathcal{L}_T$ is well defined as a bounded linear operator on $\B$ for each $T \in \mathcal{F}$. In addition, there exist $C>0$, $\sigma <1$ such that for any $T \in \mathcal{F}$ and $n \geq 0$, \begin{equation} \begin{split} \label{eq:uniform LY} | \mathcal{L}_T^n h|_w & \leq C \eta^n |h|_w \qquad \mbox{for all $h \in \B_w$}, \\ \| \mathcal{L}_T^n h \|_\B & \le C \sigma^n \| h \|_\B + C \eta^n |h|_w \qquad \mbox{for all $h \in \B$}, \end{split} \end{equation} where $\eta \ge 1$ is from {\bf (H5)}. 
This, plus the compactness of $\B$ in $\B_w$, implies that all the operators $\mathcal{L}_T$, $T \in \mathcal{F}$, are quasi-compact with essential spectral radius bounded by $\sigma$: i.e., outside of any disk of radius greater than $\sigma$, their spectra contain finitely many eigenvalues of finite multiplicity. Moreover, for each $T \in \mathcal{F}$, \begin{itemize} \item[(i)] the spectral radius of $\mathcal{L}_T$ is 1 and the elements of the peripheral spectrum are measures absolutely continuous with respect to $\overline{\mu} := \lim_{n \to \infty} \frac 1n \sum_{i=0}^{n-1} \mathcal{L}_T^i 1$; \item[(ii)] an ergodic, invariant probability measure $\nu$ for $T$ is in $\B$ if and only if $\nu$ is a physical measure\footnote{Recall that a physical measure for $T$ is an ergodic, invariant probability measure $\nu$ such that $\lim_{n \to \infty} \frac 1n \sum_{i=1}^{n-1} f(T^i x) = \int f \, d\nu$ for a positive Lebesgue measure set of $x \in M$.} for $T$; \item[(iii)] there exist finitely many $q_\ell \in \mathbb{N}$ such that the spectrum of $\mathcal{L}_T$ on the unit circle is $\cup_\ell \{ e^{2\pi i \frac{k}{q_\ell}} : 0 \le k < q_\ell, \, k \in \mathbb{N} \}$. The peripheral spectrum contains no Jordan blocks. \item[(iv)] Let $\mathcal{S}_{\pm n, \varepsilon}^{\bH}$ denote the $\varepsilon$-neighborhood of $\mathcal{S}_{\pm n}^\bH$, the singularity set for $T^{\pm n}$ (with homogeneity strips). Then for each $\nu$ in the peripheral spectrum and $n \in \mathbb{N}$, we have $\nu(\mathcal{S}_{\pm n, \varepsilon}^\bH) \le C_n \varepsilon^\alpha$, for some constants $C_n >0$. \item[(v)] If $(T, \overline{\mu})$ is ergodic, then 1 is a simple eigenvalue. If $(T^n, \overline{\mu})$ is ergodic for all $n \in \mathbb{N}$, then 1 is the only eigenvalue of modulus 1, $(T, \overline{\mu})$ is mixing and enjoys exponential decay of correlations for H\"older observables. \end{itemize} \end{theorem} Theorem~\ref{thm:uniform} is proved in Section~\ref{uniform}.
In Section~\ref{distance}, we define a distance $d_\mathcal{F}(\cdot , \cdot)$ between maps in $\mathcal{F}$. Our next result shows that this distance controls the size of perturbations in the spectra of the associated transfer operators. \begin{theorem} \label{thm:close} Let $\beta >0$ be from the definition of $(\B, \| \cdot \|_\B)$ in Section~\ref{norms}. There exists $C >0$ such that if $T_1, T_2 \in \mathcal{F}$ with $d_\mathcal{F}(T_1, T_2) \le \varepsilon$, then \[ ||| \mathcal{L}_{T_1} - \mathcal{L}_{T_2} ||| \le C \varepsilon^{\beta/2}, \; \; \; \mbox{where $||| \cdot |||$ is from \eqref{eq:pert small}.} \] \end{theorem} We prove Theorem~\ref{thm:close} in Section~\ref{close}. According to Theorem~\ref{thm:kl}, an immediate consequence of Theorems~\ref{thm:uniform} and \ref{thm:close} is the following. \begin{cor} \label{cor:limit theorems} If $T_0 \in \mathcal{F}$ has a spectral gap, then all $T \in X_\varepsilon(T_0) = \{ T \in \mathcal{F} : d_\mathcal{F}(T,T_0) < \varepsilon \}$ have a spectral gap for $\varepsilon$ sufficiently small. In particular, the maps in $X_\varepsilon(T_0)$ enjoy the following limit theorems (among others), which follow from the existence of a spectral gap. Fix $T \in \mathcal{F}$ with a spectral gap. Let $\gamma = \max \{p , 2\beta + \delta \}$ for some $\delta>0$, where $p$ and $\beta$ are from Sect.~\ref{norms}, and let $g \in \mathcal{C}^\gamma(M)$. Define $S_ng = \sum_{k=0}^{n-1} g \circ T^k$. \begin{itemize} \item[(a)] (Local large deviation estimate) For any (not necessarily invariant) probability measure $\nu \in \B$, \[ \lim_{\varepsilon \to 0} \lim_{n \to \infty} \frac 1n \log \nu \Big( x \in M : \frac 1n S_ng(x) \in [t-\varepsilon, t+\varepsilon] \Big) = - I(t) \] where the rate function $I(t)$ is independent of $\nu \in \B$ (but may depend on $T$), and $t$ is in a neighborhood of the mean $\overline{\mu}(g)$. \item[(b)] (Almost-sure invariance principle).
Suppose $\overline{\mu}(g)=0$ and distribute $(g \circ T^j)_{j \in \mathbb{N}}$ according to a probability measure $\nu \in \B$. Then there exist $\lambda >0$, a probability space $\Omega$ with random variables $\{ X_n \}$ satisfying $S_ng \stackrel{\mbox{dist.}}{=} X_n$, and a Brownian motion $W$ with variance $\varsigma^2\geq 0$ such that \[ X_n = W(n) + \mathcal{O}(n^{1/2-\lambda}) \qquad \mbox{as $n\to \infty$ almost-surely in $\Omega$}. \] \end{itemize} \end{cor} \noindent The proof of the corollary is the same as that of \cite[Theorem 2.6]{demers zhang} and will not be repeated here. \subsection{Applications to concrete classes of deterministic perturbations} \label{concrete} In this section we describe precisely several types of perturbations of the Lorentz gas which fall under the abstract framework we have outlined above. In light of Theorems~\ref{thm:uniform} and \ref{thm:close}, it suffices to check two things for each class of perturbations we will introduce: (1) {\bf (H1)}-{\bf (H5)} hold uniformly in each class; (2) the perturbations are small in the sense of the distance $d_\mathcal{F}( \cdot , \cdot)$. \noindent {\bf A. Movements and Deformations of Scatterers.} \\ We fix the phase space $M = \cup_{i=1}^d I_i \times [-\frac{\pi}{2}, \frac{\pi}{2}]$ associated with a billiard map corresponding to a periodic Lorentz gas with $d$ scatterers as described above. We assume that the billiard particle moves along straight lines and undergoes elastic reflections at the boundaries. For given $I_1, \ldots, I_d$, we use the notation $Q = Q(\{ \Gamma_i \}_{i=1}^d ; \{ I_i \}_{i=1}^d)$ to denote the configuration of scatterers $\Gamma_1, \ldots, \Gamma_d$ placed on the billiard table such that $|\partial \Gamma_i| = \ell(I_i)$, $i = 1, \ldots, d$. Since we have fixed $I_1, \ldots, I_d$, $M$ remains the same for all configurations $Q$ that we consider. 
For each such configuration, we define \[ \tau_{\min}(Q) = \inf \{ \tau(x) : \tau(x) \mbox{ is defined for the configuration } Q \} . \] Similarly, $\mathcal{K}_{\min}(Q)$ and $\mathcal{K}_{\max}(Q)$ denote the minimum and maximum curvatures respectively of the $\Gamma_i$ in the configuration $Q$. The constant $E_{\max}(Q)$ denotes the maximum $C^3$ norm of the $\partial \Gamma_i$ in $Q$. For each fixed $\tau_*, \mathcal{K}_*, E_* >0$, define $\mathcal{Q}_1(\tau_*, \mathcal{K}_*, E_*)$ to be the collection of all configurations $Q$ such that $\tau_{\min}(Q) \ge \tau_*$, $\mathcal{K}_* \le \mathcal{K}_{\min}(Q) \le \mathcal{K}_{\max}(Q) \le \mathcal{K}_*^{-1}$, and $E_{\max}(Q) \le E_*$. The horizon for $Q \in \mathcal{Q}_1(\tau_*, \mathcal{K}_*, E_*)$ is allowed to be finite or infinite. Let $\mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ be the corresponding set of billiard maps induced by the configurations in $\mathcal{Q}_1$. It follows from \cite{demers zhang} that for any $T \in \mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$, $\mathcal{L}_T$ has a spectral gap in $\B$. We prove the following theorems in Section~\ref{perts}. \begin{theorem} \label{thm:F1} Fix $I_1, \ldots, I_d$ and let $\tau_*, \mathcal{K}_*, E_* >0$. The family $\mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ satisfies {\bf (H1)}-{\bf (H5)} with uniform constants depending only on $\tau_*$, $\mathcal{K}_*$ and $E_*$. As a consequence of Theorem~\ref{thm:uniform}, $\mathcal{L}_T$ is quasi-compact as an operator on $\B$ for each $T \in \mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ with uniform bounds on its essential spectral radius. \end{theorem} We fix an initial configuration of scatterers $Q_0 \in \mathcal{Q}_1(\tau_*, \mathcal{K}_*, E_*)$ and consider configurations $Q$ which alter each $\partial \Gamma_i$ in $Q_0$ to a curve $\partial \tilde{\Gamma}_i$ having the same arclength as $\partial \Gamma_i$.
We consider each $\partial \Gamma_i$ as a parametrized curve $u_i:I_i \to M$ and each $\partial \tilde{\Gamma}_i$ as parametrized by $\tilde{u}_i$. Define $\Delta(Q, Q_0) = \sum_{i=1}^d |u_i - \tilde{u}_i|_{C^2(I_i,M)}$. \begin{theorem} \label{thm:deform} Choose $\gamma \le \min \{ \tau_*/2, \mathcal{K}_*/2 \}$ and let $\mathcal{F}_A(Q_0, E_*; \gamma)$ be the set of all billiard maps corresponding to configurations $Q$ such that $\Delta(Q,Q_0) \le \gamma$ and $E_{\max}(Q) \le E_*$. Then $\mathcal{F}_A(Q_0, E_*; \gamma) \subset \mathcal{F}_1(\tau_*/2, \mathcal{K}_*/2, E_*)$ and $d_\mathcal{F}(T_1, T_2) \le C|\gamma|^{2/15}$ for any $T_1, T_2 \in \mathcal{F}_A(Q_0, E_*; \gamma)$. If all $T_i \in \mathcal{F}_A(Q_0, E_*; \gamma)$ have uniformly bounded finite horizon, then $d_\mathcal{F}(T_1, T_2) \le C|\gamma|^{1/3}$. As a consequence, the eigenvalues outside a disk of radius $\sigma <1$ and the corresponding spectral projectors of $\mathcal{L}_T$ vary H\"older continuously for all $T \in \mathcal{F}_A(Q_0, E_*; \gamma)$ and all $\gamma$ sufficiently small. \end{theorem} \begin{remark} \label{rem:arclength} (a) A remarkable aspect of this result is that it allows us to move configurations from finite to infinite horizon without interrupting H\"older continuity of the statistical properties such as the rate of decay of correlations and the variance in the CLT, among others. \noindent (b) The requirement that all deformations of the initial configuration $Q_0$ maintain the same arclength can be relaxed. The purpose of this requirement is to define the corresponding transfer operators on fixed spaces $\B$ and $\B_w$. If a scatterer $\Gamma_i$ is deformed into $\Gamma_i'$ with a slight change in arclength, we can reparametrize $\Gamma_i'$ (no longer according to arclength) using the same interval $I_i$ as for $\Gamma_i$. 
This will change the derivatives of maps in the class $\mathcal{F}_A(Q_0, E_*; \gamma)$ slightly, but since properties {\bf (H1)}-{\bf (H5)} have some leeway built into the uniform constants, for small enough reparametrizations the same properties will hold with slightly weakened constants. \end{remark} \noindent {\bf B. Billiards Under Small External Forces with Kicks and Slips.} \\ As in part A, we fix $\tau_*, \mathcal{K}_*$ and $E_*$ and choose a fixed $Q_0 \in \mathcal{Q}_1(\tau_*, \mathcal{K}_*, E_*)$. In this section, we consider the dynamics of the billiard map on the table $Q_0$, but subject to external forces both during flight and at collisions. Let $\mathbf{q}=(x,y)$ be the position of a particle in a billiard table $Q_0$ and $\mathbf{p}$ be the velocity vector. For a $C^2$ stationary external force, $\mathbf{F}: \mathbb{T}^2 \times \mathbb{R}^2 \to \mathbb R^2$, the perturbed billiard flow $\Phi^t$ satisfies the following differential equation between collisions: \begin{equation}\label{flowf} \frac{d \mathbf{q}}{dt} =\mathbf{p}(t) , \qquad \frac{d \mathbf{p}}{dt} = \mathbf{F}(\mathbf{q}, \mathbf{p}) . \end{equation} At collision, the trajectory experiences possibly nonelastic reflections with slipping along the boundary: \begin{equation} \label{reflectiong} (\mathbf{q}^+(t_i), \mathbf{p}^+(t_i)) = (\mathbf{q}^-(t_i), \mathcal{R} \mathbf{p}^-(t_i)) +\mathbf G(\mathbf{q}^-(t_i), \mathbf{p}^-(t_i)) \end{equation} where $\mathcal{R} \mathbf{p}^-(t_i)= \mathbf{p}^-(t_i)-2(n(\mathbf{q}^-)\cdot \mathbf{p}^{-})\, n(\mathbf{q}^-)$ is the usual reflection operator, $n(\mathbf{q})$ is the unit normal vector to the billiard wall $\partial Q_0$ at $\mathbf{q}$ pointing inside the table $Q_0$, and $\mathbf{q}^-(t_i), \mathbf{p}^-(t_i)$, $\mathbf{q}^+(t_i)$ and $\mathbf{p}^+(t_i)$ refer to the incoming and outgoing position and velocity vectors, respectively. $\mathbf G$ is an external force acting on the incoming trajectories. 
Note that we allow $\bf G$ to change both the position and velocity of the particle at the moment of collision. The change in velocity can be thought of as a kick or twist, while a change in position can model a slip along the boundary at collision. In \cite{Ch01, Ch08}, Chernov considered billiards under small, stationary external forces $\mathbf{F}$ with $\mathbf G=0$. In \cite{Z09} a twist force was considered assuming $\mathbf{F}=0$ and $\mathbf{G}$ depending on and affecting only the velocity, not the position. Here we consider a combination of these two cases for systems under more general forces $\mathbf{F}$ and $\mathbf{G}$. We make four assumptions, combining those in \cite{Ch01, Z09}. \\ \noindent\textbf{(A1)} (\textbf{Invariant space}) \emph{Assume the dynamics preserve a smooth function $\mathcal{E}(\mathbf{q},\mathbf{p})$. Its level surface $\Omega_c:=\mathcal{E}^{-1}(c)$, for any $c>0$, is a compact 3-d manifold such that $\|\mathbf{p}\|>0$ on $\Omega_c$ and for each $\mathbf{q}\in Q$ and $\mathbf{p}\in S^1$ the ray $\{(\mathbf{q}, t\mathbf{p}), t>0\}$ intersects the manifold $\Omega_c$ in exactly one point.} \\ Assumption ({\bf{A1}}) specifies an additional integral of motion, so that we only consider restricted systems on a compact phase space. In particular, ({\bf{A1}}) implies that the speed $p=\|\mathbf{p}\|$ of the billiard along any typical trajectory at time $t$ satisfies $$0<p_{\min}\leq p(t) \leq p_{\max}<\infty$$ for some constants $p_{\min}\leq p_{\max}$. Under this assumption the particle will not become overheated, and its speed will remain bounded. For any phase point $\mathbf{x}=(\mathbf{q}, \mathbf{p}) \in \Omega$ for the flow, let $\tau(\mathbf{x})$ be the length of the trajectory between $\mathbf{x}$ and its next non-tangential collision. 
\\ \noindent(\textbf{A2}) (\textbf{Finite horizon}) \emph{There exist $\tau_{\max}>\tau_{\min}>0$ such that free paths between successive reflections are uniformly bounded, $\tau_*/2 \le \tau_{\min} \le \tau( \mathbf{x} ) \le \tau_{\max} \le \tau_*^{-1}$, $\forall \mathbf{x} \in \Omega$. Since $Q_0 \in \mathcal{Q}_1(\tau_*, \mathcal{K}_*, E_*)$, the curvature $\mathcal{K}(r)$ of the boundary is also uniformly bounded for all $r \in \partial Q_0$.} \\ \noindent(\textbf{A3}) (\textbf{Smallness of the perturbation}). \emph{We assume there exists $\eps_1>0$ small enough, such that $$\|\mathbf{F}\|_{C^1}<\eps_1, \|\mathbf G\|_{C^1}<\eps_1 .$$ } Let $\mathbf{v}=(\cos\theta, \sin\theta)$ denote the unit velocity vector with $\theta\in [0,2\pi]$, and $\mathcal{M}$ be a level surface $\Omega_c$ with coordinates $(\mathbf{q},\theta)$, for some fixed $c>0$. Denote $T_{\mathbf{F},\mathbf{G}}: M\to M$ as the billiard map associated to the flow on $\mathcal{M}$, where $M$ is the collision space containing all post-collision vectors based at the boundary of the billiard table $Q_0$.\\ \noindent(\textbf{A4}) \emph{We assume both forces $\mathbf{F}$ and $\mathbf{G}$ are stationary and that $\mathbf{G}$ preserves tangential collisions. In addition, we assume that the singularity set of $T^{-1}_{\mathbf{F}, \mathbf{G}}$ is the same as that of $T^{-1}_{\mathbf{F}, \mathbf{0}}$.\footnote{The assumption on the singularity set of $T^{-1}_{\mathbf{F}, \mathbf{G}}$ is not essential to our approach, but is made to simplify the proofs in Section~\ref{kick}, since the paper is already quite long and we include a number of distinct applications.}} \\ The case $\mathbf{F} = \mathbf{G}=0$ corresponds to the classical billiard dynamics. It preserves the kinetic energy $\mathcal{E} =\frac{1}{2} \|\mathbf{p}\|^2$. 
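For orientation, we recall a standard example of a force satisfying ({\bf A1}) (included purely for illustration; our assumptions do not require $\mathbf{F}$ to have this form): a potential force $\mathbf{F}(\mathbf{q}, \mathbf{p}) = -\nabla U(\mathbf{q})$ with $U \in C^2(\mathbb{T}^2)$ and $\mathbf{G} = 0$. Setting $\mathcal{E}(\mathbf{q}, \mathbf{p}) = \frac{1}{2}\|\mathbf{p}\|^2 + U(\mathbf{q})$, we compute along solutions of \eqref{flowf}, \[ \frac{d}{dt}\, \mathcal{E}(\mathbf{q}(t), \mathbf{p}(t)) = \mathbf{p} \cdot \frac{d\mathbf{p}}{dt} + \nabla U(\mathbf{q}) \cdot \frac{d\mathbf{q}}{dt} = -\mathbf{p} \cdot \nabla U(\mathbf{q}) + \nabla U(\mathbf{q}) \cdot \mathbf{p} = 0 , \] while the elastic reflection \eqref{reflectiong} preserves both $\mathbf{q}$ and $\|\mathbf{p}\|$, so $\mathcal{E}$ is indeed an integral of motion. On the level surface $\Omega_c$ with $c > \max U$, the speed is determined by the position, $p = \sqrt{2(c - U(\mathbf{q}))}$, so that $p_{\min} = \sqrt{2(c - \max U)}$ and $p_{\max} = \sqrt{2(c - \min U)}$; moreover, since $\mathcal{E}(\mathbf{q}, t\mathbf{p})$ is strictly increasing in $t > 0$, each ray $\{ (\mathbf{q}, t\mathbf{p}) : t > 0 \}$ meets $\Omega_c$ in exactly one point, as required by ({\bf A1}).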
We denote by $\mathcal{F}_B(Q_0, \tau_*, \eps_1)$ the class of all perturbed billiard maps defined by the dynamics \eqref{flowf} and \eqref{reflectiong} under forces $\mathbf{F}$ and $\mathbf{G}$, satisfying assumptions {\bf (A1)}-{\bf (A4)}. \begin{theorem} \label{thm:C1} For any $T\in \mathcal{F}_B(Q_0, \tau_*, \eps_1)$, the perturbed system $T$ satisfies {\bf (H1)}-{\bf (H5)} with uniform constants depending only on $\eps_1$, $\tau_*$, $\mathcal{K}_*$ and $E_*$. \end{theorem} \begin{theorem} \label{thm:C2} Within the class $\mathcal{F}_B(Q_0, \tau_*, \eps_1)$, the change of either the force $\mathbf{F}$ or $\mathbf{G}$ by a small amount $\delta$ yields a perturbation of size $\mathcal{O}(|\delta|^{1/2})$ in the distance $d_\mathcal{F}(\cdot, \cdot)$. As a consequence, the spectral gap enjoyed by the classical billiard $T_{\mathbf{0}, \mathbf{0}}$ persists for all $T_{\mathbf{F}, \mathbf{G}} \in \mathcal{F}_B(Q_0, \tau_*, \eps_1)$ for $\varepsilon_1$ sufficiently small so that we may apply the limit theorems of Corollary~\ref{cor:limit theorems} to any such $T_{\mathbf{F}, \mathbf{G}}$. \end{theorem} The limit theorems implied by Theorem~\ref{thm:C2} are new even for the simplified maps $T_{\mathbf{F},\mathbf{0}}$ and $T_{\mathbf{0}, \mathbf{G}}$. We provide the proofs of Theorems~\ref{thm:C1} and \ref{thm:C2} in Section~\ref{kick}. \subsection{Smooth random perturbations} \label{random} We follow the expositions in \cite{demers liverani, demers zhang}. Suppose $\mathcal{F}$ is a class of maps satisfying {\bf (H1)}-{\bf (H5)} and let $d_\mathcal{F}(\cdot, \cdot)$ be the distance in $\mathcal{F}$ defined in Section~\ref{distance}. For $T_0 \in \mathcal{F}$, $\varepsilon > 0$, define \[ X_\varepsilon(T_0) = \{ T \in \mathcal{F} : d_\mathcal{F}(T, T_0) < \varepsilon \}, \] to be the $\varepsilon$-neighborhood of $T_0$ in $\mathcal{F}$. 
Let $(\Omega, \nu)$ be a probability space and let $g: \Omega \times M \to \mathbb{R}^+$ be a measurable function satisfying: There exist constants $a, A >0$ such that \begin{enumerate} \item[(i)] $g(\omega, \cdot) \in \mathcal{C}^1(M, \mathbb{R}^+)$ and $|g(\omega, \cdot)|_{\mathcal{C}^1(M)} \le A$ for each $\omega \in \Omega$; \item[(ii)] $\int_\Omega g(\omega, x) d\nu(\omega) = 1$ for each $x \in M$; \item[(iii)] $g(\omega, x) \geq a $ for all $\omega \in \Omega$, $x \in M$. \end{enumerate} We define a random walk on $M$ by assigning to each $\omega \in \Omega$ a map $T_\omega \in X_\varepsilon(T_0)$. Starting at $x \in M$, we choose $T_\omega \in X_\varepsilon(T_0)$ according to the distribution $g(\omega, x)d\nu$. We apply $T_\omega$ to $x$ and repeat this process starting at $T_\omega x$. We say the process defined in this way has size $\Delta(\nu,g) \le \varepsilon$. Notice that if $\nu$ is the Dirac measure centered at $\omega_0$, then this process corresponds to the deterministic perturbation $T_{\omega_0}$ of $T_0$. If $g \equiv 1$, then the choice of $T_\omega$ is independent of the position $x$, while in general this formulation allows the choice of the next map to depend on the previous step taken. The transfer operator $\mathcal{L}_{(\nu,g)}$ associated with the random process is defined by \[ \mathcal{L}_{(\nu,g)}h(x) = \int_\Omega \mathcal{L}_{T_\omega} h(x) \, g(\omega, T_\omega^{-1}x) \, d\nu(\omega) \] for all $h \in L^1(M,m)$, where $m$ is Lebesgue measure on $M$. \begin{theorem} \label{thm:random} The transfer operator $\mathcal{L}_{(\nu,g)}$ satisfies the uniform Lasota-Yorke inequalities given by Theorem~\ref{thm:uniform}. Let $\varepsilon_0$ be given by \eqref{eq:s-unstable} and let $\varepsilon \le \varepsilon_0$. If $\Delta(\nu,g) \le \varepsilon$, then there exists a constant $C>0$ depending only on {\bf (H1)}-{\bf (H5)} such that $||| \mathcal{L}_{(\nu,g)} - \mathcal{L}_{T_0} ||| \le C A \varepsilon^{\beta/2}$. 
It follows that if $\mathcal{L}_{T_0}$ has a spectral gap, then for $\varepsilon$ sufficiently small all the operators $\mathcal{L}_{(\nu,g)}$ enjoy a spectral gap as well, and the limit theorems of Corollary~\ref{cor:limit theorems} apply to $\mathcal{L}_{(\nu,g)}$. \end{theorem} \subsection{Large perturbations: Large translations, rotations and deformations of scatterers} \label{large pert} If we fix $\tau_*, \mathcal{K}_* >0$ and $E_* < \infty$, then Theorems~\ref{thm:uniform} and \ref{thm:F1} imply that the transfer operator $\mathcal{L}_T$ corresponding to any $T \in \mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ is quasi-compact with essential spectral radius bounded by $\sigma<1$. In fact, \cite[Theorem 2.5]{demers zhang} implies that $\mathcal{L}_T$ has a spectral gap. Now choose a compact interval $J \subset \mathbb{R}$ and parametrize a continuous path in $\mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ according to the distance $d_\mathcal{F}(\cdot, \cdot)$. To each point $s \in J$ is assigned a map $T_s \in \mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ and a corresponding transfer operator $\mathcal{L}_s$. Fix $\sigma_1 > \sigma$. Due to Theorem~\ref{thm:close}, there exists $\varepsilon_s >0$ such that the spectra and spectral projectors of $\mathcal{L}_{s'}$ outside the disk of radius $\sigma_1$ vary H\"older continuously for $s' \in (s-\varepsilon_s, s+\varepsilon_s) =: B(s, \varepsilon_s)$. The balls $B(s, \varepsilon_s)$, $s \in J$, form an open cover of $J$ and since $J$ is compact, there is a finite subcover $\{ B(s_i, \varepsilon_{s_i}) \}_{i=1}^n$. Because these intervals overlap, as we move along the entire path from one end of $J$ to the other, the spectra and spectral projectors of $\mathcal{L}_s$ vary H\"older continuously in $s$. We have proved the following. 
\begin{theorem} \label{thm:large pert} Let $J \subset \mathbb{R}$ be a compact interval and let $\{ T_s \}_{s \in J} \subset \mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ be a continuously parametrized path according to the distance $d_\mathcal{F}(\cdot , \cdot)$. Then the spectra and spectral projectors of the associated transfer operators $\mathcal{L}_s$ vary H\"older continuously in the distance $d_\mathcal{F}(\cdot, \cdot)$. As a consequence, the related dynamical properties of $T_s$, such as the rate of decay of correlations and variance in the Central Limit Theorem, vary H\"older continuously even across large movements and deformations of scatterers as long as the resulting maps remain in $\mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$. Indeed, since $J$ is compact, the continuity of the spectral data implies that the spectral gap is uniform along such paths even when the resulting configurations are no longer close to the original. This regularity holds as we move scatterers in such a way that the table changes from finite to infinite horizon. \end{theorem} \begin{remark} One could just as well apply the above large movements of scatterers to billiards under external forces in the uniform families $\mathcal{F}_B(Q_s, \tau_*, \varepsilon_1)$ and allow the configurations $Q_s$, $s \in J$, to change over a continuously parametrized path in $\mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ as long as the horizon along the path remains bounded uniformly above by $\tau_*^{-1}$. Theorem~\ref{thm:large pert} applies to such families of maps as well since they all possess spectral gaps by Theorem~\ref{thm:C2}. \end{remark} \section{Common Approach} \label{common} In this section, we describe the common approach we will take for each class of perturbations that we consider. We begin by formulating general conditions {\bf (H1)}-{\bf (H5)} under which the perturbations of a billiard map will satisfy the Lasota-Yorke inequalities \eqref{eq:pert LY} with uniform constants. 
We also introduce general conditions {\bf (C1)}-{\bf (C4)} to verify that a perturbation is small in the sense of \eqref{eq:pert small}. Theorems~\ref{thm:uniform} and \ref{thm:close} show that these conditions are sufficient to establish the framework of \cite{keller liverani}. Once this is accomplished, we only need to check that these conditions are satisfied for each class of perturbations described above. \subsection{A class of maps with uniform properties} \label{class of maps} We fix the phase space $M = \cup_i I_i \times [ - \frac{\pi}{2}, \frac{\pi}{2} ]$ of a billiard map associated with a periodic Lorentz gas as in Section~\ref{abstract}. We define the set $\mathcal{S}_0 = \{ \varphi = \pm \frac{\pi}{2} \}$ and, for a fixed $k_0 \in \mathbb{N}$ and $k \geq k_0$, we define the homogeneity strips, \begin{equation}\label{homogeneity} \Ho_k = \{ (r,\varphi) : \pi/2 - k^{-2} < \varphi < \pi/2 - (k+1)^{-2} \}. \end{equation} The strips $\Ho_{-k}$ are defined similarly near $\varphi = -\pi/2$. We also define $\Ho_0 = \{ (r, \varphi) : -\pi/2 + k_0^{-2} < \varphi < \pi/2 - k_0^{-2} \}$. The set $\mathcal{S}_{0,H} = \mathcal{S}_0 \cup (\cup_{|k| \ge k_0} \partial \Ho_{\pm k} )$ is therefore fixed and will give rise to the singularity sets for the maps that we define below, i.e. for any map $T$ that we consider, we define $\mathcal{S}_{\pm n}^T = \cup_{i = 0}^n T^{\mp i} \mathcal{S}_{0,H}$ to be the singularity sets for $T^{\pm n}$, $n \ge 0$. Suppose there exists a class of invertible maps $\mathcal{F}$ such that for each $T \in \mathcal{F}$, $T : M \setminus \mathcal{S}_1^T \to M \setminus \mathcal{S}_{-1}^T$ is a $C^2$ diffeomorphism on each connected component of $M \setminus \mathcal{S}_1^T$. We assume that elements of $\mathcal{F}$ enjoy the following uniform properties. 
\noindent {\bf(H1)} {\em Hyperbolicity and singularities.} There exist continuous families of stable and unstable cones $C^s(x)$ and $C^u(x)$, defined on all of $M$, which are strictly invariant for the class $\mathcal{F}$, i.e., $DT(x) C^u(x) \subset C^u(Tx)$ and $DT^{-1}(x) C^s(x) \subset C^s(T^{-1}x)$ for all $T \in \mathcal{F}$ wherever $DT$ and $DT^{-1}$ are defined. \noindent We require that the cones $C^s(x)$ and $C^u(x)$ are uniformly transverse on $M$ and that $\mathcal{S}_{-n}^T$ is uniformly transverse to $C^s(x)$ for each $n \in \mathbb{N}$ and all $T \in \mathcal{F}$. We assume in addition that $C^s(x)$ is uniformly transverse to the horizontal and vertical directions on all of $M$.\footnote{This is not a restrictive assumption for perturbations of the Lorentz gas since the standard cones $\hat C^s$ and $\hat C^u$ for the billiard map satisfy this property (see for example \cite[Section 4.5]{chernov book}); the common cones $C^s(x)$ and $C^u(x)$ shared by all maps in the class $\mathcal{F}$ must therefore lie inside $\hat C^s(x)$ and $\hat C^u(x)$ and therefore satisfy this property. In any case, a weaker formulation of this assumption is necessary: we use in the compactness argument that the lengths of stable curves in the homogeneity strips $\Ho_k, k \ge k_0$, are proportional to the width of the strips. This is only true if stable curves are transverse to the horizontal direction in such strips.} \noindent Moreover, there exist constants $C_e>0$ and $\Lambda >1$ such that for all $T \in \mathcal{F}$, \begin{equation} \label{eq:uniform hyp} \| DT^n(x) v \| \ge C_e^{-1} \Lambda^n \| v\|, \forall v \in C^u(x), \; \; \; \mbox{and} \; \; \; \| DT^{-n}(x) v \| \ge C_e^{-1} \Lambda^n \| v\|, \forall v \in C^s(x), \end{equation} for all $n \ge 0$, where $\| \cdot \|$ is the Euclidean norm on the tangent space $\mathcal{T}_xM$. 
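For reference, we note that for the unperturbed billiard map corresponding to a configuration in $\mathcal{Q}_1(\tau_*, \mathcal{K}_*, E_*)$, the standard expansion estimates for dispersing billiards (see, e.g., \cite{chernov book}) permit the choice \[ \Lambda = 1 + 2\mathcal{K}_{\min}\tau_{\min} \ge 1 + 2\mathcal{K}_* \tau_* \] in the adapted metric of {\bf (H3)} below; for the class $\mathcal{F}$, however, we simply assume that some uniform constants $C_e$ and $\Lambda$ exist.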
\noindent For any stable curve $W \in \widehat \mathcal{W}^s$ (see {\bf (H2)} below), the set $W \cap T\mathcal{S}_0$ (not counting intersections with the boundaries of the homogeneity strips) is finite or countable and has at most $K$ accumulation points on $W$, where $K\geq 0$ is a constant uniform for $T \in \mathcal{F}$. Let $x_{\infty}$ be one of them and let $\{x_n\}$ denote the monotonic sequence of points $W \cap T\mathcal{S}_0$ converging to $x_{\infty}$. We denote the part of $W$ between $x_n$ and $x_{n+1}$ by $W_n$. We assume there exists $C_a >0$ such that the expansion factor on $W_n$ satisfies \begin{equation} \label{eq:expansion} C_a n [\cos \varphi(T^{-1}x)]^{-1} \| v \| \leq \|DT^{-1}(x) v\| \leq C_a^{-1} n [\cos \varphi(T^{-1}x)]^{-1} \| v \| ,\,\,\,\,\,\,\forall x\in W_n, \forall v\in C^s(x), \end{equation} where $\varphi(y)$ denotes the angle at the point $y = (r, \varphi) \in M$. Let exp$_x$ denote the exponential map from $\mathcal{T}_xM$ to $M$. We require the following bound on the second derivative, \begin{equation} \label{eq:2 deriv} C_a n^2 [\cos \varphi(T^{-1}x)]^{-3} \leq \|D^2T^{-1}(x) v\| \leq C_a^{-1} n^2 [\cos \varphi(T^{-1}x)]^{-3},\,\,\,\,\,\,\forall x\in W_n, \end{equation} for all $v \in \mathcal{T}_xM$ such that $T^{-1}(\mbox{exp}_x(v))$ and $T^{-1}x$ lie in the same homogeneity strip. \noindent We assume there exist constants $c_s, \upsilon_0 >0$ such that if $x \in W_n$ and $T^{-1}x \in \Ho_k$, then \begin{equation} \label{eq:3 deriv} k \ge c_s n^{\upsilon_0} .\end{equation} \noindent If $K=0$ (i.e. the billiard has finite horizon) then the indexing scheme above based on $n$ is finite and \eqref{eq:expansion}, \eqref{eq:2 deriv} and \eqref{eq:3 deriv} hold with $n=1$. If $W \cap T\mathcal{S}_0$ has no accumulation points, we assume \eqref{eq:expansion} and \eqref{eq:2 deriv} hold with $n=1$. 
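To motivate the scaling in \eqref{eq:expansion}, we recall the following heuristic from the unperturbed infinite horizon billiard (it is not used in the proofs): a stable vector $v \in C^s(x)$ expands under $DT^{-1}$ by a factor comparable to $\tau(T^{-1}x) [\cos \varphi(T^{-1}x)]^{-1}$, while for $x \in W_n$ the free path $\tau(T^{-1}x)$ grows linearly in $n$, since the points $x_n$ accumulate at trajectories which experience arbitrarily long free flights along a corridor. Combining the two factors yields $\|DT^{-1}(x) v\| \approx n [\cos \varphi(T^{-1}x)]^{-1} \|v\|$, which is precisely the form of \eqref{eq:expansion}.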
\noindent {\bf(H2)} {\em Families of stable and unstable curves.} We call $W$ a {\em stable curve} for a map $T \in \mathcal{F}$ if the tangent line to $W$, $\mathcal{T}_xW$, lies in $C^s(x)$ for all $x \in W$. We call $W$ {\em homogeneous} if $W$ is contained in one homogeneity strip $\Ho_k$. Unstable curves are defined similarly. \noindent We assume that there exists a family of smooth stable curves, $\widehat\mathcal{W}^s$, such that each $W \in \widehat\mathcal{W}^s$ is a $\mathcal{C}^2$ stable curve with curvature bounded above by a uniform constant $B >0$. The family $\widehat\mathcal{W}^s$ is required to be invariant under $\mathcal{F}$ in the following sense: For any $W \in \widehat\mathcal{W}^s$ and $T \in \mathcal{F}$, the connected components of $T^{-1}W$ are again elements of $\widehat \mathcal{W}^s$. \noindent A family of unstable curves $\widehat\mathcal{W}^u$ is defined analogously, with obvious modifications: For example, we require the connected components of $TW$ to be elements of $\widehat\mathcal{W}^u$ for all $W \in \widehat\mathcal{W}^u$ and $T \in \mathcal{F}$. \noindent {\bf(H3)} {\em One-step expansion.} We formulate the one-step expansion in terms of an adapted norm $\| \cdot \|_*$, uniformly equivalent to $\| \cdot \|$, in which the constant $C_e$ in \eqref{eq:uniform hyp} can be taken to be $1$, i.e. we have expansion and contraction in one step in the adapted norm. We assume such a norm exists for maps in the class $\mathcal{F}$. \noindent Let $W \in \widehat\mathcal{W}^s$. For any $T \in \mathcal{F}$, we partition the connected components of $T^{-1}W$ into maximal pieces $V_i = V_i(T)$ such that each $V_i$ is a homogeneous stable curve in some $\Ho_k$, $k\geq k_0$, or $\Ho_0$. Let $|J_{V_i}T|_*$ denote the minimum contraction on $V_i$ under $T$ in the metric induced by the adapted norm $\| \cdot \|_*$. 
We assume that for some choice of $k_0$, \begin{equation} \label{eq:step1} \limsup_{\delta \to 0} \sup_{T \in \mathcal{F}} \sup_{|W|<\delta} \sum_i |J_{V_i}T|_* < 1, \end{equation} where $|W|$ denotes the arclength of $W$. \noindent In addition, we require that the above sum converges even when the expansion on each piece is weakened slightly in the following sense: There exists $\varsigma_0 < 1$ such that for all $\varsigma > \varsigma_0$, there exists $C_\varsigma = C_\varsigma (\varsigma, \delta)$ such that for all $T \in \mathcal{F}$ and any $W \in \widehat\mathcal{W}^s$ with $|W| < \delta$, \begin{equation} \label{eq:weakened step1} \sum_i |J_{V_i}T|^\varsigma_{\mathcal{C}^0(V_i)} < C_\varsigma , \end{equation} where $J_{V_i}T$ denotes the stable Jacobian of $T$ along the curve $V_i$ with respect to arc length. We formulate \eqref{eq:weakened step1} in terms of the usual Euclidean norm since we do not need $C_\varsigma <1$, i.e. we only need the above sum to be finite in some uniform sense. \noindent {\bf(H4)} {\em Bounded distortion.} There exists a constant $C_d>0$ with the following properties. Let $W' \in \widehat\mathcal{W}^s$ and for any $T \in \mathcal{F}$, $n \in \mathbb{N}$, let $x, y \in W$ for some connected component $W \subset T^{-n}W'$ such that $T^iW$ is a homogeneous stable curve for each $0 \le i \le n$. Then, \begin{equation} \label{eq:distortion stable} \left| \frac{J_\mu T^n(x)}{J_\mu T^n(y)} -1 \right| \; \leq \; C_d d_W(x,y)^{1/3} \; \; \mbox{and} \; \; \left| \frac{J_WT^n(x)}{J_WT^n(y)} -1 \right| \; \leq \; C_d d_W(x,y)^{1/3}, \end{equation} where $J_\mu T^n$ is the Jacobian of $T^n$ with respect to the smooth measure $d\mu = \cos \varphi dr d\varphi$. 
\noindent We assume the analogous bound along unstable leaves: If $W \in \widehat\mathcal{W}^u$ is an unstable curve such that $T^iW$ is a homogeneous unstable curve for $0\le i \le n$, then for any $x, y \in W$, \begin{equation} \label{eq:D u dist} \left| \frac{J_\mu T^n(x)}{J_\mu T^n(y)} -1 \right| \; \leq \; C_d d(T^nx,T^ny)^{1/3} . \end{equation} \noindent {\bf(H5)} {\em Control of Jacobian.} Let $\beta, q < 1$ be from the definition of the norms in Section~\ref{norms} and let $\theta_*<1$ be from \eqref{eq:one step contract}. Assume there exists a constant $\eta < \min \{ \Lambda^\beta, \Lambda^q, \theta_*^{\alpha-1} \}$ such that for any $T \in \mathcal{F}$, \[ (J_\mu T(x))^{-1} \le \eta, \; \; \; \mbox{wherever $J_\mu T(x)$ is defined.} \] \subsection{Transfer operator} \label{transfer} Recall the family of stable curves $\widehat \mathcal{W}^s$ defined by {\bf (H2)}. We define a subset $\mathcal{W}^s \subset \widehat \mathcal{W}^s$ as follows. By {\bf (H3)} we may choose $\delta_0 > 0$ for which there exists $\theta_*<1$ such that \begin{equation} \label{eq:one step contract} \sup_{T \in \mathcal{F}} \sup_{|W| \le \delta_0} \sum_i |J_{V_i}T|_* \le \theta_* . \end{equation} We shrink $\delta_0$ further if necessary so that the graph transform argument in Lemma~\ref{lem:angles}(a) holds. The set $\mathcal{W}^s$ comprises all those stable curves $W \in \widehat\mathcal{W}^s$ such that $|W| \le \delta_0$. For any $T \in \mathcal{F}$, we define scales of spaces using the set of stable curves $\mathcal{W}^s$ on which the {\em transfer operator} $\mathcal{L}_T$ associated with $T$ will act. Define $T^{-n}\mathcal{W}^s$ to be the set of homogeneous stable curves $W$ such that $T^n$ is smooth on $W$ and $T^iW \in \mathcal{W}^s$ for $0 \leq i \le n$. It follows from {\bf (H2)} that $T^{-n}\mathcal{W}^s \subset \mathcal{W}^s$. We denote (normalized) Lebesgue measure on $M$ by $m$. 
For $W \in T^{-n}\mathcal{W}^s$, a complex-valued test function $\psi: M \to \mathbb{C}$, and $0<p\le 1$, define $H^p_W(\psi)$ to be the H\"older constant of $\psi$ on $W$ with exponent $p$ measured in the Euclidean metric. Define $H^p_n(\psi) = \sup_{W \in T^{-n}\mathcal{W}^s} H^p_W(\psi)$ and let $\tilde{\mathcal{C}}^p(T^{-n}\mathcal{W}^s) = \{ \psi : M \to \mathbb{C} \mid H^p_n(\psi) < \infty \}$ denote the set of complex-valued functions which are H\"older continuous on elements of $T^{-n}\mathcal{W}^s$. The set $\tilde{\mathcal{C}}^p(T^{-n}\mathcal{W}^s)$ equipped with the norm $| \psi |_{\mathcal{C}^p(T^{-n}\mathcal{W}^s)} = |\psi|_\infty + H^p_n(\psi)$ is a Banach space. Similarly, we define $\tilde \mathcal{C}^p(\widehat \mathcal{W}^u)$, the set of functions which are H\"older continuous with exponent $p$ on unstable curves in $\widehat \mathcal{W}^u$. It follows from \eqref{eq:C1 C0} that if $\psi \in \tilde{\mathcal{C}}^p(T^{-(n-1)}\mathcal{W}^s)$, then $\psi \circ T \in \tilde{\mathcal{C}}^p(T^{-n}\mathcal{W}^s)$. Thus if $h\in(\tilde \mathcal{C}^p(T^{-n}\mathcal{W}^s))'$ is an element of the dual of $\tilde \mathcal{C}^p(T^{-n}\mathcal{W}^s)$, then $\mathcal{L}_T :(\tilde \mathcal{C}^p(T^{-n}\mathcal{W}^s))'\to (\tilde \mathcal{C}^p(T^{-(n-1)}\mathcal{W}^s))'$ acts on $h$ by \[ \mathcal{L}_T h(\psi) = h(\psi \circ T) \quad \forall \psi \in \tilde \mathcal{C}^p(T^{-(n-1)}\mathcal{W}^s). \] Recall that $d\mu = c \cos \varphi dr d\varphi$ denotes the smooth invariant measure for the unperturbed Lorentz gas. If $h \in L^1(M,\mu)$, then $h$ is canonically identified with a signed measure absolutely continuous with respect to $\mu$, which we shall also call $h$, i.e., $ h(\psi) = \int_M \psi h \, d\mu. $ With the above identification, we write $L^1(M,\mu) \subset (\tilde \mathcal{C}^p(T^{-n}\mathcal{W}^s))'$ for each $n \in \mathbb{N}$. 
Then restricted to $L^1(M,\mu)$, $\mathcal{L}_T$ acts according to the familiar expression \[ \mathcal{L}_T^n h = h \circ T^{-n} \; (J_\mu T^n(T^{-n}))^{-1} \; \; \; \mbox{for any $n \geq 0$ and $h \in L^1(M,\mu)$.} \] \begin{remark} In \cite{demers zhang}, we used Lebesgue measure as a reference measure to show that the functional analytic framework developed there did not need to assume the existence of a smooth invariant measure. Now that $\mu$ has been shown to belong to our function space $\B$, however, we find it more convenient to use it as a starting point in our study of the classes of perturbations considered here. It also simplifies our norms and estimates slightly since, for example, it eliminates the need for the $\cos W$ weight in our test functions that was used in \cite{demers zhang}. We do not assume that $\mu$ is an invariant measure for $T \in \mathcal{F}$; indeed, the SRB measures for such $T$ are in general singular with respect to Lebesgue measure. \end{remark} \subsection{Definition of the Norms} \label{norms} The norms are defined via integration on the set of stable curves $\mathcal{W}^s$. Before defining the norms, we define the notion of a distance $d_{\mathcal{W}^s}(\cdot, \cdot)$ between such curves as well as a distance $d_q(\cdot, \cdot)$ defined among functions supported on these curves. Due to the transversality condition on the stable cones $C^s(x)$ given by {\bf (H1)}, each stable curve $W$ can be viewed as the graph of a function $\varphi_W(r)$ of the arc length parameter $r$. For each $W \in \mathcal{W}^s$, let $I_W$ denote the interval on which $\varphi_W$ is defined and set $G_W(r) = (r, \varphi_W(r))$ to be its graph so that $W = \{ G_W(r) : r \in I_W \}$. We let $m_W$ denote the unnormalized arclength measure on $W$. Let $W_1, W_2 \in \mathcal{W}^s$ and identify them with the graphs $G_{W_i}$ of their functions $\varphi_{W_i}$, $i = 1,2$. 
Suppose $W_1, W_2$ lie in the same component of $M$ and let $I_{W_i}$ be the $r$-interval on which each curve is defined. Denote by $\ell(I_{W_1} \triangle I_{W_2})$ the length of the symmetric difference between $I_{W_1}$ and $I_{W_2}$. Let $\Ho_{k_i}$ be the homogeneity strip containing $W_i$. We define the distance between $W_1$ and $W_2$ to be, \[ d_{\mathcal{W}^s} (W_1,W_2) = \eta(k_1, k_2) + \ell( I_{W_1} \triangle I_{W_2}) + |\varphi_{W_1} -\varphi_{W_2}|_{\mathcal{C}^1(I_{W_1} \cap I_{W_2})} \] where $\eta(k_1,k_2) = 0$ if $k_1=k_2$ and $\eta(k_1,k_2) = \infty$ otherwise, i.e., we only compare curves which lie in the same homogeneity strip. For $0 \leq p \leq 1$, denote by $\tilde{\mathcal{C}}^p(W)$ the set of continuous complex-valued functions on $W$ with H\"{o}lder exponent $p$, measured in the Euclidean metric, which we denote by $d_W(\cdot, \cdot)$. We then denote by $\mathcal{C}^p(W)$ the closure of $\mathcal{C}^\infty(W)$ in the $\tilde{\mathcal{C}}^p$-norm\footnote{While $\mathcal{C}^p(W)$ may not contain all of $\tilde{\mathcal{C}}^p(W)$, it does contain $\mathcal{C}^{p'}\!(W)$ for all $p'>p$.}: $| \psi |_{\mathcal{C}^p(W)} = |\psi|_{\mathcal{C}^0(W)} + H^p_W(\psi)$, where $H^p_W(\psi)$ is the H\"older constant of $\psi$ along $W$. Notice that with this definition, $|\psi_1 \psi_2 |_{\mathcal{C}^p(W)} \le |\psi_1|_{\mathcal{C}^p(W)} |\psi_2|_{\mathcal{C}^p(W)}$. We define $\tilde{\mathcal{C}}^p(M)$ and $\mathcal{C}^p(M)$ similarly. Given two functions $\psi_i\in\mathcal{C}^q(W_i,\mathbb{C})$, $q >0$, we define the distance between $\psi_1$, $\psi_2$ as \[ d_q(\psi_1,\psi_2) =|\psi_1\circ G_{W_1}-\psi_2\circ G_{W_2}|_{\mathcal{C}^q(I_{W_1} \cap I_{W_2})}. \] We will define the required Banach spaces by closing $\mathcal{C}^1(M)$ with respect to the following set of norms. For $s,p \geq 0$, define the following norms for test functions, \[ |\psi|_{W,s,p}:=|W|^s \cdot|\psi|_{\mathcal{C}^p(W)} . \] Now fix $0 < p \le \frac 13 $. 
Given a function $h \in \mathcal{C}^1(M)$, define the \emph{weak norm} of $h$ by \begin{equation} \label{eq:weak} |h|_w:=\sup_{W\in\mathcal{W}^s}\sup_{\substack{\psi \in\mathcal{C}^p(W)\\ |\psi|_{W,0,p} \leq 1}}\int_W h \psi \; dm_W . \end{equation} Choose\footnote{The restrictions on the constants are placed according to the dynamical properties of $T$. For example, $p \le 1/3$ due to the distortion bounds in {\bf (H4)}, while $\alpha < 1-\varsigma_0$ so that Lemma~\ref{lem:growth}(d) can be applied with $\varsigma = 1 - \alpha > \varsigma_0$.} $\alpha$, $\beta$, $q >0$ such that $\alpha < 1-\varsigma_0$, $q < p$ and $\beta \leq \min \{ \alpha, p-q \}$. We define the \emph{strong stable norm} of $h$ as \begin{equation} \label{eq:s-stable} \|h\|_s:=\sup_{W\in\mathcal{W}^s}\sup_{\substack{\psi\in\mathcal{C}^q(W)\\ |\psi|_{W,\alpha,q}\leq 1}}\int_W h \psi \; dm_W \end{equation} and the \emph{strong unstable norm} as \begin{equation} \label{eq:s-unstable} \|h\|_u:=\sup_{\varepsilon \leq \varepsilon_0} \; \sup_{\substack{W_1, W_2 \in \mathcal{W}^s \\ d_{\mathcal{W}^s} (W_1,W_2)\leq \varepsilon}}\; \sup_{\substack{\psi_i \in \mathcal{C}^p(W_i) \\ |\psi_i|_{W_i,0,p}\leq 1\\ d_q(\psi_1,\psi_2) \leq \varepsilon}} \; \frac{1}{\varepsilon^\beta} \left| \int_{W_1} h \psi_1 \; dm_W - \int_{W_2} h \psi_2 \; dm_W \right| \end{equation} where $\varepsilon_0 > 0$ is chosen less than $\delta_0$, the maximum length of $W \in \mathcal{W}^s$ which is determined by \eqref{eq:one step contract}. We then define the \emph{strong norm} of $h$ by \[ \|h\|_\B = \|h\|_s + b \|h\|_u \] where $b$ is a small constant chosen in Section~\ref{uniform}. We define $\B$ to be the completion of $\mathcal{C}^1(M)$ in the strong norm\footnote{As a measure, $h \in \mathcal{C}^1(M)$ is identified with $hd\mu$ according to our earlier convention. As a consequence, Lebesgue measure $dm = (\cos \varphi)^{-1} d\mu$ is not automatically included in $\B$ since $(\cos \varphi)^{-1} \notin \mathcal{C}^1(M)$. 
We will prove in Lemma~\ref{lem:lebesgue} that in fact, $m \in \B$ (and $\B_w$).} and $\B_w$ to be the completion of $\mathcal{C}^1(M)$ in the weak norm. \subsection{Distance in $\mathcal{F}$} \label{distance} We define a distance in $\mathcal{F}$ as follows. Let $\varepsilon_0$ be from \eqref{eq:s-unstable}. For $T_1, T_2 \in \mathcal{F}$ and $\varepsilon \le \varepsilon_0$, let $N_\varepsilon(\mathcal{S}^i_{-1})$ denote the $\varepsilon$-neighborhood in $M$ of the singularity set $\mathcal{S}^i_{-1}$ of $T_i^{-1}$, $i = 1,2$. We say $d_{\mathcal{F}} (T_1, T_2) \le \varepsilon$ if the maps are close away from their singularity sets in the following sense: For $x \notin N_\varepsilon(\mathcal{S}^1_{-1} \cup \mathcal{S}^2_{-1})$, \noindent \parbox{.07 \textwidth}{\bf(C1)} \parbox[t]{.91 \textwidth}{ $ \displaystyle d(T_1^{-1}(x) , T_2^{-1}(x)) \le \varepsilon$; } \noindent \parbox{.07 \textwidth}{\bf(C2)} \parbox[t]{.91 \textwidth}{ $ \displaystyle \left| \frac{J_\mu T_i(x)}{ J_\mu T_j(x)} - 1 \right| \le \varepsilon$, $i,j = 1,2$; } \noindent \parbox{.07 \textwidth}{\bf(C3)} \parbox[t]{.91 \textwidth}{ $ \displaystyle \left| \frac{J_WT_i(x)}{ J_WT_j(x)} - 1 \right| \le \varepsilon$, for any $W \in \mathcal{W}^s$, $i,j = 1,2$, and $x \in W$; } \noindent \parbox{.07 \textwidth}{\bf(C4)} \parbox[t]{.91 \textwidth}{ $ \displaystyle \| DT_1^{-1}(x) v - DT_2^{-1}(x) v \| \le \sqrt{\varepsilon}$, for any unit vector $v \in \mathcal{T}_xW$, $W \in \mathcal{W}^s$. } \subsection{Preliminary estimates} \label{preliminary} Before proving the Lasota-Yorke inequalities, we show how {\bf (H1)}-{\bf (H5)} imply several other uniform properties for our class of maps $\mathcal{F}$. In particular, we will be interested in iterating the one-step expansion relations given by {\bf (H3)}. We recall the estimates we need from \cite[Section 3.2]{demers zhang}. Let $T \in \mathcal{F}$ and $W \in \mathcal{W}^s$. 
Let $V_i$ denote the maximal connected components of $T^{-1}W$ after cutting due to singularities and the boundaries of the homogeneity strips. To ensure that each component of $T^{-1}W$ is in $\mathcal{W}^s$, we subdivide any of the long pieces $V_i$ whose length is $>\delta_0$, where $\delta_0$ is chosen in \eqref{eq:one step contract}. This process is then iterated so that given $W \in \mathcal{W}^s$, we construct the components of $T^{-n}W$, which we call the $n^{\mbox{\scriptsize th}}$ generation $\G_n(W)$, inductively as follows. Let $\G_0(W) = \{ W \}$ and suppose we have defined $\G_{n-1}(W) \subset \mathcal{W}^s$. First, for any $W' \in \G_{n-1}(W)$, we partition $T^{-1}W'$ into at most countably many pieces $W'_i$ so that $T$ is smooth on each $W'_i$ and each $W'_i$ is a homogeneous stable curve. If any $W'_i$ have length greater than $\delta_0$, we subdivide those pieces into pieces of length between $\delta_0/2$ and $\delta_0$. We define $\G_n(W)$ to be the collection of all pieces $W^n_i \subset T^{-n}W$ obtained in this way. Note that each $W^n_i$ is in $\mathcal{W}^s$ by {\bf (H2)}. At each iterate of $T^{-1}$, typical curves in $\G_n(W)$ grow in size, but there exists a portion of curves which are trapped in tiny homogeneity strips and, in the infinite horizon case, stay too close to the infinite horizon points. In Lemma~\ref{lem:growth}, we make precise the sense in which the proportion of curves that never grow to a fixed length decays exponentially fast. For $W \in \mathcal{W}^s$, $n \geq 0$, and $0 \le k \le n$, let $\G_k(W) = \{ W^k_i \}$ denote the $k^{\mbox{\scriptsize th}}$ generation pieces in $T^{-k}W$. Let $B_k(W) = \{ i : |W^k_i|< \delta_0/3 \}$ and $L_k(W) = \{ i : |W^k_i| \ge \delta_0/3 \}$ denote the index sets of the short and long elements of $\G_k(W)$, respectively. We consider $\{\G_k \}_{k=0}^n$ as a tree with $W$ as its root and $\G_k$ as the $k^{\mbox{\scriptsize th}}$ level. At level $n$, we group the pieces as follows.
Let $W^n_{i_0} \in \G_n(W)$ and let $W^k_j \in L_k(W)$ denote the most recent long ``ancestor" of $W^n_{i_0}$, i.e.\ $k = \max \{ 0 \leq \ell \le n : T^{n-\ell}(W^n_{i_0}) \subset W^\ell_j \; \mbox{and} \; j \in L_\ell \}$. If no such ancestor exists, set $k=0$ and $W^k_j = W$. Note that if $W^n_{i_0}$ is long, then $W^k_j = W^n_{i_0}$. Let \[ \I_n(W^k_j) = \{ i : W^k_j \in L_k(W) \; \mbox{is the most recent long ancestor of} \; W^n_i \in \G_n(W) \}. \] The set $\I_n(W)$ represents those curves $W^n_i$ that belong to short pieces in $\G_k(W)$ at each time step $1 \leq k \le n$, i.e.\ such $W^n_i$ are never part of a piece that has grown to length $\geq \delta_0/3$. We collect the results of \cite[Section~3.2]{demers zhang} in the following lemma. \begin{lemma} \label{lem:growth} (\cite{demers zhang}) Let $W \in \mathcal{W}^s$, $T \in \mathcal{F}$ and for $n \geq 0$, let $\I_n(W)$ and $\G_n(W)$ be defined as above. There exist constants $C_1, C_2, C_3 >0$, independent of $W$ and $T$, such that for any $n\geq 0$, \begin{itemize} \item[(a)] $\displaystyle \sum_{i \in \I_n(W)} |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \leq C_1 \theta_*^n $; \item[(b)] $\displaystyle \sum_{W^n_i \in \G_n(W)} |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \le C_2 $; \item[(c)] for any $0 \leq \varsigma \leq 1$, $\displaystyle \sum_{W^n_i \in \G_n(W)} \frac{|W^n_i|^\varsigma}{|W|^\varsigma} \; |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \le C_2^{1-\varsigma} $; \item[(d)] for $\varsigma > \varsigma_0$, $\displaystyle \sum_{W^n_i \in \G_n(W)} |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)}^\varsigma \le C_3^n$, where $C_3$ depends on $\varsigma$. \end{itemize} \end{lemma} \begin{proof} The proofs of these items are combinatorial and require no more specific information about the maps than the uniform properties given by {\bf (H2)}, {\bf (H3)} and {\bf (H4)}. \noindent (a) This is Lemma 3.1 of \cite{demers zhang}. 
The constant $C_1$ depends only on the constant relating the Euclidean norm $\| \cdot \|$ to the adapted norm $\| \cdot \|_*$. As such, $C_1$ is independent of $T \in \mathcal{F}$, $W \in \mathcal{W}^s$ and $n \in \mathbb{N}$. \noindent (b) This statement is \cite[Lemma 3.2]{demers zhang}. The constant $C_2=C_2(\delta_0,\theta_*,C_1, C_d)$. \noindent (c) This is \cite[Lemma 3.3]{demers zhang}. It follows from (b) by an application of Jensen's inequality. \noindent (d) This follows from \eqref{eq:weakened step1} and is proved in \cite[Lemma 3.4]{demers zhang}. The constant $C_3 = \delta_0^{-1} C_\varsigma (1+C_d)^{2\varsigma}$ is uniform for $T \in \mathcal{F}$, but depends on $\varsigma$. \end{proof} Next we prove a distortion bound for the stable Jacobian of $T$ along different stable curves in the following context. Let $W^1, W^2 \in \mathcal{W}^s$ and suppose there exist $U^k \subset T^{-n}W^k$, $k=1,2$, such that for $0 \le i \le n$, \begin{enumerate} \item[(i)] $T^iU^k \in \mathcal{W}^s$ and the curves $T^iU^1$ and $T^iU^2$ lie in the same homogeneity strip; \item[(ii)] $U^1$ and $U^2$ can be put into a 1-1 correspondence by a smooth foliation $\{ \gamma_x \}_{x \in U^1}$ of curves $\gamma_x \in \widehat\mathcal{W}^u$ such that $\{ T^n\gamma_x \} \subset \widehat\mathcal{W}^u$ creates a 1-1 correspondence between $T^nU^1$ and $T^nU^2$; \item[(iii)] $|T^i\gamma_x| \le 2 \max \{ |T^iU^1|, |T^iU^2| \}$, for all $x \in U^1$. \end{enumerate} Let $J_{U^k}T^n$ denote the stable Jacobian of $T^n$ along the curve $U^k$ with respect to arclength. \begin{lemma} \label{lem:angles} In the setting above, for $x \in U^1$, define $x^* \in \gamma_x \cap U^2$. 
There exists $C_0 > 0$, independent of $T \in \mathcal{F}$, $W \in \mathcal{W}^s$ and $n \ge 0$, such that \begin{enumerate} \item[(a)] $d_{\mathcal{W}^s}(U^1, U^2) \le C_0 \Lambda^{-n} d_{\mathcal{W}^s}(W^1, W^2)$; \item[(b)] $\displaystyle \left| \frac{J_{U^1}T^n(x)}{J_{U^2}T^n(x^*)} -1 \right| \; \leq \; C_0[d(T^nx,T^n x^*)^{1/3} +\theta(T^nx, T^n x^*)] $, \end{enumerate} where $\theta(T^nx, T^n x^*) $ is the angle formed by the tangent lines of $T^nU^1$ and $T^nU^2$ at $T^nx$ and $T^n x^*$, respectively. \end{lemma} \begin{proof} (a) This is essentially a graph transform argument adapted for this class of maps satisfying {\bf (H1)}. What we need to show here is that we do not need to cut curves lying in homogeneity strips any further in order to get the required contraction and control on distortion. First notice that due to the uniform expansion of $\gamma_x$ under $T^n$ given by \eqref{eq:uniform hyp} of {\bf (H1)}, we have $|\gamma_x| \le C_e C_t \Lambda^{-n} d_{\mathcal{W}^s}(W^1, W^2)$, where $C_t$ is a constant depending only on the minimum angle between $C^u(x)$ and $C^s(x)$ and between $C^u(x)$ and the horizontal direction. Again by the transversality of $\gamma_x$ with $U^1$ and $U^2$, the $r$-intervals on which the functions $\varphi_{U^1}$, $\varphi_{U^2}$ describing the curves $U^1$, $U^2$ are defined can differ by no more than $C_e C_t^2 \Lambda^{-n} d_{\mathcal{W}^s}(W^1, W^2)$. Letting $I$ denote the intersection of intervals on which both functions are defined and recalling the definition of $d_{\mathcal{W}^s}(\cdot, \cdot)$ from Section~\ref{norms}, it remains to estimate $|\varphi_{U^1} - \varphi_{U^2}|_{\mathcal{C}^1(I)}$. By the same observation as above, we have $|\varphi_{U^1} - \varphi_{U^2}|_{\mathcal{C}^0(I)} \le C_t^2 C_e \Lambda^{-n} d_{\mathcal{W}^s}(W^1, W^2)$.
In order to show that the slopes of these curves also contract exponentially, we make the usual graph transform argument using charts in the adapted norm $\| \cdot \|_*$ from {\bf (H3)}. Fix $x \in U^1$ and define charts along the orbit of $x$ so that $x_i := T^ix$, $0 \le i \le n$, corresponds to the origin in each chart with the stable direction at $x_i$ given by the horizontal axis and the unstable direction by the vertical axis in the charts. Let $\vartheta < 1$ denote the maximum absolute value of slopes of stable curves in the chart. Due to property (iii) before the statement of the lemma, we may choose the size of the charts to have stable and unstable diameters $\le C |T^iU^1|$ for each $i$, for some uniform constant $C$. The dynamics induced by $T^{-1}$ on these charts is defined by \[ \tilde T^{-1}_{x_i} = \chi^{-1}_{x_{i-1}} \circ T^{-1} \circ \chi_{x_i} \] where $\chi_{x_i}$ are smooth maps with $|\chi_{x_i}|_{\mathcal{C}^2}, |\chi^{-1}_{x_i}|_{\mathcal{C}^2} \le C$ for some uniform constant $C$. Note that $D\tilde T_{x_i}^{-1}$ and $D^2\tilde T_{x_i}^{-1}$ satisfy {\bf (H1)} with possibly larger $C_a$ and $C_e=1$. In the chart coordinates, since $\tilde T^{-1}_{x_i}(0) = 0$, we have \[ \tilde T^{-1}_{x_i} (s, t) = (A_i s + \alpha_i(s,t), B_i t + \beta_i(s,t)) \] where $A_i$ is the expansion at $x_i$ in the stable direction and $B_i$ is the contraction at $x_i$ in the unstable direction given by $D\tilde T^{-1}_{x_i}(0)$. The nonlinear functions $\alpha_i, \beta_i$ satisfy $\alpha_i(0,0)=\beta_i(0,0) =0$ and their Lipschitz constants are bounded by the maximum of \begin{equation} \label{eq:alpha lip} \| D\tilde T_{x_i}^{-1}(u) - D\tilde T_{x_i}^{-1}(v) \| \le \| D^2 \tilde T_{x_i}^{-1}(z) \| \| u- v \| \end{equation} where $u, v, z$ range over the chart at $x_i$. We fix $i$ and let $\varphi_1$, $\varphi_2$ denote two Lipschitz functions whose graphs lie in the stable cone of the chart at $x_i$ and satisfy $\varphi_j(0)=0$, $j=1,2$.
Define $L(\varphi_1, \varphi_2) = \sup_{s \neq 0} \frac{| \varphi_1(s) - \varphi_2(s) |}{|s|}$. Let $\varphi_1' = \tilde T_*^{-1} \varphi_1$ and $\varphi_2'=\tilde T_*^{-1} \varphi_2$ denote the graphs of the images of these two curves in the chart at $x_{i-1}$. We wish to estimate $L(\varphi_1', \varphi_2')$. For $s$ on the horizontal axis in the chart at $x_i$, we write, \[ \begin{split} |\varphi_1'(A_i s + & \alpha_i(s, \varphi_1(s))) - \varphi_2'(A_is + \alpha_i(s, \varphi_1(s))) | \le |\varphi_1'(A_i s + \alpha_i(s, \varphi_1(s))) - \varphi_2'(A_is + \alpha_i(s, \varphi_2(s))) | \\ & \qquad + |\varphi_2'(A_i s + \alpha_i(s, \varphi_2(s))) - \varphi_2'(A_is + \alpha_i(s, \varphi_1(s))) | \\ & \le |B_i| | \varphi_1(s) - \varphi_2(s) | + | \beta_i(s, \varphi_1(s)) - \beta_i(s, \varphi_2(s)) | + \vartheta | \alpha_i(s, \varphi_1(s)) - \alpha_i(s, \varphi_2(s)) | \\ & \le (|B_i| + \mbox{Lip}(\beta_i) + \vartheta \mbox{Lip}(\alpha_i)) | \varphi_1(s) - \varphi_2(s) | \end{split} \] On the other hand, by \eqref{eq:expansion}, \[ |A_i s + \alpha_i(s, \varphi_1(s))| \ge (|A_i| - \mbox{Lip}(\alpha_i)(1+\vartheta))|s| . \] Putting these together, we see that, \begin{equation} \label{eq:lip} L(\varphi_1', \varphi_2') \le \sup_{s \neq 0} \frac{(|B_i| + \mbox{Lip}(\beta_i) + \vartheta \mbox{Lip}(\alpha_i)) | \varphi_1(s) - \varphi_2(s) |}{(|A_i| - \mbox{Lip}(\alpha_i)(1+\vartheta))|s|} \le \frac{|B_i| + \mbox{Lip}(\beta_i) + \vartheta \mbox{Lip}(\alpha_i)}{|A_i| - \mbox{Lip}(\alpha_i)(1+\vartheta)} L(\varphi_1, \varphi_2) . \end{equation} Suppose that $x_{i-1}$ lies in the homogeneity strip $\Ho_k$ and $x_i$ lies on a curve with index $n$ according to the index given by {\bf (H1)}. 
Then by \eqref{eq:alpha lip}, \eqref{eq:expansion} and \eqref{eq:2 deriv} of {\bf (H1)}, the Lipschitz constants of $\alpha_i$ and $\beta_i$ are bounded by $C_a^{-1} n^2 k^6 (C_a^{-1} n^{-1} k^{-5}) = C_a^{-2} nk$ since the size of the chart is taken to be on the order of the length of the curve $T^iU^1$ by property (iii) of the matching. Thus, \[ L(\varphi_1', \varphi_2') \le \frac{\Lambda^{-1} + C_a^{-2} nk (1+ \vartheta)}{C_a nk^2 - C_a^{-2} nk (1+\vartheta)} L(\varphi_1, \varphi_2) \le \frac{4C_a^{-3}}{k} L(\varphi_1, \varphi_2), \] for large $k$, which can be made smaller than $\Lambda^{-1}$. Note that since $k \ge c_s n^{\upsilon_0}$ by {\bf (H1)}, this bound is also small for large $n$. Thus we may choose $N_0, K_0 >0$ such that the contraction is less than $\Lambda^{-1}$ on all curves with index $n \ge N_0$ or landing in homogeneity strip $\Ho_k$, $k \ge K_0$. On the remainder of $M$, the first and second derivatives of $T^{-1}$ are uniformly bounded by constants depending on $N_0$ and $K_0$. For curves in this part of $M$, we choose $\delta_0$, the maximum length of stable curves in $\mathcal{W}^s$, sufficiently small that the distortion given by \eqref{eq:alpha lip} is less than $\frac 12 ( \Lambda^{-1/2} - \Lambda^{-1})$. Then by \eqref{eq:lip}, since $\vartheta <1$, the contraction on these pieces is less than $\Lambda^{-1}$ as well. If $\varphi_1$ and $\varphi_2$ do not pass through the origin, the exponential contraction in the $\mathcal{C}^0$ norm coupled with the above argument yields the required contraction. \noindent (b) It is equivalent to estimate $\log \frac{J_{T^nU^1}T^{-n}(T^nx)}{J_{T^nU^2}T^{-n}(T^nx^*)}$. We write \begin{equation} \label{eq:jac split} \log \frac{J_{T^nU^1}T^{-n}(T^nx)}{J_{T^nU^2}T^{-n}(T^nx^*)} \le \sum_{i=1}^n \frac{1}{A_i} | J_{T^iU^1}T^{-1} (T^ix) - J_{T^iU^2}T^{-1}(T^ix^*)| \end{equation} where $A_i = \min \{ J_{T^iU^1}T^{-1}(T^ix), J_{T^iU^2}T^{-1}(T^ix^*) \}$.
We estimate the differences one term at a time and assume without loss of generality that the minimum for $A_i$ is attained at $T^ix$. Set $x_i = T^ix$, $x_i^* = T^ix^*$. Let $\vec{u}_1(x_i)$ denote the unit tangent vector to $T^iU^1$ at $x_i$ and notice that $J_{T^iU^1}T^{-1}(x_i) = \| DT^{-1}(x_i) \vec{u}_1 \|$. Define $\vec{u}_2(x_i^*)$ similarly. Then \[ \begin{split} | \, \| DT^{-1}(x_i) \vec{u}_1 \| - \| DT^{-1}(x_i^*) \vec{u}_2 \| \, | & \le | \, \| DT^{-1}(x_i) \vec{u}_1 \| - \| DT^{-1}(x_i) \vec{u}_2 \| \, | \\ & \qquad + | \, \| DT^{-1}(x_i) \vec{u}_2 \| - \| DT^{-1}(x_i^*) \vec{u}_2 \| \, | \\ & \le \| DT^{-1}(x_i) \| \, \| \vec{u}_1 - \vec{u}_2 \| + \| D^2T^{-1}(z_i) \| d(x_i, x_i^*) , \end{split} \] where $z_i$ is some point on $T^i\gamma_x$. Suppose $T^{-1}x_i$ lies in the homogeneity strip $\Ho_k$ and $x_i$ lies on some curve $W_n$ according to the index given in {\bf (H1)}. Then $\| DT^{-1}(x_i) \| / \| DT^{-1}(x_i) \vec{u} \| \le C$ where $C$ is some uniform constant for all unit vectors $\vec{u} \in C^s(x_i)$. Also by {\bf (H1)}, we have $|T^iU^j| \le C_a/(nk^5)$, $j=1,2$, so that by property (iii) before the statement of the lemma, $d(x_i, x_i^*) \le 2C_a/(nk^5)$. Thus \[ \frac{\| D^2T^{-1}(z_i) \| d(x_i, x_i^*)}{\| DT^{-1}(x_i) \vec{u}_1 \|} \le \frac{(C_a n^2 k^6) (2C_a/(nk^5))}{C_a^{-1} nk^2} \le \frac{2C_a^3}{k} \le 2C_a^3 d(x_{i-1}, x_{i-1}^*)^{1/3} . \] Using these estimates in \eqref{eq:jac split}, we have \[ \log \frac{J_{T^nU^1}T^{-n}(T^nx)}{J_{T^nU^2}T^{-n}(T^nx^*)} \le C \sum_{i=1}^n \left( \| \vec{u}_1(x_i) - \vec{u}_2 (x_i^*) \| + d(x_{i-1}, x_{i-1}^*)^{1/3} \right) . \] Now $\| \vec{u}_1(x_i) - \vec{u}_2(x_i^*) \| \le \theta(x_i, x_i^*) \le C_0 \Lambda^{i-n} \theta(T^nx, T^nx^*)$ by part (a) of the lemma together with the fact that curves in $\mathcal{W}^s$ have $\mathcal{C}^2$ norm uniformly bounded above. Finally, by {\bf (H1)}, $d(x_{i-1}, x_{i-1}^*) \le C_e \Lambda^{i-n-1} d(T^nx, T^nx^*)$, which completes the proof of the lemma.
\end{proof} \subsection{Properties of the Banach spaces} \label{recall property} We first prove that the weak and strong norms dominate distributional norms on $M$ in the following sense. \begin{lemma} \label{lem:distr} There exists $C >0$ such that for any $h \in \B_w$, $T \in \mathcal{F}$, $n \ge 0$ and $\psi \in \mathcal{C}^p(T^{-n}\mathcal{W}^s)$, \[ |h(\psi)| \le C |h|_w (|\psi|_\infty + H^p_n(\psi)) . \] \end{lemma} This is the analogue of Lemma~3.9 of \cite{demers zhang}, but it does not follow from the argument given there since condition {\bf (H5)} and the weakened Lasota-Yorke inequalities in Theorem~\ref{thm:uniform} suggest that the spectral radius of $\mathcal{L}_T$ can be as large as $\eta >1$. It is a consequence of Lemma~\ref{lem:distr} that the spectral radius is in fact 1 (see Section~\ref{uniform}, proof of Theorem~\ref{thm:uniform}). \begin{proof}[Proof of Lemma~\ref{lem:distr}] On each $M_\ell = \Gamma_\ell \times [-\pi/2, \pi/2]$, we partition the set $\Ho_0$ into finitely many boxes $B_j$ whose boundary curves are elements of $\mathcal{W}^s$ and $\mathcal{W}^u$ as well as the horizontal lines $\pm \pi/2 \mp 1/k_0^2$. We construct the boxes so that each $B_j$ has diameter $\le \delta_0$ and is foliated by a smooth family of stable curves $\{W_\xi\}_{\xi \in E_j} \subset \mathcal{W}^s$, each of whose elements completely crosses $B_j$ in the approximate stable direction. We decompose the smooth measure $d\mu = \cos \varphi dm$ on $B_j$ into $d\mu = \hat \mu(d\xi) d\mu_\xi$, where $\mu_\xi$ is the conditional measure of $\mu$ on $W_\xi$ and $\hat \mu$ is the transverse measure on $E_j$. We normalize the measures so that $\mu_\xi(W_\xi) = \int_{W_\xi} \cos \varphi \, dm_{W_\xi}$. Since the foliation is smooth, $d\mu_\xi = \rho_\xi \cos \varphi dm_{W_\xi}$ where $|\rho_\xi|_{\mathcal{C}^1(W_\xi)} \le C$ for some constant $C$ independent of $\xi$.
Note that $\hat \mu(E_j) \le C \delta_0$ due to the transversality of curves in $\mathcal{W}^s$ and $\mathcal{W}^u$. Next we choose on each homogeneity strip $\Ho_t$, $t \ge k_0$, a smooth foliation $\{W_\xi \}_{\xi \in E_t} \subset \mathcal{W}^s$ whose elements all have endpoints lying in the two boundary curves of $\Ho_t$. We again decompose $\mu$ on $\Ho_t$ into $d\mu = \hat \mu(d\xi) d\mu_\xi$, $\xi \in E_t$, and $d\mu_\xi = \rho_\xi \cos \varphi dm_{W_\xi}$ is normalized as above. By construction, $\hat \mu(E_t) = \mathcal{O}(1)$. Given $h \in \mathcal{C}^1(M)$, $\psi \in \mathcal{C}^p(T^{-n}\mathcal{W}^s)$, since $T^{-n}M = M$ {\em (mod 0)}, we have $h(\psi) = \int_M h \psi \, d\mu = \int _M \mathcal{L}^n h \, \psi \circ T^{-n} \, d\mu$. We split $M = \cup_\ell M_\ell$ and integrate one $\ell$ at a time. \[ \begin{split} & \int_{M_\ell} \mathcal{L}^n h \, \psi \circ T^{-n} \, d\mu = \sum_j \int_{B_j} \mathcal{L}^n h \, \psi \circ T^{-n} \, d\mu + \sum_{|t|\geq k_0} \int_{\Ho_t} \mathcal{L}^n h \, \psi \circ T^{-n} \, d\mu \\ & = \sum_j \int_{E_j} \int_{W_\xi} \mathcal{L}^nh \, \psi\circ T^{-n} \, \rho_\xi \, d\mu_W d\hat{\mu}(\xi) + \sum_{|t| \geq k_0} \int_{E_t} \int_{W_\xi} \mathcal{L}^nh \, \psi\circ T^{-n} \, \rho_\xi \, d\mu_W d\hat \mu(\xi) \end{split} \] We change variables and estimate the integrals on one $W_\xi$ at a time. 
Letting $W^n_{\xi,i}$ denote the components of $\G_n(W_\xi)$ defined in Section~\ref{preliminary} and recalling that $J_{W^n_{\xi,i}}T^n$ denotes the stable Jacobian of $T^n$ along the curve $W^n_{\xi,i}$, we write, \[ \begin{split} \int_{W_\xi} & \mathcal{L}^n h \, \psi \circ T^{-n} \, \rho_\xi \, d\mu_W = \sum_i \int_{W^n_{\xi,i}} h \psi (J_\mu T^n)^{-1} J_{W^n_{\xi,i}}T^n \rho_\xi \circ T^n \cos \varphi \circ T^n \, dm_W \\ & \leq \sum_i |h|_w |\psi|_{\mathcal{C}^p(W^n_{\xi,i})} |(J_\mu T^n)^{-1} J_{W^n_{\xi,i}}T^n|_{\mathcal{C}^p(W^n_{\xi,i})} | \rho_\xi \circ T^n|_{\mathcal{C}^p(W^n_{\xi,i})} | \cos \varphi \circ T^n|_{\mathcal{C}^p(W^n_{\xi,i})} . \end{split} \] By \eqref{eq:C1 C0}, we have $| \rho_\xi \circ T^n|_{\mathcal{C}^p(W^n_{\xi, i})} \le C |\rho_\xi|_{\mathcal{C}^p(W_\xi)} \le C$ for some uniform constant $C$. The distortion bounds given by {\bf (H4)}, equation \eqref{eq:distortion stable}, imply that \begin{equation} \label{eq:holder c0} | (J_\mu T^n)^{-1}J_{W^n_{\xi,i}}T^n|_{\mathcal{C}^p(W^n_{\xi,i})} \leq (1+2C_d) | (J_\mu T^n)^{-1} J_{W^n_{\xi, i}}T^n|_{\mathcal{C}^0(W^n_{\xi, i})} . \end{equation} For $W \in \mathcal{W}^s$, let $\cos W$ denote the average value of $\cos \varphi$ on $W$. Note that there exists $C_c >0$, depending only on $k_0$ and the uniform transversality of $C^s(x)$ with the horizontal direction, such that $C_c^{-1} \cos W \le \cos \varphi(x) \le C_c \cos W$ for all $x \in W$. Thus $| \cos \varphi \circ T^n|_\infty \le C_c \cos W_\xi$. Then since $|\cos \varphi \circ T^n(x) - \cos \varphi \circ T^n(y)| \le d_W(T^n x, T^n y)$ for $x,y \in W^n_{\xi,i}$, we have $H^p_{W^n_{\xi,i}} (\cos \varphi \circ T^n) \le C_e |W_\xi|^{1-p}$, where we have used {\bf (H1)}. If $W_\xi \subset \Ho_t$, then $\cos W_\xi \ge c t^{-2}$ while $|W_\xi| \le C' t^{-3}$ for uniform constants $c, C' >0$, depending on the minimum angle between $C^s(x)$ and the horizontal.
Thus since $p \le 1/3$, we have $|\cos \varphi \circ T^n |_{\mathcal{C}^p(W^n_{\xi,i})} \le C \cos W_\xi$ for some uniform constant $C$. Gathering these estimates together, we have \begin{equation} \label{eq:single} \int_{W_\xi} \mathcal{L}^n h \, \psi \circ T^{-n} \, \rho_\xi \, d\mu_W \le C |h|_w (|\psi|_\infty + H^p_n(\psi)) \cos (W_{\xi}) \sum_i |(J_\mu T^n)^{-1} J_{W^n_{\xi,i}}T^n|_{\mathcal{C}^0(W^n_{\xi,i})}, \end{equation} where $C$ is uniform in $T$ and $n$. We group the pieces $W^n_{\xi,i} \in \G_n(W_\xi)$ according to their most recent long ancestor $W^k_{\xi, j} \in \G_k(W_\xi)$ as described in Section~\ref{preliminary}. Then splitting up the Jacobians according to times $k$ and $n-k$ and using {\bf (H5)}, we have \begin{equation} \label{eq:long short} \begin{split} \sum_i & |(J_\mu T^n)^{-1} J_{W^n_{\xi,i}}T^n|_{\mathcal{C}^0(W^n_{\xi,i})} \le \sum_{i \in \I_n(W_\xi)} \eta^n |J_{W^n_{\xi,i}}T^n|_{\mathcal{C}^0(W^n_{\xi,i})} \\ & \; \; \; \; \; + \sum_{k=1}^n \sum_{j \in L_k(W_\xi)} |(J_\mu T^k)^{-1}J_{W^k_{\xi,j}}T^k|_{\mathcal{C}^0(W^k_{\xi,j})} \left( \sum_{i \in \I_n(W^k_{\xi,j})} \eta^{n-k} |J_{W^n_{\xi,i}}T^{n-k}|_{\mathcal{C}^0(W^n_{\xi,i})} \right) \\ & \le C_1 (\eta \theta_*)^n + \sum_{k=1}^n \sum_{j \in L_k(W_\xi)} |(J_\mu T^k)^{-1}J_{W^k_{\xi,j}}T^k|_{\mathcal{C}^0(W^k_{\xi,j})} C_1 (\eta \theta_*)^{n-k} \end{split} \end{equation} where we have used Lemma~\ref{lem:growth}(a) on each of the terms involving $\I_n(W^k_{\xi,j})$ from time $k$ to time $n$. For each $k$, since $|W^k_{\xi,j}| \ge \delta_0/3$, we have by bounded distortion {\bf (H4)}, \[ \begin{split} \sum_{j \in L_k(W_\xi)} |(J_\mu T^k)^{-1}J_{W^k_{\xi,j}}T^k|_{\mathcal{C}^0(W^k_{\xi,j})} & \le (1+C_d)^2 3 \delta_0^{-1} \sum_{j \in L_k(W_\xi)} \int_{W^k_{\xi,j}} (J_\mu T^k)^{-1} J_{W^k_{\xi,j}}T^k \, dm_W \\ & \le C \delta_0^{-1} \int_{W_\xi} (J_\mu T^k)^{-1} \, dm_W .
\end{split} \] Putting this estimate together with \eqref{eq:single} and \eqref{eq:long short} and bringing $\cos W_\xi$ into the integral, \[ \int_{W_\xi} \mathcal{L}^n h \, \psi \circ T^{-n} \, \rho_\xi \, d\mu_W \leq C |h|_w (|\psi|_\infty + H^p_n(\psi)) \Big(\cos W_\xi + \sum_{k=1}^n (\eta \theta_*)^{n-k} \int_{W_\xi} (J_\mu T^k)^{-1} \, d\mu_W \Big) \] for some uniform constant $C$. Thus \[ \begin{split} \Big| \int_{M_\ell} \mathcal{L}^n h \, & \psi \circ T^{-n} \, d\mu \Big| \le C |h|_w (|\psi|_\infty + H^p_n(\psi)) \Big( \sum_j \int_{E_j} \cos W_\xi \, \hat \mu(d\xi) + \sum_{|t| \geq k_0} \int_{E_t} \cos W_\xi \, \hat \mu(d \xi) \\ & \qquad \qquad + \sum_j \sum_{k=1}^n (\eta \theta_*)^{n-k} \int_{B_j} (J_\mu T^k)^{-1} \, d\mu + \sum_{|t| \geq k_0} \sum_{k=1}^n (\eta \theta_*)^{n-k} \int_{\Ho_t} (J_\mu T^k)^{-1} \, d\mu \Big) \\ & \! \! \! \! \! \le C |h|_w ( |\psi|_\infty + H^p_n(\psi)) \Big( \sum_j \hat \mu(E_j) + \sum_{|t| \geq k_0} t^{-2} \hat \mu(E_t) + \sum_{k=1}^n (\eta \theta_*)^{n-k} \int_{M_\ell} (J_\mu T^k)^{-1} d\mu \Big) \end{split} \] where in the last line we have used the fact that $\cos W \leq Ct^{-2}$ for $W \subset \Ho_t$. The first sum is finite since there are only finitely many $E_j$, and the second converges since $\hat \mu(E_t)$ is of order $1$ for each $t$ and $\sum_t t^{-2} < \infty$. Since there are only finitely many $M_\ell$, the first two sums remain finite when we sum over $\ell$. For the third sum, we sum over $\ell$ and use the fact that $\int_M (J_\mu T^k)^{-1} \, d\mu = 1$ for each $k \ge 1$. Thus the contribution from the third sum is uniformly bounded in $n$ using the fact that $\eta \theta_* <1$ by {\bf (H5)}.
\begin{itemize} \item[(i)] (\cite[Lemma 3.7]{demers zhang}) $\B$ contains piecewise H\"older continuous functions $h$ with exponent greater than $2\beta$ provided the discontinuities of $h$ are uniformly transverse to the stable cones $C^s(x)$. \item[(ii)] (\cite[Lemma 2.1]{demers zhang}) $\mathcal{L}$ is well-defined as a continuous linear operator on both $\B$ and $\B_w$. Moreover, there is a sequence of embeddings $\mathcal{C}^\gamma(M) \hookrightarrow \B \hookrightarrow \B_w \hookrightarrow (\mathcal{C}^p(M))'$, for all $\gamma > 2\beta$. \item[(iii)] (\cite[Lemma 3.10]{demers zhang}) The unit ball of $(\B, \| \cdot \|_\B)$ is compactly embedded in $(\B_w, | \cdot |_w)$. \end{itemize} Lemma~\ref{lem:distr} and items (i) and (ii) characterize the spaces $\B$ and $\B_w$ as spaces of distributions containing all H\"older continuous and certain classes of piecewise H\"older continuous functions. The last item is necessary in order to deduce the quasi-compactness of $\mathcal{L}_T$ from the Lasota-Yorke inequalities given by Theorem~\ref{thm:uniform}. There remains one final fact to establish. As mentioned earlier, since we identify $h \in \mathcal{C}^1(M)$ with the measure $h\mu$ as an element of $\B$, {\em a priori} Lebesgue measure may not be in $\B$. The following lemma shows that Lebesgue measure is in fact in $\B$, and therefore so is $h \, dm$ for any $h \in \mathcal{C}^1(M)$. \begin{lemma} \label{lem:lebesgue} The function $(\cos \varphi)^{-1}$ is in $\B$. Therefore, Lebesgue measure $dm = (\cos \varphi)^{-1} \, d\mu$ is also in $\B$, as is $h \, dm$ for any $h \in \mathcal{C}^1(M)$. Indeed, the product of Lebesgue measure with any piecewise H\"older continuous function as in item (i) above belongs to $\B$. \end{lemma} \begin{proof} In order to show $(\cos \varphi)^{-1} \in \B$, we must show that $(\cos \varphi)^{-1}$ can be approximated by functions $h \in \mathcal{C}^1(M)$ in the $\| \cdot \|_\B$ norm.
Since $\| f \|_\B = \sup_{k} \| f|_{\Ho_k} \|_\B$, our strategy will be to show that $\| (\cos \varphi)^{-1}|_{\Ho_k} \|_\B \le Ck^{-1/2}$ for some uniform constant $C$. We can then approximate $(\cos \varphi)^{-1}$ by $0$ in homogeneity strips of sufficiently high index. More precisely, given $\varepsilon >0$, we choose $K$ such that $CK^{-1/2} < \varepsilon$. Then on the remaining strips $k < K$, $(\cos \varphi)^{-1}$ has finite $\mathcal{C}^1$-norm and satisfies the assumptions of \cite[Lemma 3.7]{demers zhang}. Thus we may find $f_\varepsilon \in \mathcal{C}^1(M)$ as in the proof of that lemma such that \[ \| (\cos \varphi)^{-1} - f_\varepsilon \|_\B \le \sup_{k \ge K} \| (\cos \varphi)^{-1} |_{\Ho_k} \|_\B + \sup_{k < K} \| ((\cos \varphi)^{-1} - f_\varepsilon)|_{\Ho_k} \|_\B < 2 \varepsilon , \] proving that $(\cos \varphi)^{-1} \in \B$. It remains to prove the claim $\| (\cos \varphi)^{-1}|_{\Ho_k} \|_\B \le Ck^{-1/2}$. Choose $W \in \mathcal{W}^s$, $W \subset \Ho_k$, and let $\psi \in \mathcal{C}^q(W)$ with $|\psi|_{W, \alpha , q} \le 1$. Then, \[ \int_W (\cos \varphi)^{-1} \, \psi \, dm_W \le |(\cos \varphi)^{-1} |_{\mathcal{C}^0(W)} |\psi|_{\mathcal{C}^0(W)} |W| \le |(\cos \varphi)^{-1} |_{\mathcal{C}^0(W)} |W|^{1-\alpha} \] since $|\psi|_{\mathcal{C}^q(W)} \le |W|^{-\alpha}$. On $\Ho_k$, we have $|W| \le ck^{-3}$ and $(\cos\varphi)^{-1} \le Ck^2$ for some uniform constants $c$ and $C$ which depend only on the minimum angle of $C^s(x)$ with the horizontal. Then, since $\alpha < 1/6$, \begin{equation} \label{eq:k balance} |(\cos \varphi)^{-1} |_{\mathcal{C}^0(W)} |W|^{1-\alpha} \le cC k^2 k^{-3+3 \alpha} \le C'k^{-1/2} . \end{equation} Taking the suprema over $W \subset \Ho_k$ and $\psi$ with $|\psi|_{W,\alpha, q} \le 1$, we have $\| (\cos \varphi)^{-1} |_{\Ho_k} \|_s \le C' k^{-1/2}$, completing the estimate on the strong stable norm.
To estimate the strong unstable norm, let $\varepsilon \le \varepsilon_0$ and choose two curves in $\Ho_k$, $W^1, W^2 \in \mathcal{W}^s$, such that $d_{\mathcal{W}^s}(W^1, W^2) \le \varepsilon$. For $i=1, 2$, let $\psi_i \in \mathcal{C}^p(W^i)$ with $|\psi_i|_{\mathcal{C}^p(W^i)} \le 1$ and $d_q(\psi_1, \psi_2) \le \varepsilon$. Recalling the notation of Section~\ref{norms}, denote $W^i = \{ G_{W^i}(r) = (r, \varphi_{W^i}(r)) : r \in I_{W^i} \}$, $i = 1, 2$ and note that by definition of $d_{\mathcal{W}^s}(\cdot, \cdot)$, $W^1$ and $W^2$ can be put into one-to-one correspondence by a foliation of vertical line segments of length at most $\varepsilon$, except possibly near their endpoints. Denote by $U^i$ the single matched connected component of $W^i$ and by $V^i_j$ the at most 2 unmatched components of $W^i$. We let $\Theta: U^1 \to U^2$ denote the holonomy map along the vertical foliation. We estimate, \begin{equation} \label{eq:cos split} \begin{split} \int_{W^1} (\cos \varphi)^{-1} \psi_1 \, dm_W - \int_{W^2} & (\cos \varphi)^{-1} \psi_2 \, dm_W = \sum_{i,j} \int_{V^i_j} (\cos \varphi)^{-1} \psi_i \, dm_W \\ & \;\; \; + \int_{U^1} (\cos \varphi)^{-1} \psi_1 \, dm_W - \int_{U^2} (\cos \varphi)^{-1} \psi_2 \, dm_W. \end{split} \end{equation} We first estimate over the unmatched pieces $V^i_j$. Note that $|V^i_j| \le C\varepsilon$ where $C$ depends only on the minimum angle of $C^s(x)$ with the vertical. Recalling that $| \psi_i |_{\mathcal{C}^0(W^i)} \le 1$ and using \eqref{eq:k balance} since $\beta \le \alpha$, we estimate \begin{equation} \label{eq:cosine unmatched} \left| \sum_{i,j} \int_{V^i_j} (\cos \varphi)^{-1} \psi_i \, dm_W \right| \le \sum_{i,j} |V^i_j|^\beta |V^i_j|^{1-\beta} |(\cos \varphi)^{-1}|_{\mathcal{C}^0(V^i_j)} \le C \varepsilon^\beta k^{-1/2} .
\end{equation} To estimate the difference on the matched pieces $U^i$, we change variables to $U^1$ using $\Theta$, \[ \begin{split} \int_{U^1} (\cos \varphi)^{-1} \psi_1 \, dm_W & - \int_{U^2} (\cos \varphi)^{-1} \psi_2 \, dm_W = \int_{U^1} (\cos \varphi)^{-1} \psi_1 - [(\cos \varphi)^{-1} \psi_2] \circ \Theta \, J\Theta \, dm_W \\ & \le |U^1| |(\cos \varphi)^{-1} \psi_1 - [(\cos \varphi)^{-1} \psi_2] \circ \Theta \, J\Theta|_{\mathcal{C}^0(U^1)} . \end{split} \] To estimate the $\mathcal{C}^0$ norm of the test function, we split the difference into 3 terms and use the fact that $|\psi_i|_{\mathcal{C}^0} \le 1$, \begin{equation} \label{eq:cosine test} \begin{split} |(\cos \varphi)^{-1} \psi_1 & - [(\cos \varphi)^{-1} \psi_2] \circ \Theta \, J\Theta|_{\mathcal{C}^0(U^1)} \le |(\cos \varphi)^{-1} - (\cos \varphi)^{-1} \circ \Theta|_{\mathcal{C}^0(U^1)} \\ & + |(\cos \varphi)^{-1}|_{\mathcal{C}^0(U^2)} |\psi_1 - \psi_2 \circ \Theta|_{\mathcal{C}^0(U^1)} + |(\cos \varphi)^{-1}|_{\mathcal{C}^0(U^2)} |1 - J\Theta|_{\mathcal{C}^0(U^1)}. \end{split} \end{equation} For the first term above, note that for $x \in U^1$, $| \cos \varphi(x) - \cos \varphi \circ \Theta(x) | \le d(x, \Theta(x)) \le \min \{ \varepsilon , Ck^{-3} \}$ for some uniform constant $C>0$. Thus \[ |(\cos \varphi)^{-1}(x) - (\cos \varphi)^{-1} \circ \Theta(x)| \le \frac{d(x, \Theta(x))}{\cos \varphi(x) \cos \varphi \circ \Theta(x)} \le C' \frac{\varepsilon^\beta k^{-3(1-\beta)}}{k^{-4}} \le C' \varepsilon^\beta k^{3/2} , \] since $\beta \le 1/6$. To estimate the second term in \eqref{eq:cosine test}, denote $x \in U^1$ by $x = G_{W^1}(r)$ for some $r \in I_{W^1} \cap I_{W^2} =: I$. Then $|\psi_1(x) - \psi_2 \circ \Theta(x)| = |\psi_1 \circ G_{W^1}(r) - \psi_2 \circ G_{W^2}(r)| \le \varepsilon$ by definition of $d_q(\cdot , \cdot)$. Thus \[ |(\cos \varphi)^{-1}|_{\mathcal{C}^0(U^2)} |\psi_1 - \psi_2 \circ \Theta|_{\mathcal{C}^0(U^1)} \le Ck^2 \varepsilon .
\] Finally, we estimate the third term of \eqref{eq:cosine test} by noting that \[ |1-J\Theta| = \left| 1- \frac{\sqrt{1 + (\varphi'_{W^1})^2}}{\sqrt{1 + (\varphi'_{W^2})^2}} \right| \le |\varphi'_{W^1} - \varphi'_{W^2}| \le \varepsilon, \] where we have used the fact that the derivative of $\sqrt{1+ t^2}$, $\frac{t}{\sqrt{1+t^2}}$, is bounded by 1 for $t \geq 0$. Putting these 3 estimates together in \eqref{eq:cosine test}, we estimate the norm on the matched pieces by \[ \left| \int_{U^1} (\cos \varphi)^{-1} \psi_1 \, dm_W - \int_{U^2} (\cos \varphi)^{-1} \psi_2 \, dm_W \right| \le |U^1| C(\varepsilon^\beta k^{3/2} + \varepsilon k^2 + \varepsilon k^2) \le C' \varepsilon^\beta k^{-1}, \] using the fact that $|U^1| \le Ck^{-3}$. This, combined with \eqref{eq:cosine unmatched}, yields the required estimate on the strong unstable norm. \end{proof} \section{Proof of Theorem~\ref{thm:uniform}} \label{uniform} The proof of Theorem~\ref{thm:uniform} relies on the following proposition. \begin{proposition} \label{prop:ly} There exists $C>0$, depending only on {\bf (H1)}-{\bf(H5)}, such that for any $T \in \mathcal{F}$, $h \in \B$ and $n \ge 0$, \begin{eqnarray} |\mathcal{L}_T^n h|_w & \le & C \eta^n |h|_w \label{eq:weak norm} \\ \| \mathcal{L}_T^n h \|_s & \le &C ( \theta_1^{(1-\alpha)n} + \Lambda^{-qn}) \eta^n \|h\|_s + C \eta^n \delta_0^{-\alpha}|h|_w \label{eq:stable norm} \\ \| \mathcal{L}_T^n h \|_u &\le & C\eta^n \Lambda^{-\beta n} \| h\|_u + C \eta^n C_3^n \|h \|_s \label{eq:unstable norm} \end{eqnarray} \end{proposition} \begin{proof}[Proof of Theorem~\ref{thm:uniform} given Proposition~\ref{prop:ly}] Choose $1 > \sigma > \eta \max \{ \theta_1^{1-\alpha}, \Lambda^{-q}, \Lambda^{-\beta} \}$ and choose $N \ge 0$ such that \[ \begin{split} \| \mathcal{L}_T^N h \|_\B & = \| \mathcal{L}^N_T h \|_s + b \| \mathcal{L}_T^N h \|_u \le \frac{\sigma^N}{2} \| h \|_s + C \delta_0^{-\alpha} \eta^N |h|_w + b \sigma^N \| h \|_u + b C\eta^N C_3^N \|h\|_s \\ & \leq \sigma^N \| h 
\|_\B + C_{\delta_0} \eta^N |h|_w \end{split} \] provided $b$ is chosen sufficiently small with respect to $N$. This is the required inequality \eqref{eq:uniform LY} for Theorem~\ref{thm:uniform}, which implies that the essential spectral radius of $\mathcal{L}_T$ is at most $\sigma$. Outside the disk of radius $\sigma$, the spectrum of $\mathcal{L}_T$ consists of finitely many eigenvalues, each of finite multiplicity. This follows using the compactness of the unit ball of $\B$ in $\B_w$ \cite[Lemma 3.10]{demers zhang}. Despite the fact that $\eta$ may be greater than 1, the spectral radius of $\mathcal{L}_T$ equals 1. To see this, suppose $z \in \mathbb{C}$, $|z|>1$, satisfies $\mathcal{L}_T h = z h$ for some $h \in \B$, $h \neq 0$. For $\psi \in \mathcal{C}^p(M)$, Lemma~\ref{lem:distr} implies that, \[ |h(\psi)| = |z^{-n} \mathcal{L}_T^n h(\psi)| = |z^{-n} h(\psi \circ T^n)| \le |z|^{-n} C |h|_w (|\psi|_\infty + H^p_n(\psi \circ T^n)) \xrightarrow[n \to \infty]{} 0 \] since $H^p_n(\psi \circ T^n) \le C_e \Lambda^{-pn} |\psi|_{\mathcal{C}^p(M)}$ by \eqref{eq:C1 C0}. Thus $h = 0$, contradicting the assumption on $z$. The characterization of the peripheral spectrum follows from Lemmas 5.1 and 5.2 of \cite{demers zhang}. \end{proof} To prove Proposition~\ref{prop:ly}, we fix $T \in \mathcal{F}$ and prove the required Lasota-Yorke inequalities \eqref{eq:weak norm}-\eqref{eq:unstable norm}. It is shown in \cite[Section 4]{demers zhang} that $\mathcal{L}_T$ is a continuous operator on both $\B$ and $\B_w$ so that it suffices to prove the inequalities for $h \in \mathcal{C}^1(M)$. They extend to the completions by continuity. Since these estimates are similar to those in \cite{demers zhang}, our purpose in repeating them is to show how they depend explicitly on the uniform constants given by {\bf (H1)}-{\bf (H5)} and do not require additional information. 
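The way \eqref{eq:uniform LY} forces the essential spectral radius down is the elementary recursion $a_{k+1} \le \sigma^N a_k + Cb$: no matter how large the strong norm starts, iterating drives it geometrically to the level of the weak norm. A minimal numerical sketch of this mechanism (the constants are hypothetical stand-ins for $\sigma^N$, $C_{\delta_0}\eta^N$ and the weak norm, which we pretend stays fixed for illustration; this is not the operator itself):

```python
# Hypothetical constants standing in for sigma^N, C_{delta_0} eta^N and |h|_w;
# we iterate the recursion a_{k+1} <= sigma_N * a_k + C * b.
sigma_N, C, weak = 0.5, 10.0, 1.0
strong = 1e6                        # initial strong norm, arbitrarily large
for _ in range(60):
    strong = sigma_N * strong + C * weak
# the iterates converge geometrically to the fixed point C*weak/(1-sigma_N)
assert abs(strong - C * weak / (1 - sigma_N)) < 1e-6
```

The fixed point $Cb/(1-\sigma^N)$ plays the role of the weak-norm level at which the strong norm stabilizes, which is why the inequality controls the essential spectral radius but says nothing about the peripheral spectrum.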
\subsection{Estimating the weak norm} \label{weak norm} Let $h \in \mathcal{C}^1(M)$, $W \in \mathcal{W}^s$ and $\psi \in \mathcal{C}^p(W)$ such that $|\psi|_{W,0,p} \leq 1$. For $n \geq0$, we write, \begin{equation} \label{eq:start} \int_W\mathcal{L}^nh \, \psi \, dm_W =\sum_{W^n_i \in\G_n(W)}\int_{W^n_i}h\frac{J_{W^n_i}T^n}{J_\mu T^n}\psi \circ T^n dm_W \end{equation} where $J_{W^n_i}T^n$ denotes the Jacobian of $T^n$ along $W^n_i$. Using the definition of the weak norm on each $W^n_i$, we estimate \eqref{eq:start} by \begin{equation} \label{eq:weak estimate} \int_W\mathcal{L}^nh \, \psi \, dm_W \; \leq \; \sum_{W^n_i \in\G_n} |h|_w | (J_\mu T^n)^{-1}J_{W^n_i}T^n|_{\mathcal{C}^p(W^n_i)} |\psi\circ T^n|_{\mathcal{C}^p(W^n_i)} . \end{equation} For $x,y \in W^n_i$, we use {\bf (H1)} to estimate, \begin{equation} \label{eq:C1 C0} \frac{|\psi (T^nx) - \psi (T^ny)|}{d_W(T^nx,T^ny)^p}\cdot \frac{d_W(T^nx,T^ny)^p}{d_W(x,y)^p} \leq |\psi|_{\mathcal{C}^p(W)} |J_{W^n_i}T^n|^p_{\mathcal{C}^0(W^n_i)} \leq C_e \Lambda^{-pn} |\psi|_{\mathcal{C}^p(W)} , \end{equation} so that $|\psi \circ T^n|_{\mathcal{C}^p(W^n_i)} \leq C_e |\psi|_{\mathcal{C}^p(W)} \le C_e$. We use this estimate together with {\bf (H5)} and \eqref{eq:holder c0} to bound \eqref{eq:weak estimate} by \[ \int_W \mathcal{L}^nh \, \psi \, dm_W \leq C_e (1+2C_d) \eta^n |h|_w \sum_{W^n_i \in \G_n} |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \le C' \eta^n |h|_w, \] where $C' = C_e (1+2C_d) C_2$ and we have used Lemma~\ref{lem:growth}(b) for the last inequality. Taking the supremum over all $W \in \mathcal{W}^s$ and $\psi \in \mathcal{C}^p(W)$ with $|\psi|_{W,0,p} \leq 1$ yields \eqref{eq:weak norm} expressed with uniform constants given by {\bf (H1)}-{\bf (H5)}. \subsection{Estimating the strong stable norm} \label{stable norm} Let $W \in \mathcal{W}^s$ and let $W^n_i$ denote the elements of $\G_n(W)$ as defined above. 
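The contraction mechanism in \eqref{eq:C1 C0}, which is reused throughout this section, is elementary: precomposing a $p$-H\"older test function with a map that contracts distances by a factor $\lambda$ shrinks its H\"older seminorm by a factor of at most $\lambda^p$. A quick numerical sanity check of this general fact (the test function and contraction factor below are illustrative choices, not objects from the proof):

```python
import itertools

# Precomposing a p-Holder function with a contraction by the factor lam
# shrinks its Holder seminorm by at most lam**p.  psi and lam are
# illustrative, not objects from the paper.
p, lam = 1 / 3, 0.2
psi = lambda x: abs(x) ** p          # H^p(psi) = 1 on [0, 1]
f = lambda x: lam * x                # uniform contraction by lam
pts = [i / 50 for i in range(51)]
H_psi_f = max(abs(psi(f(x)) - psi(f(y))) / abs(x - y) ** p
              for x, y in itertools.combinations(pts, 2))
assert H_psi_f <= lam ** p + 1e-12   # seminorm contracted by lam**p
```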
For $\psi \in \mathcal{C}^q(W)$, $|\psi|_{W,\alpha, q} \le 1$, define $\overline{\psi}_i = |W^n_i|^{-1} \int_{W^n_i} \psi \circ T^n \, dm_W$. Using equation \eqref{eq:start}, we write \begin{equation} \label{eq:stable split} \int_W\mathcal{L}^nh\, \psi \, dm_W = \sum_{i} \int_{W^n_i}h \, \frac{J_{W^n_i}T^n}{J_\mu T^n} \,(\psi \circ T^n- \overline{\psi}_i) \, dm_W + \overline{\psi}_i \int_{W^n_i}h \, \frac{J_{W^n_i}T^n}{J_\mu T^n} \, dm_W . \end{equation} To estimate the first term of \eqref{eq:stable split}, we first estimate $|\psi \circ T^n - \overline{\psi}_i|_{\mathcal{C}^q(W^n_i)}$. If $H_W^q(\psi)$ denotes the H\"older constant of $\psi$ along $W$, then equation~\eqref{eq:C1 C0} implies \begin{equation} \label{eq:H^q} \frac{|\psi (T^nx) - \psi (T^ny)|}{d_W(x,y)^q} \leq C_e \Lambda^{-nq} H_W^q(\psi) \end{equation} for any $x,y \in W^n_i$. Since $\overline{\psi}_i$ is constant on $W^n_i$, we have $H^q_{W^n_i}(\psi \circ T^n - \overline{\psi}_i) \leq C_e \Lambda^{-qn} H^q_W(\psi)$. To estimate the $\mathcal{C}^0$ norm, note that $\overline{\psi}_i = \psi \circ T^n(y_i)$ for some $y_i \in W^n_i$. Thus for each $x \in W^n_i$, \[ | \psi \circ T^n(x) - \overline{\psi}_i| = |\psi \circ T^n(x) - \psi \circ T^n(y_i)| \leq H^q_{W^n_i}(\psi \circ T^n) |W^n_i|^q \leq C_e H^q_W(\psi) \Lambda^{-nq} . \] This estimate, together with \eqref{eq:H^q} and the fact that $|\psi|_{W,\alpha,q} \leq 1$, implies \begin{equation} \label{eq:C^q small} |\psi \circ T^n - \overline{\psi}_i|_{\mathcal{C}^q(W^n_i)} \leq C_e \Lambda^{-nq} |\psi|_{\mathcal{C}^q(W)} \leq C_e \Lambda^{-qn} |W|^{-\alpha} . 
\end{equation} We apply \eqref{eq:holder c0}, \eqref{eq:C^q small} and the definition of the strong stable norm to the first term of \eqref{eq:stable split}, \begin{equation} \label{eq:first stable} \begin{split} \sum_i \int_{W^n_i} h & \frac{J_{W^n_i}T^n}{J_\mu T^n} \, (\psi \circ T^n - \overline{\psi}_i) \, dm_W \leq (1+2C_d) C_e \sum_i \|h\|_s \frac{|W^n_i|^\alpha}{|W|^\alpha} \left| \frac{J_{W^n_i}T^n}{J_\mu T^n} \right|_{C^0(W^n_i)} \Lambda^{-qn} \\ & \leq \; \eta^n (1+2C_d) C_e \Lambda^{-qn} \|h\|_s \sum_i \frac{|W^n_i|^\alpha}{|W|^\alpha} |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \; \leq \; C_4 \eta^n \Lambda^{-qn} \|h\|_s , \end{split} \end{equation} where $C_4 = (1+2C_d) C_e C_2^{1-\alpha}$ and in the second line we have used {\bf (H5)} and Lemma~\ref{lem:growth}(c) with $\varsigma = \alpha$. For the second term of \eqref{eq:stable split}, we use the fact that $|\overline{\psi}_i| \leq |W|^{-\alpha} $ since $|\psi|_{W, \alpha, q} \le 1$. Recall the notation introduced before the statement of Lemma~\ref{lem:growth}. Grouping the pieces $W^n_i \in \G_n(W)$ according to most recent long ancestors $W^k_j \in L_k(W)$, we have \[ \begin{split} \sum_{i} |W|^{-\alpha} \int_{W^n_i}h \frac{J_{W^n_i}T^n}{J_\mu T^n} \, dm_W = & \sum_{k=1}^n \sum_{j\in L_k(W)}\sum_{ i\in \I_n(W^k_j)} |W|^{-\alpha} \int_{W^n_i}h \frac{J_{W^n_i}T^n}{J_\mu T^n} \, dm_W \\ & + \sum_{ i\in \I_n(W)} |W|^{-\alpha} \int_{W^n_i} h \frac{J_{W^n_i}T^n}{J_\mu T^n} \, dm_W \end{split} \] where we have split up the terms involving $k=0$ and $k \geq 1$. We estimate the terms with $k \geq 1$ by the weak norm and the terms with $k=0$ by the strong stable norm. 
Using again \eqref{eq:holder c0} and {\bf (H5)}, \[ \begin{split} \sum_{i} |W|^{-\alpha} \int_{W^n_i}h \frac{J_{W^n_i}T^n}{J_\mu T^n} \, dm_W & \leq \eta^n (1+2C_d) \sum_{k=1}^n\sum_{j\in L_k(W)} \sum_{i\in \I_n(W^k_j)} |W|^{-\alpha} |h|_w | J_{W^n_i}T^n |_{\mathcal{C}^0(W^n_i)}\\ & + \eta^n (1+2C_d) \sum_{\ i\in \I_n(W)} \frac{|W^n_i|^\alpha }{|W|^\alpha } \|h\|_s |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} . \end{split} \] In the first sum above corresponding to $k\geq 1$, we write \[ |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \leq |J_{W^n_i}T^{n-k}|_{\mathcal{C}^0(W^n_i)} |J_{W^k_j}T^k|_{\mathcal{C}^0(W^k_j)} . \] Thus using Lemma~\ref{lem:growth}(a) from time $k$ to time $n$, \[ \begin{split} \sum_{k=1}^n \sum_{j \in L_k} \sum_{i \in \I_n(W^k_j)} |W|^{-\alpha} |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} & \leq \sum_{k=1}^n \sum_{j \in L_k(W)} |J_{W^k_j}T^k|_{\mathcal{C}^0(W^k_j)} |W|^{-\alpha} \sum_{i \in \I_n(W^k_j)} |J_{W^n_i}T^{n-k}|_{\mathcal{C}^0(W^n_i)} \\ & \le 3 \delta_0^{-\alpha} \sum_{k=1}^n \sum_{j \in L_k(W)} |J_{W^k_j}T^k|_{\mathcal{C}^0(W^k_j)} \, \frac{|W^k_j|^\alpha}{|W|^\alpha} C_1 \theta_*^{n-k}, \end{split} \] since $|W^k_j| \ge \delta_0/3$. The inner sum is bounded by $C_2^{1-\alpha}$ for each $k$ by Lemma~\ref{lem:growth}(c) while the outer sum is bounded by $C_1/(1-\theta_*)$ independently of $n$. Finally, for the sum corresponding to $k=0$, since \[ |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \le (1+C_d) |T^nW^n_i| |W^n_i|^{-1} \le (1+C_d) |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)}, \] we use Jensen's inequality and Lemma~\ref{lem:growth}(a) to estimate, \[ \sum_{i \in \I_n(W)} \frac{|W^n_i|^\alpha}{|W|^\alpha} |J_{W^n_i}T^n|_{\mathcal{C}^0(W^n_i)} \le (1+C_d) \left( \sum_{i \in \I_n(W)} \frac{|T^nW^n_i|}{|W^n_i|} \right)^{1-\alpha} \le (1+C_d)C_1 \theta_*^{n(1-\alpha)}. 
\] Gathering these estimates together, we have \begin{equation} \label{eq:second stable} \sum_{i} |W|^{-\alpha}\left| \int_{W^n_i}h(J_\mu T^n)^{-1}J_{W^n_i}T^n \, dm_W\right| \; \leq \; C_5 \eta^n \delta_0^{-\alpha}|h|_w + C_6 \|h\|_s \eta^n \theta_*^{n(1-\alpha)} , \end{equation} where $C_5 = 3(1+2C_d) C_1C_2^{1-\alpha} /(1-\theta_*)$ and $C_6 = (1+2C_d)^2 C_1$. Putting together \eqref{eq:first stable} and \eqref{eq:second stable} proves \eqref{eq:stable norm}, \[ \|\mathcal{L}^n h\|_s \leq C' \eta^n \left( \Lambda^{-q n}+\theta_*^{n(1-\alpha)}\right)\|h\|_s + C' \eta^n \delta_0^{-\alpha} |h|_w , \] with $C' = \max \{ C_4, C_5, C_6 \}$, a uniform constant depending only on {\bf (H1)}-{\bf (H5)}. \subsection{Estimating the strong unstable norm} \label{unstable norm} Fix $\varepsilon \le \varepsilon_0$ and consider two curves $W^1, W^2 \in\mathcal{W}^s$ with $d_{\mathcal{W}^s}(W^1,W^2) \leq \varepsilon$. For $n \geq 1$, we describe how to partition $T^{-n}W^\ell$ into ``matched'' pieces $U^\ell_j$ and ``unmatched'' pieces $V^\ell_k$, $\ell=1,2$. In what follows, we use $C_t$ to denote a transversality constant which depends only on the minimum angle between various transverse directions: the minimum angle between $C^s(x)$ and $C^u(x)$, between $\mathcal{S}_{-n}^T$ and $C^s(x)$, and between $C^s(x)$ and the vertical and horizontal directions. Let $\omega$ be a connected component of $W^1 \setminus \mathcal{S}_{-n}^T$ such that $T^{-n}\omega \in \G_n(W^1)$. To each point $x \in T^{-n}\omega$, we associate a smooth curve $\gamma_x \in \widehat\mathcal{W}^u$ of length at most $C_t C_e \Lambda^{-n}\varepsilon$ such that its image $T^n\gamma_x$, if not cut by a singularity or the boundary of a homogeneity strip, will have length $C_t \varepsilon$. By {\bf (H2)}, $T^i\gamma_x \in \widehat\mathcal{W}^u$ for each $i \ge 0$. 
Doing this for each connected component of $W^1 \setminus \mathcal{S}_{-n}^T$, we subdivide $W^1 \setminus \mathcal{S}_{-n}^T$ into a countable collection of subintervals of points for which $T^n\gamma_x$ intersects $W^2 \setminus \mathcal{S}_{-n}^T$ and subintervals for which this is not the case. This in turn induces a corresponding partition on $W^2 \setminus \mathcal{S}_{-n}^T$. We denote by $V^\ell_k$ the pieces in $T^{-n}W^\ell$ which are not matched up by this process and note that the images $T^nV^\ell_k$ occur either at the endpoints of $W^\ell$ or because the curve $\gamma_x$ has been cut by a singularity. In both cases, the length of the curves $T^nV^\ell_k$ can be at most $C_t \varepsilon$ due to the uniform transversality of $\mathcal{S}_{-n}^T$ with $C^s(x)$ and of $C^s(x)$ with $C^u(x)$. In the remaining pieces the foliation $\{ T^n\gamma_x \}_{x \in T^{-n}W^1}$ provides a one-to-one correspondence between points in $W^1$ and $W^2$. We partition these pieces in such a way that the lengths of their images under $T^{-i}$ are less than $\delta_0$ for each $0 \le i \le n$ and the pieces are pairwise matched by the foliation $\{\gamma_x\}$. We call these matched pieces $\widetilde U^\ell_j$ and note that $T^i \widetilde U^\ell_j \in \G_{n-i}(W^\ell)$ for each $i = 0, 1, \ldots, n$. For convenience, we further trim the $\widetilde U^\ell_j$ to pieces $U^\ell_j$ so that $U^1_j$ and $U^2_j$ are both defined on the same arclength interval $I_j$. The at most two components of $T^n(\widetilde U^\ell_j \setminus U^\ell_j)$ have length less than $C_t \varepsilon$ due to the uniform transversality of $C^s(x)$ with the vertical direction. We attach these trimmed pieces to the adjacent $U^\ell_i$ or $V^\ell_k$ as appropriate so as not to create any additional components in the partition. We further relabel any pieces $U^\ell_j$ as $V^\ell_j$ and consider them unmatched if for some $i$, $0 \le i \le n$, $|T^i\gamma_x| > 2 |T^iU^\ell_j|$; i.e., 
we only consider pieces matched if, at each intermediate step, the distance between them is at most of the same order as their length. We do this in order to be able to apply Lemma~\ref{lem:angles} to the matched pieces. Notice that since the distance between the curves at each intermediate step is at most $C_t C_e \varepsilon$ and due to the uniform contraction of stable curves going forward, we have $|T^nV^\ell_k| \le C_t C_e^2 \varepsilon$ for all such pieces considered unmatched by this last criterion. In this way we write $W^\ell = (\cup_j T^nU^\ell_j) \cup (\cup_k T^nV^\ell_k)$. Note that the images $T^nV^\ell_k$ of the unmatched pieces must have length $\le C_v \varepsilon$ for some uniform constant $C_v$ while the images of the matched pieces $U^\ell_j$ may be long or short. Recalling the notation of Section~\ref{norms}, we have arranged a pairing of the pieces $U^\ell_j$ with the following property: \begin{equation} \label{eq:match} \begin{split} \mbox{If } \; & U^1_j = G_{U^1_j}(I_j) = \{ (r, \varphi_{U^1_j}(r)) : r \in I_j \}, \\ \mbox{then } \; & U^2_j = G_{U^2_j}(I_j) = \{ (r, \varphi_{U^2_j}(r)) : r \in I_j \}, \end{split} \end{equation} so that the point $x = (r, \varphi_{U^1_j}(r)) \in U^1_j$ can be associated with the point $\bar x = (r, \varphi_{U^2_j}(r)) \in U^2_j$ by the vertical line $\{(r,s)\}_{s\in[-\pi/2, \pi/2]}$, for each $r \in I_j$. In addition, the $U^\ell_j$ satisfy the assumptions of Lemma~\ref{lem:angles}. 
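The vertical pairing above also controls how arclength is distorted when passing from $U^1_j$ to $U^2_j$: if the two curves have slopes $a$ and $b$ at matched points, the arclength Jacobian of the vertical holonomy is $\sqrt{1+a^2}/\sqrt{1+b^2}$, and $|1 - \sqrt{1+a^2}/\sqrt{1+b^2}| \le |a-b|$, since $t \mapsto \sqrt{1+t^2}$ is a contraction and is bounded below by $1$. A brute-force numerical check of this elementary bound (the slope grid is illustrative; the bound holds for all real slopes):

```python
import math

# Check |1 - sqrt(1+a^2)/sqrt(1+b^2)| <= |a-b| on a grid of slopes.
slopes = [k * 0.1 for k in range(-20, 21)]
for a in slopes:
    for b in slopes:
        J = math.sqrt(1 + a * a) / math.sqrt(1 + b * b)  # holonomy Jacobian
        assert abs(1 - J) <= abs(a - b) + 1e-12
```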
Given $\psi_\ell$ on $W^\ell$ with $|\psi_\ell|_{W^\ell,0,p} \leq 1$ and $d_q(\psi_1, \psi_2) \leq \varepsilon$, with the above construction we must estimate \begin{align} \label{eq:unstable split} & \left|\int_{W^1} \mathcal{L}^nh \, \psi_1 \, dm_W - \int_{W^2} \mathcal{L}^nh \, \psi_2 \, dm_W \right| \; \leq \; \sum_{\ell,k} \left|\int_{V^\ell_k} h (J_\mu T^n)^{-1}J_{V^\ell_k}T^n \psi_\ell \circ T^n \, dm_W \right|\nonumber\\ & + \sum_j \left| \int_{U^1_j} h (J_\mu T^n)^{-1}J_{U^1_j}T^n \psi_1\circ T^n \, dm_W - \int_{U^2_j} h (J_\mu T^n)^{-1}J_{U^2_j}T^n \psi_2\circ T^n \, dm_W \right| \end{align} We do the estimate over the unmatched pieces $V^\ell_k$ first using the strong stable norm. Note that by \eqref{eq:C1 C0}, $|\psi_\ell \circ T^n|_{\mathcal{C}^q(V^\ell_k)} \leq C_e |\psi_\ell|_{\mathcal{C}^p(W^\ell)} \leq C_e $. We estimate as in Section~\ref{stable norm}, using the fact that $|T^nV^\ell_k| \le C_v \varepsilon$, as noted above, \begin{equation} \label{eq:first unstable} \begin{split} \sum_{\ell,k} \Big| \int_{V^\ell_k}h (J_\mu T^n)^{-1}J_{V^\ell_k}T^n\psi_\ell\circ T^n \, & dm_W \Big| \leq C_e \sum_{\ell,k} \|h\|_s |V^\ell_k|^\alpha |(J_\mu T^n)^{-1}J_{V^\ell_k}T^n|_{\mathcal{C}^q(V^\ell_k)} \\ & \leq C_e (1+2C_d) \eta^n \| h \|_s \sum_{\ell,k} |V^\ell_k|^\alpha |J_{V^\ell_k}T^n|_{\mathcal{C}^0(V^\ell_k)} \\ & \leq C' \varepsilon^{\alpha} \eta^n \|h\|_s \sum_{\ell,k} |J_{V^\ell_k}T^n|_{\mathcal{C}^0(V^\ell_k)}^{1-\alpha} \le 2 C' \varepsilon^\alpha \eta^n \|h\|_s C_3^n , \end{split} \end{equation} with $C' = C_e (1+2C_d)^2 C_v^\alpha$, where we have applied Lemma~\ref{lem:growth}(d) with $\varsigma = 1-\alpha > \varsigma_0$ since there are at most two $V^\ell_k$ corresponding to each element $W^{\ell, n}_i \in \G_n(W^\ell)$ as defined in Section~\ref{preliminary} and $|J_{V^\ell_k}T^n|_{\mathcal{C}^0(V^\ell_k)} \leq |J_{W^{\ell,n}_i}T^n|_{\mathcal{C}^0(W^{\ell,n}_i)}$ whenever $V^\ell_k \subseteq W^{\ell,n}_i$. 
\] Next, we must estimate \[ \sum_j\left|\int_{U^1_j}h(J_\mu T^n)^{-1}J_{U^1_j}T^n \, \psi_1\circ T^n \, dm_W - \int_{U^2_j}h (J_\mu T^n)^{-1}J_{U^2_j}T^n \, \psi_2\circ T^n \, dm_W \right| . \] We fix $j$ and estimate the difference. Define \[ \phi_j = ((J_\mu T^n)^{-1}J_{U^1_j}T^n \, \psi_1 \circ T^n) \circ G_{U^1_j} \circ G_{U^2_j}^{-1} . \] The function $\phi_j$ is well-defined on $U^2_j$ and we can write, \begin{equation} \label{eq:stepone} \begin{split} &\left|\int_{U^1_j}h(J_\mu T^n)^{-1}J_{U^1_j}T^n \, \psi_1\circ T^n \, dm_W - \int_{U^2_j}h (J_\mu T^n)^{-1}J_{U^2_j}T^n \, \psi_2\circ T^n \, dm_W \right|\\ &\leq \left|\int_{U^1_j}h (J_\mu T^n)^{-1}J_{U^1_j}T^n \, \psi_1\circ T^n \, dm_W - \int_{U^2_j}h \,\phi_j \, dm_W \right| +\left|\int_{U^2_j}h (\phi_j - (J_\mu T^n)^{-1}J_{U^2_j}T^n \, \psi_2\circ T^n) \, dm_W \right| . \end{split} \end{equation} We estimate the first term on the right hand side of~\eqref{eq:stepone} using the strong unstable norm. Using {\bf (H5)}, \eqref{eq:holder c0} and \eqref{eq:C1 C0}, \begin{equation} \label{eq:c1-unst 1} | (J_\mu T^n)^{-1}J_{U^1_j}T^n\cdot \psi_1 \circ T^n|_{\mathcal{C}^p(U^1_j)} \le C_e (1+2C_d) \eta^n |J_{U^1_j}T^n|_{\mathcal{C}^0(U^1_j)}. \end{equation} Notice that \begin{equation} \label{eq:graph bound} |G_{U^1_j} \circ G^{-1}_{U^2_j}|_{\mathcal{C}^1(U^2_j)} \le \sup_{r \in I_j} \frac{\sqrt{1 + (d\varphi_{U^1_j}/dr)^2}}{\sqrt{1 + (d\varphi_{U^2_j}/dr)^2}} \le \sqrt{1 + \Gamma^2} =: C_g, \end{equation} where $\Gamma$ is the maximum slope of curves in $\mathcal{W}^s$ given by {\bf (H2)}. Using this, we estimate as in \eqref{eq:c1-unst 1}, \[ |\phi_j|_{\mathcal{C}^p(U^2_j)} \le C_g C_e (1+2C_d) \eta^n |J_{U^1_j}T^n|_{\mathcal{C}^0(U^1_j)}. \] By the definition of $\phi_j$ and $d_q(\cdot, \cdot)$, \[ d_q((J_\mu T^n)^{-1}J_{U^1_j}T^n\psi_1\circ T^n, \phi_j) = \left| \left[ (J_\mu T^n)^{-1}J_{U^1_j}T^n\psi_1\circ T^n \right] \circ G_{U^1_j} - \phi_j \circ G_{U^2_j} \right|_{\mathcal{C}^q(I_j)} \; = \; 0 . 
\] By Lemma~\ref{lem:angles}(a), we have $d_{\mathcal{W}^s}(U^1_j,U^2_j)\leq C_0\Lambda^{-n} \varepsilon =: \varepsilon_1$. In view of \eqref{eq:c1-unst 1} and the estimate following it, we renormalize the test functions by $R_j = C_7 \eta^n |J_{U^1_j}T^n|_{\mathcal{C}^0(U^1_j)}$ where $C_7 = C_g C_e (1+ 2C_d)$. Then we apply the definition of the strong unstable norm with $\varepsilon_1$ in place of $\varepsilon$. Thus, \begin{equation} \label{eq:second unstable} \sum_j \left|\int_{U^1_j}h (J_\mu T^n)^{-1} J_{U^1_j}T^n \, \psi_1\circ T^n \, dm_W - \int_{U^2_j} h \, \phi_j \, dm_W \right| \leq C_7 C_0^\beta \varepsilon^\beta \Lambda^{-\beta n} \eta^n \|h\|_u \sum_j |J_{U^1_j}T^n|_{\mathcal{C}^0(U^1_j)} \end{equation} where the sum is $\le C_2$ by Lemma~\ref{lem:growth}(b) since there is at most one matched piece $U^1_j$ corresponding to each element $W^{1,n}_i \in \G_n(W^1)$ and $|J_{U^1_j}T^n|_{\mathcal{C}^0(U^1_j)} \le |J_{W^{1,n}_i}T^n|_{\mathcal{C}^0(W^{1,n}_i)}$ whenever $U^1_j \subseteq W^{1,n}_i$. It remains to estimate the second term in \eqref{eq:stepone} using the strong stable norm. \begin{equation} \label{eq:unstable strong} \left|\int_{U^2_j}h(\phi_j - (J_\mu T^n)^{-1} J_{U^2_j}T^n \psi_2 \circ T^n) \, dm_W \right| \leq \; \|h\|_s |U^2_j|^\alpha \left|\phi_j - (J_\mu T^n)^{-1} J_{U^2_j}T^n \psi_2 \circ T^n\right|_{\mathcal{C}^q(U^2_j)} . \end{equation} In order to estimate the $\mathcal{C}^q$-norm of the function in \eqref{eq:unstable strong}, we split it up into two differences. 
Since $|G_{U^\ell_j}|_{\mathcal{C}^1} \le C_g$ and $|G_{U^\ell_j}^{-1}|_{\mathcal{C}^1} \le 1$, $\ell = 1,2$, we write \begin{equation} \label{eq:diff} \begin{split} & | \phi_j - ((J_\mu T^n)^{-1}J_{U^2_j}T^n)\cdot \psi_2 \circ T^n|_{\mathcal{C}^q(U^2_j)} \\ \leq& \; \left |\left[ ((J_\mu T^n)^{-1}J_{U^1_j}T^n)\cdot\psi_1 \circ T^n\right]\circ G_{U^1_j} - \left[((J_\mu T^n)^{-1}J_{U^2_j}T^n)\cdot\psi_2 \circ T^n\right] \circ G_{U^2_j}\right|_{\mathcal{C}^q(I_j)}\\ \leq& \; \left | ((J_\mu T^n)^{-1}J_{U^1_j}T^n)\circ G_{U^1_j} \left[ \psi_1 \circ T^n\circ G_{U^1_j} -\psi_2 \circ T^n\circ G_{U^2_j}\right]\right|_{\mathcal{C}^q(I_j)}\\ &+ \left|\left[((J_\mu T^n)^{-1}J_{U^1_j}T^n) \circ G_{U^1_j}-((J_\mu T^n)^{-1}J_{U^2_j}T^n) \circ G_{U^2_j}\right]\psi_2\circ T^n\circ G_{U^2_j}\right|_{\mathcal{C}^q(I_j)}\\ \leq& \; C_g (1+2C_d) | (J_\mu T^n)^{-1}J_{U^1_j}T^n|_{\mathcal{C}^0(U^1_j)} \left|\psi_1 \circ T^n\circ G_{U^1_j} -\psi_2 \circ T^n\circ G_{U^2_j}\right|_{\mathcal{C}^q(I_j)}\\ &+ C_g C_e \left|((J_\mu T^n)^{-1}J_{U^1_j}T^n) \circ G_{U^1_j}-((J_\mu T^n)^{-1}J_{U^2_j}T^n) \circ G_{U^2_j}\right|_{\mathcal{C}^q(I_j)} \end{split} \end{equation} To bound the two differences above, we need the following lemma. \begin{lemma} \label{lem:test} There exist constants $C_8, C_9>0$, depending only on {\bf (H1)}-{\bf (H5)}, such that, \begin{itemize} \item[(a)] $\displaystyle |((J_\mu T^n)^{-1} J_{U^1_j}T^n)\circ G_{U^1_j} -((J_\mu T^n)^{-1} J_{U^2_j}T^n)\circ G_{U^2_j}|_{\mathcal{C}^q(I_j)}\leq C_8 | (J_\mu T^n)^{-1} J_{U^2_j}T^n|_{\mathcal{C}^0(U^2_j)} \varepsilon^{1/3-q};$ \item[(b)] $\displaystyle |\psi_1 \circ T^n \circ G_{U^1_j} - \psi_2 \circ T^n \circ G_{U^2_j} |_{\mathcal{C}^q(I_j)} \le C_9 \varepsilon^{p-q} .$ \end{itemize} \end{lemma} We postpone the proof of the lemma to Section~\ref{lemma proofs} and show how this completes the estimate on the strong unstable norm. 
Note that using \eqref{eq:comparable J}, we may replace $|J_{U^1_j}T^n|_{\mathcal{C}^0(U^1_j)}$ by $C|J_{U^2_j}T^n|_{\mathcal{C}^0(U^2_j)}$ where it appears in our estimates, for some uniform constant $C$. Starting from \eqref{eq:unstable strong}, we apply Lemma~\ref{lem:test} to \eqref{eq:diff} to obtain, \begin{equation} \label{eq:unstable three} \begin{split} & \sum_j \Big| \int_{U^2_j} h(\phi_j - (J_\mu T^n)^{-1} J_{U^2_j}T^n \psi_2 \circ T^n ) \, dm_W \Big| \\ & \le \bar C \|h\|_s \sum_j |U^2_j|^\alpha |(J_\mu T^n)^{-1} J_{U^2_j}T^n|_{\mathcal{C}^0(U^2_j)} \, \varepsilon^{p -q} \le \bar C \eta^n \|h\|_s \varepsilon^{p-q} \sum_j |J_{U^2_j}T^n|_{\mathcal{C}^0(U^2_j)}, \end{split} \end{equation} for some uniform constant $\bar C$ where again the sum is finite as in \eqref{eq:second unstable}. This completes the estimate on the second term in \eqref{eq:stepone}. Now we use this bound, together with \eqref{eq:first unstable} and \eqref{eq:second unstable}, to estimate \eqref{eq:unstable split} \[ \begin{split} \left|\int_{W^1} \mathcal{L}^nh \, \psi_1 \, dm_W - \int_{W^2} \mathcal{L}^nh \, \psi_2 \, dm_W \right| \; \leq \; CC_3^n \eta^n \|h\|_s \varepsilon^\alpha + C \|h\|_u \Lambda^{-\beta n} \eta^n \varepsilon^\beta + C \eta^n \|h\|_s \varepsilon^{p-q} , \end{split} \] where again $C$ depends only on {\bf (H1)}-{\bf (H5)} through the estimates above. Since $p-q \ge \beta$ and $\alpha \ge \beta$, we divide through by $\varepsilon^\beta$ and take the appropriate suprema to complete the proof of \eqref{eq:unstable norm}. \subsubsection{Proof of Lemma~\ref{lem:test}} \label{lemma proofs} First we prove the following general fact and then use it to prove Lemma~\ref{lem:test}. \begin{lemma} \label{lem:general} Let $(N,d)$ be a metric space and let $0<r<s \le 1$. Suppose $g_1, g_2 \in \mathcal{C}^s(N, \mathbb{R})$ satisfy $|g_1 - g_2|_{\mathcal{C}^0(N)} \le D_1 \varepsilon^s$ for some constant $D_1>0$. 
Then $|g_1 - g_2|_{\mathcal{C}^r(N)} \le 3 \varepsilon^{s-r} \max \{ D_1, H^s(g_1)+ H^s(g_2) \}$, where $H^s(\cdot)$ denotes the H\"older constant with exponent $s$ on $N$. \end{lemma} \begin{proof} Since $| \cdot |_{\mathcal{C}^r(N)} = | \cdot |_{\mathcal{C}^0(N)} + H^r(\cdot)$, we must estimate $H^r(g_1 - g_2)$. Let $x, y \in N$. Then on the one hand, since $|g_1 -g_2| \le D_1 \varepsilon^s$, we have \[ \frac{|(g_1(x) - g_2(x)) - (g_1(y)-g_2(y))|}{d(x,y)^r} \leq 2 D_1 \varepsilon^s d(x,y)^{-r} . \] On the other hand, using the fact that $g_1, g_2 \in \mathcal{C}^s(N)$, we have \[ \frac{|(g_1(x) - g_2(x)) - (g_1(y)-g_2(y))|}{d(x,y)^r} \leq (H^s(g_1)+H^s(g_2)) d(x,y)^{s-r} . \] These two estimates together imply that the H\"older constant of $g_1 - g_2$ is bounded by \[ H^r(g_1 - g_2) \le \sup_{x,y \in N} \min \{ 2 D_1 \varepsilon^s d(x,y)^{-r}, (H^s(g_1)+H^s(g_2)) d(x,y)^{s-r} \}. \] This expression is maximized when $ 2 D_1 \varepsilon^s d(x,y)^{-r} = (H^s(g_1)+H^s(g_2)) d(x,y)^{s-r}$, i.e., when $d(x,y) = \varepsilon \left( \frac{2D_1}{H^s(g_1) + H^s(g_2)} \right)^{1/s}$. Thus the H\"older constant of $g_1-g_2$ satisfies, \[ H^r(g_1-g_2) \le \varepsilon^{s-r} (2D_1)^{1-\frac rs}(H^s(g_1) + H^s(g_2))^{\frac rs} . \] Since $(2D_1)^{1-\frac rs}(H^s(g_1)+H^s(g_2))^{\frac rs} \le 2 \max \{ D_1, H^s(g_1)+H^s(g_2) \}$, and $|g_1 - g_2|_{\mathcal{C}^0(N)} \le D_1 \varepsilon^s \le D_1 \varepsilon^{s-r}$ for $\varepsilon \le 1$, the claimed bound follows. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:test}(a)] Throughout the proof, for ease of notation we write $J_\ell^n$ for $(J_\mu T^n)^{-1} J_{U^\ell_j}T^n$. For any $r \in I_j$, $x=G_{U^1_j}(r)$ and $\bar x=G_{U_j^2}(r)$ lie on a common vertical segment. By the construction at the beginning of Section~\ref{unstable norm}, $U^1_j$, $U^2_j$ lie in two homogeneous stable curves $\widetilde U^1_j$ and $\widetilde U^2_j$ which are connected by the foliation $\{ \gamma_x \}$. Thus $x^* := \gamma_x \cap \widetilde U^2_j$ is uniquely defined for all $x \in U^1_j$. Then $T^n(x)$ and $T^n(x^*)$ lie on the element $T^n\gamma_x \in \mathcal{W}^u$ which intersects $W^1$ and $W^2$ and has length at most $C_t \varepsilon$. 
By \eqref{eq:D u dist} and Lemma~\ref{lem:angles}(b), \[ |J_1^n(x) - J_2^n(x^*)| \leq C_d C_0 |J_2^n|_{\mathcal{C}^0(U^2_j)} (d_W(T^nx,T^n x^*)^{1/3} + \theta(T^nx, T^n x^*)) , \] where $\theta(T^nx, T^n x^*)$ is the angle between the tangent line to $W^1$ at $T^nx$ and the tangent line to $W^2$ at $T^n x^*$. Let $y \in W^2$ be the unique point in $W^2$ which lies on the same vertical segment as $T^nx$. Since by assumption $d_{\mathcal{W}^s}(W^1, W^2) \le \varepsilon$, we have $\theta(T^nx, y) \le \varepsilon$. Due to the uniform transversality of curves in $\mathcal{W}^u$ and $\mathcal{W}^s$ and the fact that $W^2$ is the graph of a $\mathcal{C}^2$ function with $\mathcal{C}^2$ norm bounded by $B$ from {\bf (H2)}, we have $\theta(y, T^n x^*) \le BC_t\varepsilon$ and so $\theta(T^nx, T^n x^*) \le (1+BC_t)\varepsilon$. Thus \begin{equation} \label{eq:inter} |J_1^n(x) - J_2^n(x^*)| \leq C_d C_0 (C_t+1+BC_t) \varepsilon^{1/3} |J_2^n|_{\mathcal{C}^0(U^2_j)} . \end{equation} Also, by \eqref{eq:distortion stable}, since $x^*$ and $\bar x$ are both on $\widetilde U^2_j$, we have $|J_2^n(x^*) - J_2^n(\bar x)| \le C_d^2 |J_2^n|_{\mathcal{C}^0(U^2_j)}d_W(x^*, \bar x)^{1/3}$. Putting this together with \eqref{eq:inter} and using the fact that $d_W(x^*,\bar x) \le C_t \varepsilon$ by the transversality of $\gamma_x$ with $\mathcal{W}^s$ yields, \begin{equation} \label{eq:J C0} |J_1^n(x) - J_2^n(\bar x)| \leq C' \varepsilon^{1/3} |J_2^n|_{\mathcal{C}^0(U^2_j)}, \end{equation} where $C' = C_d C_0 (2 C_t + 1 + BC_t)$. Now using the fact that $|G_{U^\ell_j}|_{\mathcal{C}^1(I_j)} \le C_g$ from \eqref{eq:graph bound}, we apply Lemma~\ref{lem:general} with $D_1 = C' |J_2^n|_{\mathcal{C}^0(U^2_j)}$ and $g_i = J_i^n \circ G_{U^i_j}$, $i=1,2$. 
By \eqref{eq:J C0}, we have \begin{equation} \label{eq:comparable J} |J_1^n|_{\mathcal{C}^0(U^1_j)} \le (1+C' \varepsilon^{1/3}) |J_2^n|_{\mathcal{C}^0(U^2_j)} , \end{equation} and invoking \eqref{eq:distortion stable}, we complete the proof of (a). \end{proof} \begin{proof}[Proof of Lemma~\ref{lem:test}(b)] Let $\varphi_{W^\ell}$ be the function whose graph is $W^\ell$, defined for $r \in I_{W^\ell}$, and set $f^\ell_j:=G_{W^\ell}^{-1}\circ T^n\circ G_{U^\ell_j}$, $\ell=1,2$. Notice that since $|G_{W^\ell}^{-1}|_{\mathcal{C}^1} \le 1$ and $|G_{U^\ell_j}|_{\mathcal{C}^1} \le C_g$, and due to the uniform contraction along stable curves, we have Lip$(f^\ell_j) \le C_f$, where $C_f$ is independent of $W^\ell$, $T$ and $j$. We may assume that $f^\ell_j(I_j) \subset I_{W^1} \cap I_{W^2}$ since if not, by the transversality of $C^u(x)$ and $C^s(x)$, we must be in a neighborhood of one of the endpoints of $W^\ell$ of length at most $C_t \varepsilon$; such short pieces may be estimated as in \eqref{eq:first unstable} using the strong stable norm. Thus \begin{equation} \begin{split} \label{eq:test split} |\psi_1 \circ T^n \circ G_{U^1_j} - \psi_2 \circ T^n \circ G_{U^2_j} |_{\mathcal{C}^q(I_j)} & \le |\psi_1 \circ G_{W^1} \circ f^1_j - \psi_2 \circ G_{W^2} \circ f^1_j|_{\mathcal{C}^q(I_j)} \\ &+ |\psi_2 \circ G_{W^2} \circ f^1_j - \psi_2 \circ G_{W^2} \circ f^2_j |_{\mathcal{C}^q(I_j)} . \end{split} \end{equation} Using the above observation about $f^1_j$, we estimate the first term of \eqref{eq:test split} by \begin{equation} \label{eq:test 2} |\psi_1 \circ G_{W^1} \circ f^1_j - \psi_2 \circ G_{W^2} \circ f^1_j |_{\mathcal{C}^q(I_j)} \leq C_f |\psi_1 \circ G_{W^1} - \psi_2 \circ G_{W^2}|_{\mathcal{C}^q(f^1_j(I_j))} \le C_f \varepsilon , \end{equation} since $d_q(\psi_1, \psi_2) \le \varepsilon$. 
To estimate the second term of \eqref{eq:test split}, notice that since $U^1_j$ and $U^2_j$ are joined by the transverse foliation $\{ \gamma_x \} \subset \widehat \mathcal{W}^u$ and using the uniform contraction along stable curves under $T^n$, we have $|f^1_j - f^2_j|_{\mathcal{C}^0(I_j)} \le \tilde C \varepsilon$ for a constant $\tilde C$ depending only on the uniform hyperbolicity of {\bf (H1)} and the uniform transversality conditions in {\bf (H2)}. Thus for $r \in I_j$, \begin{equation} \label{eq:test C0} |\psi_2 \circ G_{W^2} \circ f^1_j(r) - \psi_2 \circ G_{W^2} \circ f^2_j(r)| \le C_g |\psi_2|_{\mathcal{C}^p} |f^1_j(r) - f^2_j(r)|^p \leq C_g \tilde C |\psi_2|_{\mathcal{C}^p} \varepsilon^p . \end{equation} Now we again apply Lemma~\ref{lem:general} to obtain \[ |\psi_2 \circ G_{W^2} \circ f^1_j - \psi_2 \circ G_{W^2} \circ f^2_j|_{\mathcal{C}^q(I_j)} \le C |\psi_2|_{\mathcal{C}^p} \varepsilon^{p-q} , \] for a uniform constant $C$. This estimate combined with \eqref{eq:test 2} proves part (b) since $|\psi_2|_{\mathcal{C}^p(W^2)} \le 1$. \end{proof} \section{Proof of Theorem~\ref{thm:close}} \label{close} Fix $\varepsilon < \varepsilon_0$ and suppose $T_1, T_2 \in \mathcal{F}$ with $d_\mathcal{F}(T_1, T_2) \le \varepsilon$. We denote by $\mathcal{S}^\ell_{-n}$ the singularity sets for $T_\ell$, $\ell=1,2$. Let $h \in \mathcal{C}^1(M)$, $\| h \|_\B \le 1$, and $W \in \mathcal{W}^s$. Let $\psi \in \mathcal{C}^p(W)$ with $|\psi|_{W, 0 , p} \le 1$. We must estimate \begin{equation} \label{eq:L diff} \begin{split} \int_W (\mathcal{L}_1 h & - \mathcal{L}_2 h) \psi \, dm_W = \int_W \mathcal{L}_1 h \psi \, dm_W - \int_W \mathcal{L}_2 h \psi \, dm_W \\ & = \int_{T_1^{-1}W} h \, \psi \circ T_1 (J_\mu T_1)^{-1} J_{T_1^{-1}W}T_1 \, dm_W - \int_{T_2^{-1}W} h \, \psi \circ T_2 (J_\mu T_2)^{-1} J_{T_2^{-1}W}T_2 \, dm_W . 
\end{split} \end{equation} Notice that the estimate required is similar to that done in Section~\ref{unstable norm}, except that instead of two close stable curves iterated under the same map, we have one stable curve iterated under two different maps. We partition $T_1^{-1}W$ and $T_2^{-1}W$ into matched and unmatched pieces as in the beginning of Section~\ref{unstable norm}. Let $\G_1^\ell(W)$, $\ell=1,2$, denote the elements of $T_\ell^{-1}W$ as described in Section~\ref{preliminary}. Let $\omega \in \G_1^1(W)$. Due to {\bf (C1)}, to each point $x \in \omega$, we associate a curve $\gamma_x \in \widehat \mathcal{W}^u$ of length at most $C_t \varepsilon$ which terminates on a piece of $T_2^{-1}W$ that lies in the same homogeneity strip, if one exists. We also require that $\gamma_x$ is not cut by $\mathcal{S}_{1}^1 \cup \mathcal{S}_{1}^2$. We denote by $V^\ell_k$ those components of $T_\ell^{-1}W$ not matched by this process. We also include in the set of $V^\ell_k$ all images of connected components of $W \cap N_{\varepsilon}(\mathcal{S}^1_{-1} \cup \mathcal{S}^2_{-1})$ under $T_\ell^{-1}$. Note that the $T_\ell V^\ell_k$ occur either at the endpoints of $W$ or near a singularity or the boundary of $N_\varepsilon(\mathcal{S}^1_{-1} \cup \mathcal{S}^2_{-1})$. In all cases, the length of the curves $T_\ell V^\ell_k$ can be at most $C_t C_e \varepsilon$ due to the uniform transversality of $\mathcal{S}^\ell_{-1}$ with $C^s$ and of $C^s$ with $C^u$. In the remaining pieces the foliation $\{ \gamma_x \}$ provides a one-to-one correspondence between points in $T_1^{-1}W$ and $T_2^{-1}W$. We further partition these pieces in such a way that their lengths are between $\delta_0/2$ and $\delta_0$ and the pieces are pairwise matched by the foliation $\{\gamma_x\}$. We call these matched pieces $\widetilde U^\ell_j$. As in Section~\ref{unstable norm}, we trim the $\widetilde U^\ell_j$ to pieces $U^\ell_j$ so that $U^1_j$ and $U^2_j$ are defined on the same arclength interval $I_j$.
The at most two components of $T_\ell(\widetilde U^\ell_j \setminus U^\ell_j)$ have length at most $C_t C_e \Lambda^{-1} \varepsilon$. We adjoin these trimmed pieces to the adjacent $U^\ell_i$ or $V^\ell_k$ as appropriate so as not to create more pieces in the partition of $T_\ell^{-1} W$. In this way, we write $T^{-1}_\ell W = (\cup_j U^\ell_j) \cup (\cup_k V^\ell_k)$ and note that the images $T_\ell V^\ell_k$ have length at most $C_v \varepsilon$ for some uniform constant $C_v$, $\ell = 1,2$. Now using \eqref{eq:L diff}, we have \begin{equation} \label{eq:L close split} \begin{split} \int_W (\mathcal{L}_1 h & - \mathcal{L}_2 h) \psi \, dm_W = \sum_{\ell,k} \int_{V^\ell_k} h \, \psi \circ T_\ell \, (J_\mu T_\ell)^{-1} J_{V^\ell_k}T_\ell \, dm_W \\ & \; \; \; \; + \sum_j \int_{U^1_j} h \, \psi \circ T_1 \, (J_\mu T_1)^{-1} J_{U^1_j}T_1 \, dm_W - \int_{U^2_j} h \, \psi \circ T_2 \, (J_\mu T_2)^{-1} J_{U^2_j}T_2 \, dm_W . \end{split} \end{equation} We estimate the integral on short pieces $V^\ell_k$ first using the strong stable norm. By \eqref{eq:C1 C0}, we have $|\psi \circ T_\ell|_{\mathcal{C}^q(V^\ell_k)} \leq C_e |\psi|_{\mathcal{C}^p(W)} \leq C_e$. Following the estimate in \eqref{eq:first unstable}, we have \begin{equation} \label{eq:first close} \begin{split} \sum_{\ell, k} & \left|\int_{V^\ell_k}h (J_\mu T_\ell)^{-1}J_{V^\ell_k}T_\ell \, \psi \circ T_\ell \, dm\right| \leq C \varepsilon^{\alpha} \|h\|_s \sum_{\ell, k} |J_{V^\ell_k}T_\ell |_{\mathcal{C}^0(V^\ell_k)}^{1-\alpha} . \end{split} \end{equation} The sum is finite by \eqref{eq:weakened step1} of {\bf (H3)} with $\varsigma = 1-\alpha$ since there are at most two $V^\ell_k$ corresponding to each element $W^{\ell,1}_i \in \G^\ell_1(W)$ as defined in Section~\ref{preliminary} and $|J_{V^\ell_k}T_\ell |_{\mathcal{C}^0(V^\ell_k)} \leq |J_{W^{\ell, 1}_i}T_\ell |_{\mathcal{C}^0(W^{\ell, 1}_i)}$ whenever $V^\ell_k \subseteq W^{\ell, 1}_i$.
The constant $C$ above depends only on properties {\bf (H1)}-{\bf (H5)}, but for brevity we do not write out the explicit dependence since these estimates are similar to those done in Section~\ref{unstable norm} and the constants are the same. Next, we must estimate \[ \sum_j\left|\int_{U^1_j}h \, (J_\mu T_1)^{-1}J_{U^1_j}T_1 \, \psi \circ T_1 \, dm_W - \int_{U^2_j}h \, (J_\mu T_2)^{-1}J_{U^2_j}T_2 \, \psi \circ T_2 \, dm_W \right| . \] Using notation analogous to \eqref{eq:match}, we fix $j$ and estimate the difference. Define \[ \phi_j = ((J_\mu T_1)^{-1}J_{U^1_j}T_1 \, \psi \circ T_1) \circ G_{U^1_j} \circ G_{U^2_j}^{-1} . \] The function $\phi_j$ is well-defined on $U^2_j$ and we can write, \begin{equation} \label{eq:L stepone} \begin{split} &\left|\int_{U^1_j}h \, (J_\mu T_1)^{-1}J_{U^1_j}T_1 \, \psi \circ T_1 - \int_{U^2_j}h \, (J_\mu T_2)^{-1}J_{U^2_j}T_2 \, \psi \circ T_2\right|\\ &\leq \left|\int_{U^1_j}h \, (J_\mu T_1)^{-1}J_{U^1_j}T_1 \, \psi \circ T_1 - \int_{U^2_j}h \,\phi_j \right| +\left|\int_{U^2_j}h (\phi_j - (J_\mu T_2)^{-1}J_{U^2_j}T_2 \, \psi \circ T_2)\right| . \end{split} \end{equation} To estimate the two terms above, we need the following adaptation of Lemma~\ref{lem:test}. \begin{lemma} \label{lem:close} There exists $\bar C>0$, independent of $W \in \mathcal{W}^s$ and $T_1, T_2 \in \mathcal{F}$, such that for each $j$, \begin{itemize} \item[(a)] $d_{\mathcal{W}^s}(U^1_j,U^2_j)\leq \bar C \varepsilon^{1/2}$ ; \item[(b)] $ \displaystyle |((J_\mu T_1)^{-1} J_{U^1_j}T_1)\circ G_{U^1_j} -((J_\mu T_2)^{-1} J_{U^2_j}T_2)\circ G_{U^2_j}|_{\mathcal{C}^q(I_j)}\leq \bar C | (J_\mu T_2)^{-1} J_{U^2_j}T_2|_{C^0(U^2_j)} \varepsilon^{1/3-q}$ ; \item[(c)] $ \displaystyle |\psi \circ T_1 \circ G_{U^1_j} - \psi \circ T_2 \circ G_{U^2_j} |_{\mathcal{C}^q(I_j)} \le \bar C \varepsilon^{p-q} $ . \end{itemize} \end{lemma} We estimate the first term in equation~\eqref{eq:L stepone} using the strong unstable norm. 
The estimates \eqref{eq:holder c0} and \eqref{eq:C1 C0} and property {\bf (H5)} imply that \begin{equation} \label{eq:L c1-unst 1} | (J_\mu T_1)^{-1}J_{U^1_j}T_1\cdot \psi \circ T_1|_{U^1_j,0,p} \le \eta C_e |J_{U^1_j}T_1|_{\mathcal{C}^0(U^1_j)}. \end{equation} Similarly, since by \eqref{eq:graph bound}, $|G_{U^1_j} \circ G_{U^2_j}^{-1}|_{\mathcal{C}^1} \le C_g$, we have $ |\phi_j|_{U^2_j, 0,p} \le C_g \eta C_e |J_{U^1_j}T_1|_{\mathcal{C}^0(U^1_j)}$. By the definition of $\phi_j$ and $d_q(\cdot, \cdot)$, \[ d_q((J_\mu T_1)^{-1}J_{U^1_j}T_1\psi \circ T_1, \phi_j) = \left| \left[ (J_\mu T_1)^{-1}J_{U^1_j}T_1\psi \circ T_1 \right] \circ G_{U^1_j} - \phi_j \circ G_{U^2_j} \right|_{\mathcal{C}^q(I_j)} \; = \; 0 . \] In view of \eqref{eq:L c1-unst 1} and the analogous bound for $\phi_j$, we renormalize the test functions by $R_j = \eta C_g C_e |J_{U^1_j}T_1|_{\mathcal{C}^0(U^1_j)}$. Then we apply the definition of the strong unstable norm using Lemma~\ref{lem:close}(a) to obtain \begin{equation} \label{eq:second close} \sum_j \left|\int_{U^1_j}h \, (J_\mu T_1)^{-1} J_{U^1_j}T_1 \, \psi\circ T_1 - \int_{U^2_j} h \, \phi_j \; \right| \leq C \varepsilon^{\beta/2} \|h\|_u \sum_j |J_{U^1_j}T_1|_{\mathcal{C}^0(U^1_j)}, \end{equation} where the sum is $\le C_2$ by Lemma~\ref{lem:growth}(b) since there is at most one matched piece $U^1_j$ corresponding to each curve $W^{1}_i \in \G_1^1(W)$. We estimate the second term in \eqref{eq:L stepone} using the strong stable norm. \begin{equation} \label{eq:L unstable strong} \left|\int_{U^2_j}h(\phi_j - (J_\mu T_2)^{-1}J_{U^2_j}T_2 \psi \circ T_2) \right| \leq \; C \|h\|_s |U^2_j|^\alpha \left|\phi_j - (J_\mu T_2)^{-1} J_{U^2_j}T_2 \psi \circ T_2\right|_{\mathcal{C}^q(U^2_j)} . \end{equation} In order to estimate the $\mathcal{C}^q$-norm of the function in \eqref{eq:L unstable strong}, we split it up into two differences.
Following \eqref{eq:diff} line by line, we obtain \begin{equation} \label{eq:close diff} \begin{split} | \phi_j & - (J_\mu T_2)^{-1}J_{U^2_j}T_2\cdot \psi \circ T_2|_{\mathcal{C}^q(U^2_j)} \\ & \leq \; C |\, (J_\mu T_1)^{-1}J_{U^1_j}T_1|_{\mathcal{C}^0(U^1_j)} \left|\psi \circ T_1\circ G_{U^1_j} -\psi \circ T_2\circ G_{U^2_j}\right|_{\mathcal{C}^q(I_j)}\\ &\qquad + C \left|((J_\mu T_1)^{-1}J_{U^1_j}T_1) \circ G_{U^1_j}-((J_\mu T_2)^{-1}J_{U^2_j}T_2) \circ G_{U^2_j}\right|_{\mathcal{C}^q(I_j)} . \end{split} \end{equation} Notice that $|J_{U^1_j}T_1|_{\mathcal{C}^0(U^1_j)} \le C |J_{U^2_j}T_2|_{\mathcal{C}^0(U^2_j)}$ by \eqref{eq:J est}. Then using Lemma~\ref{lem:close}(b) and (c) together with \eqref{eq:close diff} in \eqref{eq:L unstable strong} yields \[ \sum_j \Big| \int_{U^2_j} h(\phi_j - (J_\mu T_2)^{-1} J_{U^2_j}T_2 \psi \circ T_2 ) \, dm_W \Big| \le C \|h\|_s \varepsilon^{p-q} \sum_j |J_{U^2_j}T_2|_{\mathcal{C}^0(U^2_j)} , \] where again the sum is finite by Lemma~\ref{lem:growth}(b). This completes the estimate on the second term in \eqref{eq:L stepone}. Now we use this bound, together with \eqref{eq:first close} and \eqref{eq:second close}, to estimate \eqref{eq:L close split}: \begin{equation} \label{eq:final close} \left|\int_{W} \mathcal{L}_1 h \, \psi \, dm_W - \int_{W} \mathcal{L}_2 h \, \psi \, dm_W \right| \; \leq \; C \|h\|_s \varepsilon^\alpha + C \|h\|_u \varepsilon^{\beta/2} + C \|h\|_s \varepsilon^{p-q} . \end{equation} Since $p-q \ge \beta$ and $\alpha \ge \beta$, the theorem is proved. \subsection{Proof of Lemma~\ref{lem:close}} \label{close lemma proofs} \begin{proof}[Proof of (a)] Note that by construction $U^1_j$ and $U^2_j$ lie in the same homogeneity strip. Also, they are both defined on the same interval $I_j$ so the length of the symmetric difference of their $r$-intervals is 0.
Recalling the definition of $d_{\mathcal{W}^s}(U^1_j, U^2_j)$, we see that it remains only to estimate $|\varphi_{U^1_j} - \varphi_{U^2_j}|_{\mathcal{C}^1(I_j)}$ for their defining functions $\varphi_{U^\ell_j}$. For $x = (r, \varphi_{U^1_j}(r))$, define $\bar x = (r, \varphi_{U^2_j}(r))$ and $x_\varepsilon = T_2^{-1} \circ T_1(x)$. We may assume that $x_\varepsilon$ lies on $\widetilde U^2_j$ since otherwise, we would be $\varepsilon$-close to one of the $V^2_k$ and such short curves can be estimated as in \eqref{eq:first close} using the strong stable norm. Since $x$ and $x_\varepsilon$ are images of the same point $u \in W$ under $T_1^{-1}$ and $T_2^{-1}$ respectively, it follows from {\bf (C1)} that $x$ and $x_\varepsilon$ are at most $\varepsilon$ apart. Then since all vectors in the stable cone have slope bounded away from $\pm \infty$, it follows that $x$ and $\bar x$ are at most $C\varepsilon$ apart (and so by the triangle inequality, also $\bar x$ and $x_\varepsilon$ are at most $C\varepsilon$ apart). This proves that $|\varphi_{U^1_j} - \varphi_{U^2_j}|_{\mathcal{C}^0(I_j)} \le C \varepsilon$. It remains to estimate $|\varphi'_{U^1_j} - \varphi'_{U^2_j}|$, where $\varphi_{U^\ell_j}'$ denotes the derivative of $\varphi_{U^\ell_j}$ with respect to $r$. Let $\vec{v}_W(u)$ be the unit tangent vector to $W$ at $u := T_1(x) = T_2(x_\varepsilon)$, as before. The tangent vector to $U^\ell_j$ is given by $DT_\ell^{-1}(u)\vec{v}_W(u)$, $\ell =1,2$. By {\bf (C4)}, \begin{equation} \label{eq:close slope} \| DT_1^{-1}(u) \vec{v}_W(u) - DT_2^{-1}(u) \vec{v}_W(u) \| \le \varepsilon^{1/2} . \end{equation} Then since $\| DT_\ell^{-1}(u) \vec{v}_W(u) \| \ge C_e^{-1}$ by {\bf (H1)}, we have $\theta(x, x_\varepsilon) \le C_e \varepsilon^{1/2}$, where $\theta(x, x_\varepsilon)$ is the angle between the tangent vectors to $U^1_j$ and $U^2_j$ at $x$ and $x_\varepsilon$ respectively.
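For completeness, this last angle estimate can be sketched explicitly. Writing $v_\ell := DT_\ell^{-1}(u)\vec{v}_W(u)$, $\ell = 1,2$, for the tangent vectors appearing in \eqref{eq:close slope} (the constant below may differ from $C_e$ by a uniform factor):

```latex
% Sketch of the angle bound, using the wedge (cross) product in R^2:
% |v_1 \wedge v_2| = \|v_1\| \|v_2\| \sin\theta and v_2 \wedge v_2 = 0.
\[
  \sin \theta(x, x_\varepsilon)
    \;=\; \frac{|v_1 \wedge v_2|}{\|v_1\|\,\|v_2\|}
    \;=\; \frac{|(v_1 - v_2) \wedge v_2|}{\|v_1\|\,\|v_2\|}
    \;\le\; \frac{\|v_1 - v_2\|}{\|v_1\|}
    \;\le\; C_e \, \varepsilon^{1/2} ,
\]
% by \eqref{eq:close slope} and \|v_1\| \ge C_e^{-1} from (H1); since both
% vectors lie in the stable cone, \theta is uniformly bounded away from \pi,
% so \theta(x, x_\varepsilon) is comparable to \sin\theta(x, x_\varepsilon).
```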
For $y \in U^\ell_j$, let $\phi(y)$ denote the angle that the tangent line to $U^\ell_j$ makes with the positive $r$-axis at $y$. Then \[ |\varphi'_{U^1_j}(x) - \varphi'_{U^2_j}(\bar x) | = |\tan \phi(x) - \tan \phi(\bar x)| \leq \big[ \sup_{z \in U^\ell_j} \sec^2 \phi(z) \big] |\phi(x) - \phi(\bar x)| = \big[ \sup_{z \in U^\ell_j} \sec^2 \phi(z) \big] \theta(x, \bar x) . \] Since the slopes of curves in $C^s(x)$ are uniformly bounded away from $\pm \infty$, we have $\sec^2 \phi(z)$ uniformly bounded above for any $z \in U^\ell_j$. The proof of part (a) is completed by writing $\theta(x, \bar x) \le \theta(x, x_\varepsilon) + \theta(x_\varepsilon, \bar x)$. The first term is $\le C \varepsilon^{1/2}$ using \eqref{eq:close slope} and the second term is $\le C \varepsilon$ since $x_\varepsilon$ and $\bar x$ both lie on $\widetilde U^2_j$ and stable curves have a uniform $\mathcal{C}^2$ bound by {\bf (H2)}. \end{proof} \begin{proof}[Proof of (b)] We prove that the closeness condition {\bf (C3)} implies the existence of a constant $C>0$, independent of $W \in \mathcal{W}^s$ and $T_1, T_2 \in \mathcal{F}$, such that \begin{equation} \label{eq:J close} |J_{U^1_j}T_1 \circ G_{U^1_j} - J_{U^2_j}T_2 \circ G_{U^2_j}|_{\mathcal{C}^q(I_j)} \le C |J_{U^2_j}T_2|_{\mathcal{C}^0(U^2_j)} \varepsilon^{1/3-q}. \end{equation} The analogous statement concerning $(J_\mu T_\ell)^{-1}$ follows from condition {\bf (C2)}. Then since \[ |f_1 g_1 - f_2 g_2|_{\mathcal{C}^q} \le |f_1|_{\mathcal{C}^q} |g_1 - g_2|_{\mathcal{C}^q} + |g_2|_{\mathcal{C}^q} |f_1 - f_2|_{\mathcal{C}^q}, \] for any $\mathcal{C}^q$ functions $f_1, g_1, f_2, g_2$, part (b) of the lemma follows from these two estimates using the fact that $| \cdot |_{\mathcal{C}^q} \le (1+C_d) | \cdot |_{\mathcal{C}^0}$ by bounded distortion for the functions we are estimating. We proceed to prove \eqref{eq:J close}.
For any $r \in I_j$, we write \begin{equation} \label{eq:split J} \begin{split} |J_{U^1_j}T_1 \circ G_{U^1_j}(r) - J_{U^2_j}T_2 \circ G_{U^2_j}(r)| \; \; \le \; \; & |J_{U^1_j}T_1 \circ G_{U^1_j}(r) - J_{U^1_j}T_2 \circ G_{U^1_j}(r)| \\ & + |J_{U^1_j}T_2 \circ G_{U^1_j}(r) - J_{U^2_j}T_2 \circ G_{U^2_j}(r)| \end{split} \end{equation} The first term above is $\le |J_{U^1_j}T_2|_{\mathcal{C}^0(I_j)} \varepsilon$ by {\bf (C3)}. Recall that $U^1_j, U^2_j$ lie inside the longer curves $\widetilde U^1_j, \widetilde U^2_j$ which are matched by the foliation $\{ \gamma_x \}_{x \in \widetilde U^1_j} \subset \widehat \mathcal{W}^u$. Thus $|J_{U^1_j}T_2|_{\mathcal{C}^0(I_j)} \le C |J_{U^2_j}T_2|_{\mathcal{C}^0(I_j)}$ by the same argument used to prove \eqref{eq:comparable J}, completing the estimate on the first term of \eqref{eq:split J}. The second term of \eqref{eq:split J} is $\le C' \varepsilon^{1/3} |J_{U^2_j}T_2|_{\mathcal{C}^0(I_j)}$ using \eqref{eq:J C0} since it involves the Jacobian of a single map in $\mathcal{F}$ evaluated on two stable curves that are matched by a foliation of unstable curves. Thus \begin{equation} \label{eq:J est} |J_{U^1_j}T_1 \circ G_{U^1_j}(r) - J_{U^2_j}T_2 \circ G_{U^2_j} (r)| \leq C \varepsilon^{1/3} |J_{U^2_j}T_2|_{\mathcal{C}^0(U^2_j)} . \end{equation} This implies in particular that $|J_{U^1_j}T_1|_{\mathcal{C}^0(U^1_j)} \le C |J_{U^2_j}T_2|_{\mathcal{C}^0(U^2_j)}$. Now we use \eqref{eq:distortion stable} and the fact that $|G_{U^\ell_j}|_{\mathcal{C}^1(I_j)} \le C_g$ to apply Lemma~\ref{lem:general} and complete the proof of \eqref{eq:J close}. \end{proof} \begin{proof}[Proof of (c)] Let $x = (r, \varphi_{U^1_j}(r))$ and as above, define $\bar x = (r, \varphi_{U^2_j}(r))$ and $x_\varepsilon = T_2^{-1} \circ T_1(x)$. Since $\bar x$ and $x_\varepsilon$ are at most $C \varepsilon$ apart and lie on $\widetilde U^2_j$, we have $d_W(T_2 \bar x, T_2 x_\varepsilon) \le C \varepsilon$ by the uniform contraction given by {\bf (H1)}. 
Thus, \begin{equation} \label{eq:psi close} |\psi \circ T_1 \circ G_{U^1_j}(r) - \psi \circ T_2 \circ G_{U^2_j}(r)| \leq |\psi|_{\mathcal{C}^p(W)} d_W(T_1x,T_2 \bar x)^p . \end{equation} Since $d_W(T_1 x, T_2 x_\varepsilon)= 0$, we may use the triangle inequality to conclude that the difference above is bounded by $C |\psi|_{\mathcal{C}^p(W)} \varepsilon^p$. Again applying Lemma~\ref{lem:general} with $|\psi|_{\mathcal{C}^p(W)} \le 1$ completes the proof of part (c). \end{proof} \section{Proofs of Applications: Movements and Deformations of Scatterers and Random Perturbations} \label{perts} In this section we prove Theorems~\ref{thm:F1}, \ref{thm:deform} and \ref{thm:random} and leave Theorems~\ref{thm:C1} and \ref{thm:C2} regarding external forces and kicks to Section~\ref{kick} since they require more background material. \subsection{Proof of Theorem~\ref{thm:F1}} We fix constants $\tau_*, \mathcal{K}_* >0$ and $E_* < \infty$ and denote $\mathcal{F}_1(\tau_*, \mathcal{K}_*, E_*)$ simply by $\mathcal{F}_1$ for brevity. Note that every $T \in \mathcal{F}_1$ is a billiard map corresponding to a standard Lorentz gas with convex scatterers so that we may recall known facts about such maps to establish {\bf (H1)}-{\bf (H5)} with constants depending only on the three quantities $\tau_*$, $\mathcal{K}_*$ and $E_*$. \noindent {\em (H1)}. For $x \in M$, define \[ \begin{split} C^s(x) & = \{ (dr, d\varphi) \in \mathcal{T}_xM : - \mathcal{K}_*^{-1} - \tau_*^{-1} \le d\varphi/dr \le - \mathcal{K}_* \} \\ \mbox{and} \; \; C^u(x) & = \{ (dr, d\varphi) \in \mathcal{T}_xM : \mathcal{K}_* \le d\varphi/dr \le \mathcal{K}_*^{-1} + \tau_*^{-1} \} . \end{split} \] Then for any $T \in \mathcal{F}_1$, $DT_xC^u(x) \subset C^u(Tx)$ and $DT^{-1}_xC^s(x) \subset C^s(T^{-1}x)$ whenever $DT_x$ and $DT^{-1}_x$ are defined.
Moreover, \eqref{eq:uniform hyp} is satisfied with $\Lambda = 1 + 2\mathcal{K}_* \tau_*$ and \[ C_e = \frac{2\tau_* \mathcal{K}_*}{\Lambda}\frac{\sqrt{1 + \mathcal{K}_*^2}}{\sqrt{1 + (\mathcal{K}_*^{-1}+\tau_*^{-1})^2}}, \] (see \cite[Section 4.4]{chernov book}). Notice that $C^s$ and $C^u$ are uniformly transverse to each other and to the vertical and horizontal directions in $M$ as required. The bounds on the first and second derivatives of $T$ required by \eqref{eq:expansion} and \eqref{eq:2 deriv} are standard for such maps (\cite[Section 4.4]{chernov book}). Here, the index $n$ corresponds to the free flight time $\tau(T^{-1}x)$. For finite horizon, this has a uniform upper bound, while for infinite horizon, the relation between $k$ and $n$ is satisfied with $\upsilon_0 = 1/4$ (\cite[Section 5.10]{chernov book}). \noindent {\em (H2)}. We say a $\mathcal{C}^2$ curve $W$ in $M$ is stable if its tangent vectors $\mathcal{T}_xW$ lie in $C^s(x)$ as defined above for each $x \in W$. We call a stable curve homogeneous if it is contained in a single homogeneity strip $\Ho_k$. Since each stable curve $W$ has slope bounded away from infinity, we may identify $W$ with the graph of a function of $r$, which we denote by $\varphi_W(r)$. By \cite[Proposition 4.29]{chernov book}, we may choose $B$ depending only on $\tau_*$, $\mathcal{K}_*$ and $E_*$ such that if $\frac{d^2\varphi_W}{dr^2} \le B$, then each smooth component $W'$ of $T^{-1}W$ satisfies $\frac{d^2\varphi_{W'}}{dr^2} \le B$. We define $\widehat \mathcal{W}^s$ to be the set of all stable homogeneous curves $W$ such that $\frac{d^2\varphi_W}{dr^2} \le B$. The invariance of the cone family $C^s(x)$ as well as the choice of $B$ guarantee that $\widehat \mathcal{W}^s$ is invariant as required. The set of unstable curves $\widehat \mathcal{W}^u$ is defined similarly. \noindent {\em (H3)}.
Following \cite[Section 5.10]{chernov book}, we define the adapted norm in the tangent space at $x \in M$ by \[ \| v \|_* = \frac{\mathcal{K}(x) + |\mathcal{V}|}{\sqrt{1 + \mathcal{V}^2}} \| v \|, \; \; \; \forall v \in C^s(x) \cup C^u(x) \] where $v = (dr, d\varphi)$ is a tangent vector, $\mathcal{V} = d\varphi/dr$ and $\mathcal{K}(x)$ is the curvature of the scatterer at $x$. Since the slopes of vectors in $C^s(x)$ and $C^u(x)$ are bounded away from $\pm \infty$, we may extend $\| \cdot \|_*$ to all of $\mathbb{R}^2$ in such a way that $\| \cdot \|_*$ is uniformly equivalent to $\| \cdot \|$. It is straightforward to check that for $v \in C^u(x)$, \[ \frac{\| DT(x) v \|_*}{\| v \|_*} \ge 1 + 2\mathcal{K}_* \tau_* = \Lambda . \] Uniform expansion in $C^s(x)$ under $DT^{-1}(x)$ follows similarly. Now \eqref{eq:step1} follows from \cite[Lemma 5.56]{chernov book} and \eqref{eq:weakened step1} follows from \cite[Sublemma 3.5]{demers zhang} with $\varsigma_0 = 1/6$. From this point forward, we consider $k_0$ to be fixed. \noindent {\em (H4)}. The bounded distortion constant $C_d$ in \eqref{eq:distortion stable} and \eqref{eq:D u dist} depends only on the choice of $k_0$ from {\bf (H3)} and the uniform hyperbolicity constants $C_e$ and $\Lambda$ (\cite[Lemma 5.27]{chernov book}). \noindent {\em (H5)}. For maps in $\mathcal{F}_1$, $J_\mu T(x) \equiv 1$ so we may take $\eta=1$. \subsection{Proof of Theorem~\ref{thm:deform}} Fix constants $\tau_*, \mathcal{K}_* >0$ and $E_* < \infty$ and consider a configuration $Q_0 \in \mathcal{Q}_1(\tau_*, \mathcal{K}_*, E_*)$ with scatterers $\Gamma_1, \ldots, \Gamma_d$. Choose $\gamma \le \frac 12 \min \{ \tau_*, \mathcal{K}_* \}$ and let $\tilde{Q} \in \mathcal{F}_A(Q_0, E_*; \gamma)$ with scatterers $\tilde{\Gamma}_1, \ldots, \tilde{\Gamma}_d$.
Since $\ell(I_i) = |\partial \Gamma_i| = |\partial \tilde{\Gamma}_i|$ we may take the corresponding functions $u_i, \tilde{u}_i$ to be arclength parametrizations of $\partial \Gamma_i$ and $\partial \tilde{\Gamma}_i$ respectively. We denote by $u'_i$ and $u''_i$ the first and second derivatives of $u_i$ with respect to the arclength parameter $r$. Then the curvature of $\partial \Gamma_i$ is simply given by $\mathcal{K}(r) = \| u''_i(r) \|$ at each point $u_i(r) \in \partial \Gamma_i$, and similarly for $\partial \tilde{\Gamma}_i$. Thus on $\partial \tilde{\Gamma}_i$, we have by assumption on $\tilde{Q}$ and $\gamma$, \[ \tilde{\mathcal{K}}(r) = \| \tilde{u}''_i \| = \| u''_i + \tilde{u}''_i - u''_i \| \ge \mathcal{K}(r) - \gamma \ge \mathcal{K}_*/2. \] Also, $\tau_{\min}(\tilde{Q}) \ge \tau_{\min}(Q_0) - \gamma \ge \tau_*/2$ since $\| u_i - \tilde{u}_i \| \le \gamma$. Thus $\mathcal{F}_A(Q_0, E_*; \gamma) \subset \mathcal{F}_1(\tau_*/2, \mathcal{K}_*/2, E_*)$. Next we must show that $\tilde{Q} \in \mathcal{F}_A(Q_0, E_*; \gamma)$ represents a small perturbation in the distance $d_\mathcal{F}(\cdot, \cdot)$. We do this by first fixing $\Gamma_2, \ldots, \Gamma_d$ and considering a deformation of $\Gamma_1$ into $\tilde{\Gamma}_1$ such that $| u_1 - \tilde{u}_1 |_{\mathcal{C}^2} \le \gamma$. Let $T_0$ be the map corresponding to $Q_0$ and let $T_1$ be the map corresponding to $\tilde{Q}$. We fix $x = (r, \varphi) \in I_1 \times [-\pi/2, \pi/2]$ and compare $T_0^{-1}x$ with $T_1^{-1}x$. To do this, we let $\Phi^0_t$ and $\Phi^1_t$ denote the flow on the tables $Q_0$ and $\tilde{Q}$ respectively. We denote by $\pi_0(x)$ the projection of $x$ onto the flow space $\mathbb{T}^2 \times S^1$ corresponding to $Q_0$ and by $\pi_0^q$ and $\pi_0^\theta$ the projections onto the position and angular coordinates respectively. Let $\tau_0(x)$ denote the free flight time of $x$ under $\Phi^0_t$ and let $\mathcal{K}_0(\cdot)$ denote the curvature of the scatterers in $Q_0$. 
The analogous objects, $\pi_1, \pi_1^q, \pi_1^\theta, \tau_1(\cdot)$ and $\mathcal{K}_1(\cdot)$ are defined for the table $\tilde{Q}$. First suppose that $T_0^{-1}x$ and $T_1^{-1}x$ lie on the same scatterer $\Gamma_j$. Notice that the trajectories $\Phi^0_{-t}(\pi_0x)$ and $\Phi^1_{-t}(\pi_1x)$ begin from two points in $\mathbb{T}^2$ at most $\gamma$ apart and make an angle of at most $\gamma$ with one another. We decompose this motion into the sum of (I) two parallel trajectories starting a distance $\gamma$ apart and (II) two trajectories starting at the same point and making an angle $\gamma$. \noindent {\em I. Parallel trajectories.} It is an elementary estimate that two parallel lines a distance $\gamma$ apart will intersect a convex scatterer at points satisfying \begin{equation} \label{eq:parallel} d_{\mathbb{T}^2}(\pi_0^q(T^{-1}_0x), \pi_1^q(T_1^{-1}x)) \le \sqrt{3 \gamma/\mathcal{K}_{\min}(\Gamma_j)} \le \sqrt{3 \gamma/ \mathcal{K}_*}, \end{equation} where $d_{\mathbb{T}^2}$ denotes distance on $\mathbb{T}^2$. \noindent {\em II. Nonparallel trajectories making an angle $\gamma \neq 0$.} After time $t$ under the flow, the two trajectories will be at most $t\gamma$ apart in $\mathbb{T}^2$. Let $\tau(x_{-1}) = \max\{ \tau_0(T_0^{-1}x), \tau_1(T_1^{-1}x) \}$. Then in the case of a finite horizon Lorentz gas, by the same estimate as in \eqref{eq:parallel}, \begin{equation} \label{eq:finite horizon} d_{\mathbb{T}^2}(\pi_0^q(T^{-1}_0x), \pi_1^q(T_1^{-1}x)) \le \sqrt{3 \gamma \tau(x_{-1})/\mathcal{K}_{\min}(\Gamma_j)} \le \sqrt{3 \gamma \tau_{\max} / \mathcal{K}_*}. \end{equation} In the infinite horizon case, define $\hat \tau = \gamma^{-1/3}$. If $\tau(x_{-1}) \le \hat \tau$, then \eqref{eq:finite horizon} implies $d_{\mathbb{T}^2}(\pi_0^q(T^{-1}_0x), \pi_1^q(T_1^{-1}x)) \le \sqrt{3 / \mathcal{K}_*} \gamma^{1/3}$. On the other hand, suppose $\tau_0(T_0^{-1}x) > \hat \tau$.
Then $x$ lies in a cell $D_n$ such that $c^{-1} n \le \tau_0(T_0^{-1}y) \le c n$ for some $c >0$ and all $y \in D_n$, and the width of $D_n$ in the stable direction is at most $C'/n$ (see \cite[Section 4.10]{chernov book}). Thus \begin{equation} \label{eq:tau} d_M(x, \mathcal{S}_{-1}^{T_0}) \le C' n^{-1} \le C' c \tau_0^{-1}(T_0^{-1}x) \le C' c \hat \tau^{-1} \le C' c \gamma^{1/3} . \end{equation} An identical estimate holds if $\tau_1(T_1^{-1}x) > \hat \tau$. Thus either $x \in N_{C\gamma^{1/3}}(\mathcal{S}_{-1}^{T_0} \cup \mathcal{S}_{-1}^{T_1})$ or \begin{equation} \label{eq:infinite horizon} d_{\mathbb{T}^2}(\pi_0^q(T^{-1}_0x), \pi_1^q(T_1^{-1}x)) \le \sqrt{3 / \mathcal{K}_*} \gamma^{1/3}. \end{equation} \noindent Combining the two estimates (I) and (II), we see that in terms of position coordinates, the distance between $T^{-1}_0x$ and $T^{-1}_1x$ in $I_j$ is of order $\gamma^{1/2}$ in the finite horizon case and of order $\gamma^{1/3}$ in the infinite horizon case. Since the normal direction of $\Gamma_j$ varies smoothly with the position, we have $d_M(T_0^{-1}x, T_1^{-1}x)$ of the same order. Similar estimates hold when starting from $x \in \Gamma_j$ and comparing images in $\Gamma_1$ and $\tilde{\Gamma}_1$. In the case when $T_0^{-1}x$ and $T_1^{-1}x$ do not lie on the same scatterer $\Gamma_j$, we must have $x \in N_{C\gamma^{1/3}}(\mathcal{S}_{-1}^{T_0} \cup \mathcal{S}_{-1}^{T_1})$ by the preceding arguments, where $C = 4 \mathcal{K}_*^{-3/2}$ is sufficient. We have thus shown {\bf (C1)} holds with $\varepsilon = C \gamma^{1/3}$. Indeed, {\bf (C1)} holds with $\varepsilon = C \gamma^b$ for any $0 < b \le 1/3$ by the same argument. We can consider the deformation of $d$ scatterers as the concatenation of errors induced by deforming one scatterer at a time. The preceding analysis holds with $C$ increased by a factor of $d$. Condition {\bf (C2)} is trivial to check since $J_\mu T_i \equiv 1$ for $i = 0,1$. Next we prove {\bf (C4)}. By \cite[eq.
(2.26)]{chernov book}, $DT^{-1}_0(x) = \frac{-1}{\cos \varphi(T_0^{-1}x)} A_0(x)$, where \[ \scriptsize A_0(x) = \left[ \begin{array}{cc} \tau_0(T_0^{-1}x) \mathcal{K}_0(x) + \cos \varphi(x) & - \tau_0(T_0^{-1}x) \\ - \mathcal{K}_0(T_0^{-1}x) ( \tau_0(T_0^{-1}x) \mathcal{K}_0(x) + \cos \varphi(x) ) - \mathcal{K}_0(x) \cos \varphi(T_0^{-1}x) & \tau_0(T_0^{-1}x) \mathcal{K}_0(T_0^{-1}x) + \cos \varphi(T_0^{-1}x) \end{array} \right], \] and $DT_1^{-1}(x) = \frac{-1}{\cos \varphi(T_1^{-1}x)} A_1(x)$, with a similar definition for $A_1(x)$. Thus \begin{equation} \label{eq:DT diff} \| DT_0^{-1}(x) - DT_1^{-1}(x) \| \le \Big| \frac{1}{\cos \varphi(T_0^{-1}x)} - \frac{1}{\cos \varphi(T_1^{-1}x)} \Big| \| A_0(x) \| + \frac{1}{\cos \varphi(T_1^{-1}x)} \| A_0(x) - A_1(x) \| . \end{equation} Note that $\| A_i(x) \|$ is bounded by a uniform constant times $\tau_i(T_i^{-1}x)$ and \begin{equation} \label{eq:A bound} \| A_0(x) - A_1(x) \| \le K \tau(x_{-1})(d_M(T_0^{-1}x, T_1^{-1}x) + \gamma) , \end{equation} where $K$ depends on $\tau_*$, $E_*$ and $\mathcal{K}_*$ and the $\gamma$ term is due to possible differences in the curvatures $\mathcal{K}_0$ and $\mathcal{K}_1$ at the same point. Notice that if $W \in \mathcal{W}^s$, then $|T_i^{-1}W| \le C |W|^{1/3}$ in the infinite horizon case and $|T_i^{-1}W| \le C |W|^{1/2}$ in the finite horizon case. Thus for $\delta < 1/k_0$, if $T^{-1}_ix \in N_\delta(\mathcal{S}_0)$, then $d_M(x, \mathcal{S}_{-1}^{T_i}) \le C_t\delta^2$ where $C_t$ is a uniform constant depending on the transversality of $C^s(x)$ with the horizontal direction and of $\mathcal{S}_{-1}^{T_i}$ with $C^s(x)$. Now choose $\varepsilon = \gamma^{a}$, where $a \le 1/3$ will be determined shortly. Suppose $x \notin N_\varepsilon(\mathcal{S}_{-1}^{T_0} \cup \mathcal{S}_{-1}^{T_1})$. Then by the above observation, $\cos \varphi(T_i^{-1}x) \ge C \varepsilon^{1/2}$, $i = 0,1$, and also by \eqref{eq:tau}, $\tau(x_{-1}) \le C \varepsilon^{-1}$.
Thus recalling that $d_M(T_0^{-1}x, T_1^{-1}x) \le C \gamma^{1/3}$, we estimate the first term of \eqref{eq:DT diff}, \begin{equation} \label{eq:cos diff} \begin{split} \| A_0(x) \| \Big| \frac{1}{\cos \varphi(T_0^{-1}x)} - \frac{1}{\cos \varphi(T_1^{-1}x)} \Big| & \le \frac{K \tau_0(T_0^{-1}x) }{\cos \varphi(T_0^{-1}x) \cos \varphi(T_1^{-1}x)} | \cos \varphi(T_1^{-1}x) - \cos \varphi(T_0^{-1}x)| \\ & \le C\varepsilon^{-2} d_M(T_0^{-1}x, T_1^{-1}x) \le C' \gamma^{1/3-2a} . \end{split} \end{equation} To estimate the second term of \eqref{eq:DT diff}, we use \eqref{eq:A bound} to estimate, \[ \frac{1}{\cos \varphi(T_1^{-1}x)} \| A_0(x) - A_1(x) \| \le C \varepsilon^{-3/2} \gamma^{1/3} = C \gamma^{1/3 - 3a/2} . \] Putting these estimates together, we have \[ \| DT_0^{-1}(x) - DT_1^{-1}(x) \| \le C'' \gamma^{1/3 - 2a}. \] Since {\bf (C4)} requires a bound of order $\varepsilon^{1/2} = \gamma^{a/2}$ (as used in \eqref{eq:close slope}), we need $1/3 - 2a \ge a/2$, i.e.\ $a \le 2/15$. Choosing $a = 2/15$, so that $\gamma^{1/3-2a} = \gamma^{1/15} = \varepsilon^{1/2}$, establishes {\bf (C4)}. Condition {\bf (C3)} follows similarly using the fact that the stable Jacobian along $W \in \mathcal{W}^s$ is simply the norm of the image of the unit tangent vector to $W$ under $DT_i(x)$, $i = 0,1$. The improved estimate in {\bf (C3)} comes from the fact that instead of estimating \eqref{eq:cos diff} as above, we must estimate instead \[ \tau(x_{-1}) \left| \frac{\cos \varphi(T_0^{-1}x)}{\cos \varphi(T_1^{-1}x)} - 1 \right| \le C \varepsilon^{-3/2} d_M(T_0^{-1}x, T_1^{-1}x) \le C' \gamma^{1/3 -3a/2} = C' \gamma^{2/15} = C' \varepsilon \] with our choice of $a = 2/15$. If we restrict perturbations to the finite horizon case with horizons uniformly bounded by some $\tau_{\max} < \infty$, then our estimates above improve by omitting a factor of $\varepsilon^{-1}$ and $d(T_0^{-1}x , T_1^{-1}x) \le C \gamma^{1/2}$ by \eqref{eq:finite horizon}. In this case, the optimal choice is $b = 1/3$. \subsection{Proof of Theorem~\ref{thm:random}} \label{random proof} We fix a class of maps $\mathcal{F}$ for which {\bf (H1)}-{\bf (H5)} hold with uniform constants and choose $T_0 \in \mathcal{F}$.
Define $X_\varepsilon(T_0) = \{ T \in \mathcal{F} : d_{\mathcal{F}}(T,T_0) \le \varepsilon \}$. Recall the transfer operator $\mathcal{L}_{(\nu,g)}$ associated with the random process drawn from $X_\varepsilon(T_0)$ as defined in Section~\ref{random}. Our first lemma is a generalization of Theorem~\ref{thm:close} which shows that the transfer operator $\mathcal{L}_{(\nu,g)}$ is close to $\mathcal{L}_{T_0}$ in the norms we have defined. \begin{lemma} \label{lem:random close} There exists $C>0$ such that if $\varepsilon \le \varepsilon_0$, then $||| \mathcal{L}_{(\nu,g)} - \mathcal{L}_{T_0} ||| \leq C b^{-1} A \varepsilon^{\beta/2}$. \end{lemma} \begin{proof} Let $h \in \mathcal{C}^1(M)$, $W \in \mathcal{W}^s$ and $\psi \in \mathcal{C}^p(W)$ with $|\psi|_{W,0,p} \le 1$. Then using \eqref{eq:final close}, \[ \begin{split} \left| \int_W \mathcal{L}_{(\nu,g)}h \, \psi \, dm_W - \int_W \mathcal{L}_{T_0}h \, \psi \, dm_W \right| & = \left| \int_{\Omega} \int_W ( \mathcal{L}_{T_\omega} h(x) - \mathcal{L}_{T_0}h(x)) \psi(x) g(\omega, T_\omega^{-1}x) \, dm_W d\nu \right| \\ & \le \int_\Omega Cb^{-1} \varepsilon^{\beta/2} \| h \| |g(\omega, \cdot)|_{\mathcal{C}^1(M)} d\nu(\omega) \le Cb^{-1} A \varepsilon^{\beta/2} \| h\|, \end{split} \] where we have interchanged order of integration since $\int_W \mathcal{L}_{T_\omega}( h) \, \psi \, g(\omega, \cdot) \, dm_W$ is uniformly and absolutely integrable for each $\omega \in \Omega$ by Theorem~\ref{thm:uniform}. \end{proof} It remains to prove the uniform Lasota-Yorke inequalities for $\mathcal{L}_{(\nu,g)}$. Let $\overline{\omega}_n = (\omega_1, \ldots, \omega_n) \in \Omega^n$ and define $T_{\overline{\omega}_n} = T_{\omega_n} \circ \cdots \circ T_{\omega_1}$. We first prove that the random compositions $T_{\overline{\omega}_n}$ have the same properties {\bf (H1)}-{\bf (H5)} as the maps $T_\omega \in \mathcal{F}$, with possibly modified constants.
The singularity sets for $T_{\overline{\omega}_n}$ are $\mathcal{S}_n^{T_{\overline{\omega}_n}} = \cup_{k=1}^n T_{\omega_1}^{-1} \circ \cdots \circ T_{\omega_k}^{-1} \mathcal{S}_0$, for $n\ge 0$, and similarly for $\mathcal{S}_{-n}^{T_{\overline{\omega}_n}}$. Thus the transversality properties {\bf (H1)} of $\mathcal{S}_{-n}^{T_{\overline{\omega}_n}}$ with respect to $C^s$ and $C^u$ hold due to the uniformity of this transversality for all maps in $\mathcal{F}$. The family $\mathcal{W}^s$ is preserved under $T_{\overline{\omega}_n}^{-1}$ since it is preserved by each map in the composition. The uniform expansion given by \eqref{eq:uniform hyp} of {\bf (H1)} also holds since $DT_{\overline{\omega}_n} = \prod_{k=1}^n DT_{\omega_k} \circ T_{\overline{\omega}_{k-1}}$ and in the adapted metric $\| \cdot \|_*$ given by {\bf (H3)}, the expansion holds with $C_e=1$ for each map in the composition. Translating to the Euclidean norm at the last step, we get {\bf (H1)} with $C_e$ depending only on the uniform constant relating the adapted and Euclidean metrics. Equations \eqref{eq:expansion} and \eqref{eq:2 deriv} also hold trivially since they concern only one iterate of a map drawn from $\mathcal{F}$. {\bf (H5)} follows for the same reason. Due to the uniform expansion along stable and unstable leaves, \eqref{eq:distortion stable} and \eqref{eq:D u dist} of {\bf (H4)} hold with a possibly larger distortion constant $C_d^*$, again using the bounded distortion of each map in the composition $T_{\overline{\omega}_n}$. Finally, we establish that the iteration of the one-step expansion given in {\bf (H3)} holds for random sequences of maps in the class $\mathcal{F}$. As in Section~\ref{preliminary}, for $W \in \mathcal{W}^s$ we define the $n$th generation $\G_n^{\overline{\omega}_n}(W) \subset \mathcal{W}^s$ of smooth curves in $T_{\overline{\omega}_n}^{-1}W$. 
The elements of $\G_n^{\overline{\omega}_n}(W)$ are denoted by $W^n_i$ as before and long and short pieces are defined similarly. Analogously, $\I^{\overline{\omega}_n}_n(W^k_j)$ denotes the set of indices $i$ in generation $n$ such that $W^k_j$ is the most recent long ancestor of $W^n_i$ under $T_{\overline{\omega}_n}$. Thus $\I_n^{\overline{\omega}_n}(W)$ denotes the set of curves that have never been part of a curve that has grown to length $\delta_0/3$ at any time step $1 \le k \le n$. \begin{lemma} \label{lem:random growth} Let $W \in \mathcal{W}^s$ and for $n \geq 0$, let $\I_n^{\overline{\omega}_n}(W)$ and $\G_n^{\overline{\omega}_n}(W)$ be defined as above. There exist constants $C_1, C_2, C_3 >0$, independent of $W \in \mathcal{W}^s$ and $\overline{\omega}_n \in \Omega^n$, such that for any $n\geq 0$, \begin{itemize} \item[(a)] $\displaystyle \sum_{i \in \I_n^{\overline{\omega}_n}(W)} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} \leq C_1 \theta_*^n $; \item[(b)] $\displaystyle \sum_{W^n_i \in \G_n^{\overline{\omega}_n}(W)} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} \le C_2 $; \item[(c)] for any $0 \leq \varsigma \leq 1$, $\displaystyle \sum_{W^n_i \in \G_n^{\overline{\omega}_n}(W)} \frac{|W^n_i|^\varsigma}{|W|^\varsigma} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} \le C_2^{1-\varsigma} $; \item[(d)] for $\varsigma > \varsigma_0$, $\displaystyle \sum_{W^n_i \in \G_n^{\overline{\omega}_n}(W)} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)}^\varsigma \le C_3^n$. \end{itemize} \end{lemma} \begin{proof} (a) Fix $W\in \mathcal{W}^s$ and for $\overline{\omega}_n \in \Omega^n$, define $ \mathcal{Z}_n(W) =\sum_{i\in \mathcal I_n^{\overline{\omega}_n}(W)} |J_{W^n_i}T_{\overline{\omega}_n}|_*$, where $|J_{W^n_i}T_{\overline{\omega}_n}|_*$ denotes the least contraction on $W^n_i$ under $T_{\overline{\omega}_n}$ measured in the metric induced by the adapted norm. 
We will prove by induction on $n\in \mathbb{N}$ that $\mathcal{Z}_{n}(W)\leq \theta_*^{n}$. Then, since $\| \cdot \|_*$ is equivalent to $\| \cdot \|$, statement (a) follows. Note that at each iterate between $1$ and $n$, every piece $W^n_i$, $i \in \I_n^{\overline{\omega}_n}(W)$, is created by genuine cuts due to singularities and homogeneity strips and not by any artificial subdivisions, since those are only made when a piece has grown to length greater than $\delta_0$. Thus we may apply the one-step expansion \eqref{eq:step1} to conclude, \begin{equation} \label{thetaW} \mathcal{Z}_1(W)\leq \theta_*, \; \; \; \forall \; W \in \mathcal{W}^s. \end{equation} Assume that $\mathcal{Z}_{n}(W)\leq \theta_*^{n}$ is proved for some $n\geq 1$ and all $W \in \mathcal{W}^s$. We apply it to each component $W^1_i \in \G_1^{\omega_1}(W)$ such that $i \in \I_1^{\omega_1}(W)$. Then $ \mathcal{Z}_n(W^1_i) \leq \theta_*^{n}$ since $W^1_i \in \mathcal{W}^s$. Given $\overline{\omega}_n \in \Omega^n$, we use the notation $\overline{\omega}_{n-k}' = (\omega_n, \ldots, \omega_{n-k+1})$ so that we may split up compositions $\overline{\omega}_n = (\overline{\omega}_{n-k}', \overline{\omega}_k)$ into two pieces. Given a sequence $\overline{\omega}_{n+1}$, we group the components of $W_i^{n+1}\in \G_{n+1}^{\overline{\omega}_{n+1}}(W)$ with $i\in \I_{n+1}^{\overline{\omega}_{n+1}}(W)$ according to elements with index in $\I_1^{\omega_1}(W)$. More precisely, for $j \in \I_1^{\omega_1}(W)$, let $A_j = \{ i : W^{n+1}_i \in \G^{\overline{\omega}_{n+1}}_{n+1}(W), T_{\overline{\omega}_n'}W^{n+1}_i \subset W^1_j \}$. Note that $|J_{W^{n+1}_i}T_{\overline{\omega}_{n+1}}|_* \le |J_{W^{n+1}_i}T_{\overline{\omega}_n'}|_* |J_{W^1_j}T_{\omega_1}|_*$ whenever $T_{\overline{\omega}_n'}W^{n+1}_i \subseteq W^1_j$. 
Combining this and \eqref{thetaW} with the inductive hypothesis, we get \[ \begin{split} \mathcal{Z}_{n+1}(W) & = \sum_{j \in \I_1^{\omega_1}(W)}\sum_{i \in A_j} |J_{W^{n+1}_i}T_{\overline{\omega}_{n+1}}|_* \; \leq \; \sum_{j \in \I_1^{\omega_1}(W)} \left( \sum_{i \in A_j} |J_{W^{n+1}_i}T_{\overline{\omega}_n'}|_* \right) |J_{W^1_j}T_{\omega_1}|_* \\ &=\sum_{j \in \I_1^{\omega_1}(W)} \mathcal{Z}_n(W^1_j)\,\cdot |J_{W^1_j}(T_{\omega_1})|_* \; \leq \; \theta_*^{n+1} . \end{split} \] \noindent (b) Fix $W \in \mathcal{W}^s$ and $\overline{\omega}_n \in \Omega^n$. For any $0 \le k \le n$ and $W^n_i \in \G_n^{\overline{\omega}_n}(W)$, we have \begin{equation} \label{eq:long piece bound} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} \le |J_{W^n_i}T_{\overline{\omega}_{n-k}'}|_{\mathcal{C}^0(W^n_i)} |J_{W^k_j}T_{\overline{\omega}_k}|_{\mathcal{C}^0(W^k_j)}, \end{equation} whenever $T_{\overline{\omega}_{n-k}'}W^n_i \subseteq W^k_j \in \G_k^{\overline{\omega}_k}(W)$. Now grouping $W^n_i \in \G_n^{\overline{\omega}_n}(W)$ by most recent long ancestor $W^k_j \in L_k^{\overline{\omega}_k}(W)$ as described in Section~\ref{preliminary} and using \eqref{eq:long piece bound}, we have \[ \begin{split} \sum_i & |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} = \sum_{k =0}^n \sum_{W^k_j \in L_k^{\overline{\omega}_k}(W)} \sum_{i \in \I_n^{\overline{\omega}_n}(W^k_j)} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} \\ & \le \sum_{k=1}^{n} \sum_{W^k_j \in L_k^{\overline{\omega}_k}(W)} \Big( \sum_{i \in \I_n^{\overline{\omega}_n}(W^k_j)} |J_{W^n_i}T_{\overline{\omega}_{n-k}'}|_{\mathcal{C}^0(W^n_i)} \Big) |J_{W^k_j}T_{\overline{\omega}_k}|_{\mathcal{C}^0(W^k_j)} \; + \; \sum_{i \in \I_n^{\overline{\omega}_n}(W)} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} , \end{split} \] where we have split off the terms involving $k=0$ that have no long ancestor. 
We have \[ |J_{W^k_j}T_{\overline{\omega}_k}|_{\mathcal{C}^0(W^k_j)} \le (1+C_d^*)|T_{\overline{\omega}_k}W^k_j| |W^k_j|^{-1} \le 3 \delta_0^{-1} (1+C_d^*)|T_{\overline{\omega}_k}W^k_j| \] since $|W^k_j| \ge \delta_0/3$. Since $\I_n^{\overline{\omega}_n}(W^k_j)$ and $\I_{n-k}^{\overline{\omega}_{n-k}'}(W^k_j)$ correspond to the same set of short pieces in the $(n-k)^{\mbox{\scriptsize th}}$ generation of $W^k_j$, we apply part (a) of this lemma to each of these sums. Thus, \[ \begin{split} \sum_i & |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} \le \sum_{k=1}^{n} \sum_{W^k_j \in L_k^{\overline{\omega}_k}(W)} C_1 \theta_*^{n-k} |J_{W^k_j}T_{\overline{\omega}_k}|_{\mathcal{C}^0(W^k_j)} \; + \; C_1 \theta_*^n \\ & \le C \delta_0^{-1} \sum_{k=1}^{n} \sum_{W^k_j \in L_k^{\overline{\omega}_k}(W)} \theta_*^{n-k}|T_{\overline{\omega}_k}W^k_j| + C \theta_*^n \; \le \; C \delta_0^{-1} |W| \sum_{k=1}^{n} \theta_*^{n-k} + C \theta_*^n , \end{split} \] which is uniformly bounded in $n$. \noindent (c) follows from (b) by an application of Jensen's inequality and (d) follows from {\bf (H3)} using an inductive argument similar to the proof of (a). \end{proof} We complete the proof of Theorem~\ref{thm:random} via the following proposition. The uniform Lasota-Yorke inequalities of Theorem~\ref{thm:uniform} then follow from the argument given at the beginning of Section~\ref{uniform}. \begin{proposition} \label{prop:random ly} Choose $\varepsilon \le \varepsilon_0$ sufficiently small that $\sigma(1+\varepsilon)<1$ and let $\Delta(\nu,g) \le \varepsilon$. 
There exists a constant $C$, depending on $a$, $A$, and {\bf (H1)}-{\bf (H5)} such that for $h \in \B$ and $n \ge 0$, \begin{eqnarray} |\mathcal{L}_{(\nu,g)}^n h|_w & \le & C \eta^n |h|_w \label{eq:random weak norm} \\ \| \mathcal{L}_{(\nu,g)}^n h \|_s & \le &C \eta^n ( \theta_*^{(1-\alpha)n} + \Lambda^{-qn}) \|h\|_s + C \delta_0^{-\alpha} \eta^n |h|_w \label{eq:random stable norm} \\ \| \mathcal{L}_{(\nu,g)}^n h \|_u &\le & C \eta^n \Lambda^{-\beta n} \| h\|_u + C \eta^n C_3^n \|h \|_s \label{eq:random unstable norm} \end{eqnarray} \end{proposition} \begin{proof} We record for future use, \[ \mathcal{L}^n_{(\nu,g)}h(x) = \int_{\Omega^n} h \circ T_{\overline{\omega}_n}^{-1} (J_\mu T_{\overline{\omega}_n}\circ T_{\overline{\omega}_n}^{-1})^{-1} \prod_{j=1}^n g(\omega_j, T_{\omega_j}^{-1} \circ \cdots \circ T_{\omega_n}^{-1}x) d\nu^n(\overline{\omega}_n) . \] The proofs of the inequalities are the same as in Section~\ref{uniform} except that we have the additional function $g(\omega,x)$. We show how to adapt the estimates of Section~\ref{uniform} to the operator $\mathcal{L}_{(\nu,g)}$ in the case of the strong stable norm. The other estimates are similar. \noindent {\bf Estimating the Strong Stable Norm.} Following Section~\ref{stable norm}, we write, \begin{equation} \begin{split} \label{eq:random split} \int_W & \mathcal{L}^n_{(\nu,g)} h \, \psi dm_W = \int_{\Omega^n} \sum _i \left\{ \int_{W^n_i} h (\psi \circ T_{\overline{\omega}_n} - \overline{\psi}_i) (J_\mu T_{\overline{\omega}_n})^{-1} J_{W^n_i}T_{\overline{\omega}_n} \prod_{j=1}^ng(\omega_j, T_{\overline{\omega}_{j-1}}x) dm_W \right. \\ & \left. + \overline{\psi}_i \int_{W^n_i} h (J_\mu T_{\overline{\omega}_n})^{-1} J_{W^n_i}T_{\overline{\omega}_n} \prod_{j=1}^n g(\omega_j, T_{\overline{\omega}_{j-1}}x) dm_W \right\} d\nu^n(\overline{\omega}_n), \end{split} \end{equation} where $\overline{\psi}_i = |W^n_i|^{-1} \int_{W^n_i} \psi \circ T_{\overline{\omega}_n} \, dm_W$. 
Since for each $\overline{\omega}_n$, $T_{\overline{\omega}_n}$ satisfies properties {\bf (H1)}-{\bf (H5)} with uniform constants, we may use the estimates of Section~\ref{uniform}. Accordingly, $|\psi \circ T_{\overline{\omega}_n} - \overline{\psi}_i|_{\mathcal{C}^q(W^n_i)} \le C \Lambda^{-qn} |W|^{-\alpha}$ using \eqref{eq:C1 C0}. Define $G_{\overline{\omega}_n}(x) = \prod_{j=1}^n g(\omega_j, T_{\overline{\omega}_{j-1}}x)$. We estimate the first term of \eqref{eq:random split} using \eqref{eq:first stable} \begin{equation} \begin{split} \label{eq:random first} \sum_i & \int_{W^n_i} h (\psi \circ T_{\overline{\omega}_n} - \overline{\psi}_i) \, (J_\mu T_{\overline{\omega}_n})^{-1} J_{W^n_i}T_{\overline{\omega}_n} G_{\overline{\omega}_n} \, dm_W \\ & \le \sum_i C \| h\|_s |W_i|^\alpha |(J_\mu T_{\overline{\omega}_n})^{-1} J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^q(W^n_i)} |\psi \circ T_{\overline{\omega}_n} - \overline{\psi}_i|_{\mathcal{C}^q(W^n_i)} |G_{\overline{\omega}_n}|_{\mathcal{C}^q(W^n_i)} \\ & \le C \| h \|_s \Lambda^{-qn} \eta^n \sum_i \frac{|W^n_i|^\alpha}{|W|^\alpha} |J_{W^n_i}T_{\overline{\omega}_n}|_{\mathcal{C}^0(W^n_i)} |G_{\overline{\omega}_n}|_{\mathcal{C}^q(W^n_i)} . \end{split} \end{equation} The only new term here is $|G_{\overline{\omega}_n}|_{\mathcal{C}^q(W^n_i)}$ which is addressed by the following lemma. \begin{sublem} \label{lem:G} There exists $C>0$, independent of $W$ and $\overline{\omega}_n$, such that if $W^n_i \in \G_n^{\overline{\omega}_n}(W)$, then \[ |G_{\overline{\omega}_n}|_{\mathcal{C}^1(W^n_i)} \le C G_{\overline{\omega}_n}(x) \; \; \mbox{for any } x \in W^n_i . 
\] \end{sublem} \begin{proof}[Proof of Sublemma] For $x,y \in W^n_i$, \[ \begin{split} \log \frac{\prod_{j=1}^n g(\omega_j, T_{\overline{\omega}_{j-1}} x)}{\prod_{j=1}^n g(\omega_j, T_{\overline{\omega}_{j-1}}y)} & \le \sum_{j=1}^n a^{-1} |g(\omega_j, \cdot)|_{\mathcal{C}^1(M)} d(T_{\overline{\omega}_{j-1}}x, T_{\overline{\omega}_{j-1}}y) \\ & \le \sum_{j=1}^\infty a^{-1} A C_e \Lambda^{-(j-1)}d(x,y) =: c_0 d(x,y) , \end{split} \] using properties (i) and (iii) of $g$, together with the uniform contraction along stable curves, $d(T_{\overline{\omega}_{j-1}}x, T_{\overline{\omega}_{j-1}}y) \le C_e \Lambda^{-(j-1)} d(x,y)$. The distortion bound yields the sublemma with $C = c_0 e^{c_0}$. \end{proof} We estimate \eqref{eq:random first} using the sublemma and Lemma~\ref{lem:random growth}(c), \begin{equation} \label{eq:random contract} \sum_i \int_{W^n_i} h (\psi \circ T_{\overline{\omega}_n} - \overline{\psi}_i) \, (J_\mu T_{\overline{\omega}_n})^{-1} J_{W^n_i}T_{\overline{\omega}_n} G_{\overline{\omega}_n} \, dm_W \le C \| h \|_s \eta^n \Lambda^{-qn} G_{\overline{\omega}_n}(x_0), \end{equation} where $x_0$ is some point in $T_{\overline{\omega}_n}^{-1}W$. Similarly, we estimate the second term in \eqref{eq:random split} using \eqref{eq:second stable}. In each term, $G_{\overline{\omega}_n}$ plays the role of a test function and we replace the occurrences of $|G_{\overline{\omega}_n}|_{\mathcal{C}^p(W^n_i)}$ and $|G_{\overline{\omega}_n}|_{\mathcal{C}^q(W^n_i)}$ as appropriate according to Sublemma~\ref{lem:G}. Thus following \eqref{eq:second stable}, we write, \[ \sum_i \overline{\psi}_i \int_{W^n_i} h (J_\mu T_{\overline{\omega}_n})^{-1} J_{W^n_i}T_{\overline{\omega}_n} G_{\overline{\omega}_n} dm_W \le C(\delta_0^{-\alpha} \eta^n |h|_w + \theta_*^{(1-\alpha)n} \eta^n \|h\|_s) G_{\overline{\omega}_n}(x_0) , \] choosing the same $x_0$ as in \eqref{eq:random contract}. 
Now combining this expression with \eqref{eq:random contract} and \eqref{eq:random split}, we obtain \[ \int_W \mathcal{L}_{T_{\overline{\omega}_n}} h \psi \, dm_W \le C \eta^n (\|h\|_s(\Lambda^{-qn} + \theta_*^{(1-\alpha)n}) + \delta_0^{-\alpha} |h|_w) \prod_{j=1}^n g(\omega_j, T_{\overline{\omega}_{j-1}}x_0) . \] We integrate this expression one $\omega_j$ at a time, starting with $\omega_n$. Notice that $\int_\Omega g(\omega_n, T_{\overline{\omega}_{n-1}}x_0) d\nu(\omega_n) =1$ by property (ii) of $g$ since $T_{\overline{\omega}_{n-1}}$ is independent of $\omega_n$. Similarly, each factor in $G_{\overline{\omega}_n}(x_0)$ integrates to 1 so that \[ \| \mathcal{L}^n_{(\nu,g)} h \|_s \le C \|h\|_s \eta^n (\Lambda^{-qn} + \theta_*^{(1-\alpha)n}) + C \delta_0^{-\alpha} \eta^n |h|_w \] which is the required inequality for the strong stable norm. The inequalities for the weak norm and the strong unstable norm follow similarly, always using Sublemma~\ref{lem:G}. \end{proof} \section{Proofs of Applications: External Forces with Kicks and Slips} \label{kick} In this section we prove Theorems~\ref{thm:C1} and~\ref{thm:C2} for the perturbed dispersing billiards under external forces with kicks and slips. To simplify the analysis, for any fixed force $\mathbf{F}$, we will consider our system, denoted by $T_{\mathbf{F},\mathbf{G}}$, as a perturbation of the map $T_{\mathbf{F}, \mathbf{0}}$. We say a constant $C$ is uniform if $C=C(\eps_1, \tau_*, \mathcal{K}_*, E_*)$, where $\eps_1, \tau_*, \mathcal{K}_*$ and $E_*$ are from {\bf (A2)} and {\bf (A3)}. We begin by reviewing some properties of $T_\mathbf{F} = T_{\mathbf{F}, \mathbf{0}}$ proved in \cite{Ch01} and proving some additional ones that we shall need. \subsection{Properties of $T_\mathbf{F}$} \label{flow review} We assume the setup described in Section~\ref{concrete}.B, which is the billiard flow given by \eqref{flowf} and \eqref{reflectiong} with $\mathbf{G} = \mathbf{0}$. 
Let $\mathbf{x}=(\mathbf{q}, \theta)\in \mathcal{M}$ be any phase point with position $\mathbf{q}$, and $V\in \mathcal{T}_\mathbf{x} \mathcal{M}$ a tangent vector at $\mathbf{x}$. Pick a small number $\delta_0>0$ and a $C^{3}$ curve $c_s(0)=(\mathbf{q}_s, \theta_s)\subset \mathcal{M}$, $s\in [0,\delta_0]$, tangent to the vector $V$, such that $c_0(0)=\mathbf{x}$ and $\frac{d c_s(0)}{ds}\big|_{s=0}=V$. Now we define $c_s(t)=\Phi^t c_s(0)$, for any $t\geq 0$. Since $\tau$ is the free path function, we have $d\tau=p dt$. In the calculation below, we denote differentiation with respect to $s$ by primes and that with respect to $\tau$ by dots. In particular, $\dot{c}_s(t)=(\dot \mathbf{q}, \dot \theta)=( \mathbf{v}, h)$, where $\mathbf{v} = \mathbf{p}/p = (\cos \theta, \sin \theta)$ and $h=h(\mathbf{q},\theta)$ is the geometric curvature of the billiard trajectory with initial condition $(\mathbf{q},\theta)$ on the table. If we let $t_s$ denote the time at which the trajectory of $c_s(0)$ hits the wall of the billiard table, then $\{c_s(t)\,|\, t\in [0, t_s], s\in [0, \delta_0]\}$ is a $C^{3}$ smooth $2$-d manifold in $\mathcal{M}$. We introduce the two quantities $u=\mathbf{q}'\cdot \mathbf{v}$ and $w=\mathbf{q}'\cdot \mathbf{v}^{\perp}$, where $\mathbf{v}^{\perp}=(-\sin\theta, \cos\theta)$. Clearly $\mathbf{q}'=u \mathbf{v}+w \mathbf{v}^{\perp}$. Now let $\kappa=(\theta'-uh)/w$. We consider two tangent vectors to this surface, $U=(\mathbf{v},h)$ and $R=(\mathbf{v}^{\perp}, \kappa)$. Clearly $\dot c_s=U$ and $c_s'=u U+w R$. Define $p_U=\text{grad}(p) \cdot U$, $p_R=\text{grad} (p)\cdot R$, and $h_U=\text{grad} (h) \cdot U$, $h_R=\text{grad} (h)\cdot R$, respectively. Then it is straightforward to check that \begin{equation} \label{p'h'} p'=\text{grad}(p)\cdot c'_s = p_U u+p_R w \qquad h'=h_U u+h_Rw \qquad \text{and } \; \; \theta'=\kappa w+h u . \end{equation} In addition $ \dot p= p_U $ and $\dot h= h_U $. The derivation of these formulas can be found in \cite{Ch01}. 
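For the reader's convenience, we note that the first two formulas in \eqref{p'h'} are immediate from the decomposition $c_s'=uU+wR$: \[ p'=\text{grad}(p)\cdot c_s' = u\, \text{grad}(p)\cdot U + w\, \text{grad}(p)\cdot R = p_U u + p_R w , \] and the same computation with $h$ in place of $p$ gives $h'=h_U u + h_R w$. The formula for $\theta'$ is simply the definition $\kappa=(\theta'-uh)/w$ rearranged, while $\dot p = \text{grad}(p)\cdot \dot c_s = \text{grad}(p)\cdot U = p_U$, and similarly $\dot h = h_U$.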
The following lemma was proved in \cite[Lemmas 3.1, 3.2]{Ch01}. \begin{lemma}[\cite{Ch01}] The evolution of the quantities $\kappa$ and $w$ between collisions is given by the equations \begin{equation}\label{kappadot} \dot\kappa=-\kappa^2 + a + b\kappa \,\,\,\,\,\,\text{ and }\,\,\,\,\,\,\,\dot w=\kappa w, \end{equation} where $a=a(h)$, $b=b(h)$ are smooth functions whose $\mathcal{C}^0$ norms are bounded by $c_0\varepsilon_1$ for some uniform $c_0>0$. Furthermore, at the moment of collision, \begin{equation}\label{pm} u^+=u^-, \; \; w^+=-w^-\,\,\,\,\text{ and }\,\,\,\, \kappa^+=\kappa^-+\frac{2\mathcal{K}(r)+(h^++h^-)\sin\varphi}{\cos\varphi} . \end{equation} In addition, the derivatives of $r$ and $\varphi$ satisfy \begin{equation}\label{drdphi} dr/ds=\mp w^{\pm}/\cos\varphi\,\,\,\,\,\,\,\text{ and }\,\,\,\,\,\,\,d\varphi/dr=\mp\mathcal{K}(r)+\kappa^{\pm}\cos\varphi\mp h^{\pm}\sin\varphi . \end{equation} \end{lemma} We will calculate the differential of the map $T_{\mathbf{F}}$ (which is not contained in \cite{Ch01}). It follows from (\ref{kappadot}) that \begin{equation} \label{ddotw} \frac{d \dot w}{d\tau}=\frac{d}{d\tau}(\kappa w)=\dot\kappa w+\kappa \dot w=\kappa \dot w-\kappa^2 w+(a + b\kappa) w= aw + b\dot w . \end{equation} This implies that \begin{equation}\label{contw1} \left\{ \begin{array}{ll} \dot w(\tau)=\dot w(0)+\int_{0}^{\tau} aw + b\dot w\, d\gamma \\ w(\tau)=w(0)+\dot w(0) \tau+\int_0^{\tau} \int_0^{\xi} aw + b\dot w \,d\gamma\,d\xi \end{array} \right. . \end{equation} At the moment of collision, (\ref{pm}) implies that \begin{equation}\label{dotwpm} \left\{ \begin{array}{ll} w^+=-w^-\\ \dot w^+=-\dot w^--\frac{2\mathcal{K}+(h^++h^-)\sin\varphi}{\cos\varphi}w^- \end{array} \right. . \end{equation} In addition, (\ref{drdphi}) implies that \begin{equation} \label{dphids} \frac{d\varphi}{ds}=\frac{\mathcal{K}(r)+h^{\pm}\sin\varphi}{\cos\varphi}w^{\pm} \mp \dot w^{\pm} . 
\end{equation} \begin{lemma} \label{wtaubound} For $x = (r, \varphi)$, let $\tau_1(x)$ denote the distance to the next collision under the flow. There exist constants $\hat C_1, \hat C_2 > 0$ independent of $x$, such that $|w(\tau)|$ and $|\dot w(\tau)|$ are uniformly bounded from above by $\hat C_1 |w^+(0)| + \hat C_2|\dot w^+(0)|$ for $\tau \in [0,\tau_1(x)]$. \end{lemma} \begin{proof} We fix $x$ and abbreviate $\tau_1(x)$ as $\tau_1$. We begin by adapting \cite[Lemma 3.4]{Ch01} to show that if for some $\tau_0 \in [0,\tau_1)$, $\kappa(\tau_0)$ is bounded away from zero, then $\kappa$ is bounded away from zero and infinity on $[\tau_0,\tau_1]$. More precisely, (\ref{kappadot}) implies that if $\kappa > 0$, then $$ -(\kappa + \varepsilon_2)^2 \le \dot \kappa=-\kappa^2+b \kappa+a=-(\kappa-\frac{b}{2})^2+\frac{b^2}{4}+a\leq -(\kappa-c_0\eps_1)^2+ \eps_2^2 $$ where $\eps_2^2 = 2c_0\eps_1.$ So if we assume that for some $\tau_0 \in [0, \tau_1)$, $\kappa^+(\tau_0) > c_1$ for a fixed $c_1 > 5 \sqrt{\varepsilon_1}$, then we may integrate these inequalities to obtain $$ \frac{1}{(\kappa^+(\tau_0) + \varepsilon_2)^{-1} + (\tau - \tau_0)} - \varepsilon_2 \le \kappa(\tau) \le \varepsilon_2 \frac{Ae^{2 \varepsilon_2 (\tau - \tau_0)} + 1}{Ae^{2\varepsilon_2(\tau- \tau_0)} -1} + c_0\varepsilon_1, $$ where $A = (\kappa^+(\tau_0) - c_0 \varepsilon_1 + \varepsilon_2)/(\kappa^+(\tau_0) - c_0 \varepsilon_1 - \varepsilon_2)$. Then since $\varepsilon_1$ is small compared to $\kappa^+(\tau_0)$, this reduces to \begin{equation} \label{kappabound} \frac{1}{(\kappa^+(\tau_0))^{-1} + (\tau - \tau_0)} - \varepsilon_3 \le \kappa(\tau) \le \frac{1}{(\kappa^+(\tau_0))^{-1} + (\tau - \tau_0)} + \varepsilon_3 \end{equation} where $\varepsilon_3 = 2\varepsilon_2 + 2c_0 \varepsilon_1$. Now (\ref{kappadot}) implies that for any $0 \le \tau' < \tau \le \tau_1$, \begin{equation} \label{convw} w(\tau)=w(\tau')\exp\left(\int_{\tau'}^{\tau} \kappa d\gamma \right). 
\end{equation} Also, (\ref{ddotw}) implies that $\frac{\dot w}{w}d\ln \dot w=(a+b\kappa) \,d\tau$ and since $\dot w=\kappa w$, we integrate this to obtain, \begin{equation} \label{1stdotwt} \dot w(\tau)=\dot w(0) \exp\left(\int_{0}^{\tau} (\frac{a}{\kappa}+b)\, d\gamma \right) \; \; \; \mbox{for any $\tau \in [0,\tau_1]$.} \end{equation} Integrating again, it follows that \begin{equation} \label{2ndwtau} w(\tau)=w(0)+\dot w(0)\int_0^{\tau} \exp\left(\int_{0}^{\xi} (\frac{a}{\kappa}+b)\, d \gamma \right)\,d\xi . \end{equation} This implies that both $w(\tau), \dot w(\tau)$ are functions of $(w^+(0), \dot w^+(0))$. To show that $|w|$ and $|\dot w|$ are uniformly bounded, we consider three cases. \noindent Case I: $\kappa$ is finite on $[0, \tau_1)$ and $\kappa(\tau) < 1/\tau_{\min}$ for all $\tau \in [0, \tau_1)$ ($\kappa$ can be positive or negative). Then by \eqref{convw}, $|w(\tau)| \le |w(0)| e^{\tau/\tau_{\min}} \le |w(0)| e^{\tau_{\max}/\tau_{\min}}$ for all $\tau \in [0, \tau_1]$. Once we know $|w|$ is bounded on $[0,\tau_1]$, we may use it to bound $|\dot w|$ as follows. We integrate \eqref{ddotw} using the integrating factor $\exp(- \int_0^\tau b \, d\gamma)$ to obtain, \begin{equation} \label{eq:wint} \dot w(\tau) = \dot w(0) e^{\int_0^\tau b \, d\gamma} + e^{\int_0^\tau b \, d\gamma} \int_0^\tau aw(\xi) e^{ - \int_0^\xi b \, d\gamma } \, d\xi . \end{equation} Thus \begin{equation} \label{eq:dotw} | \dot w(\tau)| \le |\dot w^+(0)| e^{c_0\varepsilon_1 \tau_{\max}} + |w^+(0)|e^{(2 c_0 \varepsilon_1 + 1/\tau_{\min})\tau_{\max}} c_0 \varepsilon_1 \tau_{\max} =: C_1 |\dot w^+(0)| + C_2 |w^+(0)| . \end{equation} \noindent Case II: $\kappa$ is finite on $[0,\tau_1)$, $\kappa(\tau_0) \ge 1/\tau_{\min}$ for some $\tau_0 \in [0, \tau_1]$ and $\tau_0$ is the least $\tau$ in the interval with this property. Then by \eqref{kappabound}, $\kappa(\tau) \geq (\tau_{\min} + 2\tau_{\max})^{-1}$ for all $\tau \in [\tau_0, \tau_1]$. 
As a consequence, by \eqref{2ndwtau}, \begin{equation} \label{half} |w(\tau)| \le |w(\tau_0)| + |\dot w(\tau_0)| \tau_{\max} e^{c_0 \varepsilon_1 (\tau_{\min} + 2\tau_{\max} + 1)\tau_{\max}} \end{equation} for each $\tau \in [\tau_0, \tau_1]$. On the other hand, for $\tau \in [0, \tau_0]$, we have $\kappa(\tau) \le 1/\tau_{\min}$, so that both $|w(\tau)|$ and $|\dot w(\tau)|$ are uniformly bounded on this interval by Case I. This together with \eqref{half} proves Case II for $|w|$. The estimate for $|\dot w|$ follows again from \eqref{eq:wint} and \eqref{eq:dotw}. \noindent Case III: $\kappa(\tau_0) = \pm \infty$ for some $\tau_0 \in (0, \tau_1)$. According to \eqref{kappadot} and \eqref{kappabound}, the only way this case can occur is if $\kappa$ reaches $-\infty$ in finite time and changes from $-\infty$ to $\infty$ at $\tau_0$. \eqref{convw} implies in particular that $w(\tau_0)=0$. On the interval $[0, \tau_0]$, $\kappa$ clearly satisfies the assumption of Case I so that both $|w|$ and $|\dot w|$ are uniformly bounded as in the statement of the lemma on this interval. Indeed, this is true on any interval in which $\kappa$ remains negative. Thus the only case left to consider is when $\kappa(\tau) > 0$ for $\tau \in (\tau_0, \tau_1]$. In this case, \eqref{kappadot} guarantees that $\kappa$ initially decreases and \eqref{kappabound} guarantees that $\kappa(\tau) \geq \tau_{\min}^{-1}$ on this interval. Thus by \eqref{2ndwtau}, we estimate as in \eqref{half} to bound $|w|$ by a linear combination of $|w(\tau_0)|$ and $|\dot w(\tau_0)|$. But since these two quantities are in turn bounded by $|w^+(0)|$ and $|\dot w^+(0)|$ by the previous paragraph, the proof of Case III is complete for $|w|$. The estimate on $|\dot w|$ now follows again from \eqref{eq:wint} and \eqref{eq:dotw}. \end{proof} Combining the above facts, we can show the following. 
\begin{lemma} If we denote $x_1=(r_1, \varphi_1)=T_{\mathbf{F}} x$, then there exists $C=C( \mathcal{K}_*,\tau_*)>0$ such that for any unit vector $(dr/ds, d\varphi/ds)$, \begin{equation}\label{DTeps0} \left\{ \begin{array}{ll} -\cos\varphi_1\frac{dr_1}{ds}=\left(\cos\varphi+\tau \mathcal{K}+a_1\right)\frac{dr}{ds}+(\tau+a_2)\frac{d\varphi}{ds}\\ -\cos\varphi_1\frac{d\varphi_1}{ds}=\left(\tau \mathcal{K}_1\mathcal{K}+\mathcal{K}_1\cos\varphi+\mathcal{K}\cos\varphi_1 +b_1\right) \frac{dr}{ds}+\left(\mathcal{K}_1\tau+\cos\varphi_1+b_2\right)\frac{d\varphi}{ds} \end{array} \right. \end{equation} where $|a_i|\leq C\eps_1$ and $|b_i|\leq C\eps_1$, for $i=1,2$. In addition, \begin{equation} \label{detTF} (1-C\eps_1)\frac{\cos\varphi}{\cos\varphi_1}\leq |\det D_xT_\mathbf{F}| \leq (1+C\eps_1)\frac{\cos\varphi}{\cos\varphi_1} \end{equation} \end{lemma} \begin{proof} Let $x_1=T_{\mathbf{F}} x$, and let $\tau_1(x)$ be the length of the free path of $x$. By \eqref{1stdotwt} and \eqref{2ndwtau}, there exists a linear transformation $D_x$ such that \begin{equation} \label{Dx} D_x(w^+, \dot w^+)^T=(w^-_1,\dot w^-_1)^T \end{equation} where $w^-_1=w^-(\tau_1)$ and $\dot w^-_1=\dot w^-(\tau_1)$. Indeed, by Lemma~\ref{wtaubound}, there exist smooth functions $c_i$, $i=1, \ldots, 4$ with $|c_i| \le C \varepsilon_1$ for some $C = C(\mathcal{K}_*, \tau_*) >0$ such that \begin{equation} \label{III} I:=\int_0^{\tau} aw + b \dot w \, d\gamma = c_1 w^+(0)+c_2\dot w^+(0),\,\,\,\, II:=\int_0^{\tau}\int_0^{\xi} aw + b \dot w \,d\gamma d\xi=c_3 w^+(0)+c_4\dot w^+(0), \end{equation} so that using \eqref{contw1}, we may write $D_x$ as \begin{equation} \label{Dxg} D_{x}=\left(\begin{array}{cc}1+c_3&\tau+c_4\\c_1&1+c_2\\\end{array}\right) . 
\end{equation} Using (\ref{dotwpm}) and (\ref{dphids}), the differential $DT_{\mathbf{F}}$ satisfies \begin{equation} \label{DTepsmatrix} DT_{\mathbf{F}} = N_{x_1}^{-1} L_{x_1} D_x N_{x} \end{equation} where $$N_{x}=-\left(\begin{array}{cc}\cos\varphi &0\\\mathcal{K}+h^+\sin\varphi &1\\\end{array}\right)$$ is the coordinate transformation matrix on $\mathcal{T}_x M$, such that $(w^+(0), \dot w^+(0))^T=N_x (dr/ds, d\varphi/ds)^T$, and \begin{equation} \label{LU} L_{x_1}=\left(\begin{array}{cc}-1&0\\-\tfrac{2\mathcal{K}_1+(h_1^++h_1^-)\sin\varphi_1}{\cos\varphi_1}&-1\\\end{array}\right)\,\,\,\,\,\,\,\text{ and }\,\,\,\,\, N_{x_1}^{-1}=\left(\begin{array}{cc} -\frac{1}{\cos\varphi_1}&0\\ \frac{\mathcal{K}_1+h_1^+\sin\varphi_1}{\cos\varphi_1}&-1\\ \end{array}\right) . \end{equation} Now combining (\ref{contw1}) with \eqref{III} and (\ref{DTepsmatrix}), we get \begin{align*} -\cos\varphi_1\frac{dr_1}{ds}&=\left(\cos\varphi+\tau \mathcal{K}+\tau h^+\sin\varphi\right)\frac{dr}{ds}+\tau\frac{d\varphi}{ds}-II\\ &=\left(\cos\varphi+\tau \mathcal{K}+\tau h^+\sin\varphi\right)\frac{dr}{ds}+\tau\frac{d\varphi}{ds}-c_3w^+-c_4\dot w^+\\ &=\left(\cos\varphi+\tau \mathcal{K}+a_1\right)\frac{dr}{ds}+(\tau+a_2)\frac{d\varphi}{ds} \end{align*} where $a_1=c_3\cos\varphi+c_4(\mathcal{K}+h^+\sin\varphi)+\tau h^+\sin\varphi$ and $a_2=c_4$. 
Similarly we obtain \begin{align*}-\cos\varphi_1\frac{d\varphi_1}{ds}&=-(\mathcal{K}_1+h_1^-\sin\varphi_1)w^-_1-\cos\varphi_1\dot w^-_1\\ &=-(\mathcal{K}_1+h_1^-\sin\varphi_1)(w^++\dot w^+\tau+II)-\cos\varphi_1(\dot w^++I) \\ &=\left[(\mathcal{K}_1+h_1^-\sin\varphi_1)\cos\varphi+(\tau(\mathcal{K}_1+h_1^-\sin\varphi_1)+\cos\varphi_1)(\mathcal{K}+h^+\sin\varphi)\right] \frac{dr}{ds}\\ &\,\,\,\,\,\,+\left(\tau\mathcal{K}_1+\tau h_1^-\sin\varphi_1+\cos\varphi_1\right)\frac{d\varphi}{ds}-II(\mathcal{K}_1+h_1^-\sin\varphi_1)-I\cos\varphi_1\\ &=\left(\tau \mathcal{K}_1\mathcal{K}+\mathcal{K}_1\cos\varphi+\mathcal{K}\cos\varphi_1+b_1 \right) \frac{dr}{ds}+\left(\mathcal{K}_1\tau+\cos\varphi_1+b_2\right)\frac{d\varphi}{ds} \end{align*} where \begin{align*} b_1&=(\cos\varphi+\tau\mathcal{K})h_1^-\sin\varphi_1+\cos\varphi_1\left(c_1\cos\varphi+c_2\mathcal{K}+(1+c_2)h^+\sin\varphi\right)\\ &\,\,\,\,\,\,+\left(c_3\cos\varphi+\tau h^+\sin\varphi+c_4(\mathcal{K}+h^+\sin\varphi)\right)(\mathcal{K}_1+h_1^-\sin\varphi_1)\end{align*} and $b_2= (\tau+c_4) h_1^-\sin\varphi_1 +c_4\mathcal{K}_1+c_2\cos\varphi_1$. Now we use the assumption that the quantities $\mathcal{K}, \tau$ are uniformly bounded from above, and $|h^{\pm}|=\mathcal{O}(\eps_1)$, to obtain that for any unit vector $(dr/ds, d\varphi/ds)$, the quantities $|a_i|\leq C\eps_1$ and $|b_i| \leq C\eps_1$, $i=1,2$, for some uniform $C>0$. Finally we use (\ref{DTepsmatrix}) to calculate the determinant of the differential $D_x T_{\mathbf{F}}$, \begin{equation} \label{eq:full detTf} \begin{split} \det D_x T_{\mathbf{F}}&=\det N_{x_1}^{-1} \cdot \det L_{x_1} \cdot \det D_x\cdot \det N_x=\frac{\cos\varphi}{\cos\varphi_1} \det D_x\\ &=\frac{\cos\varphi}{\cos\varphi_1}\left((1+c_2)(1+c_3) - c_1(\tau+c_4)\right) \end{split} \end{equation} which implies the last inequality (\ref{detTF}). 
\end{proof} It follows from the above lemma that the differential $D_xT_\mathbf{F}:\mathcal{T}_x M\to \mathcal{T}_{x_1}M$ at any point $ x =(r, \varphi)\in M$ is the $2\times 2$ matrix: \begin{equation}\label{DTf} DT_{\mathbf{F}}(x)=-\frac{1}{\cos\varphi_1}\left( \begin{array}{cc} \tau\mathcal{K} +\cos\varphi+a_1& \tau+a_2 \\ \mathcal{K}( r_1)(\tau\mathcal{K}+\cos\varphi)+\mathcal{K}\cos\varphi_1+b_1 & \tau\mathcal{K}( r_1)+\cos\varphi_1+b_2 \\ \end{array} \right) \end{equation} where $ x_1=T_\mathbf{F}(x)=( r_1, \varphi_1)$. Furthermore, it was shown in \cite{Ch01} that the map $T_{\mathbf{F}}$ has two families of cones $\bar \mathcal{C}^u( x )$ (unstable) and $\bar \mathcal{C}^s( x )$ (stable) in the tangent spaces ${\mathcal{T}}_{ x } M$, for all ${ x }\in M$. More precisely, the unstable cone $\bar\mathcal{C}^u(x)$ contains all tangent vectors based at ${ x }$ whose images generate dispersing wave fronts: \begin{equation} \label{eq:unstable cone} \bar\mathcal{C}^u(x)=\{(dr, d\varphi)\in \mathcal{T}_{ x } M:\, B_0^{-1}\leq d\varphi/dr\leq B_0\} . \end{equation} The unstable cone $\bar\mathcal{C}^u(x)$ is strictly invariant under $DT_{\mathbf{F}}$. Similarly the stable cone $$\bar\mathcal{C}^s(x)=\{(dr, d\varphi)\in \mathcal{T}_{ x } M:\, -B_0^{-1}\geq d\varphi/dr\geq -B_0\}$$ is strictly invariant under $DT_{\mathbf{F}}^{-1}$. Here $B_0=B_0(\eps_1, \tau_*,\mathcal{K}_*)>1$ is a uniform constant. Indeed, there exists a uniform constant $C>0$ such that we can choose $B_0 = \mathcal{K}^{-1}_* + 2\tau_*^{-1} + C\varepsilon_1$ for all $\varepsilon_1$ sufficiently small. Let $dx = (dr, d\varphi) \in \mathcal{T}_xM$. Following \cite[Section 5.10]{chernov book}, we define the adapted norm $\| \cdot \|_*$ by \begin{equation} \label{inducenorm} \| dx \|_* = \frac{\mathcal{K}(x) + |\mathcal{V}|}{\sqrt{1 + \mathcal{V}^2}} \| dx \|, \; \; \; \forall dx \in C^s(x) \cup C^u(x) , \end{equation} where $\mathcal{V} = d\varphi/dr$ denotes the slope of $dx$ and $\| dx \| = \sqrt{dr^2 + d\varphi^2}$ is the Euclidean norm. 
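In this norm, the uniform expansion of unstable vectors stated in \eqref{eq:Fexp} below can be sketched as follows; we neglect the $\mathcal{O}(\eps_1)$ entries $a_i$, $b_i$ of \eqref{DTeps0}, which for $\eps_1$ small are absorbed by the slack in the final inequality. For $dx = (dr, d\varphi)$ with slope $\mathcal{V} = d\varphi/dr > 0$, we have $\| dx \|_* = (\mathcal{K} + \mathcal{V})|dr|$, and \eqref{DTeps0} gives \[ \cos\varphi_1 |dr_1| = \big( \cos\varphi + \tau(\mathcal{K}+\mathcal{V}) \big) |dr| , \qquad \mathcal{V}_1 = \mathcal{K}_1 + \frac{\cos\varphi_1 (\mathcal{K}+\mathcal{V})}{\cos\varphi + \tau(\mathcal{K}+\mathcal{V})} , \] so that \[ \frac{\| dx_1 \|_*}{\| dx \|_*} = \frac{(\mathcal{K}_1 + \mathcal{V}_1)|dr_1|}{(\mathcal{K}+\mathcal{V})|dr|} = 1 + \frac{2\mathcal{K}_1 \big( \cos\varphi + \tau(\mathcal{K}+\mathcal{V}) \big)}{\cos\varphi_1 (\mathcal{K}+\mathcal{V})} \ge 1 + \frac{2\mathcal{K}_1 \tau}{\cos\varphi_1} \ge 1 + 2\mathcal{K}_{\min}\tau_{\min} , \] which exceeds $\hat\Lambda = 1 + \mathcal{K}_{\min}\tau_{\min}/2$ with room to spare.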
Since the slopes of vectors in $C^s(x)$ and $C^u(x)$ are bounded away from $\pm \infty$, we may extend $\| \cdot \|_*$ to all of $\mathbb{R}^2$ in such a way that $\| \cdot \|_*$ is uniformly equivalent to $\| \cdot \|$. It is straightforward to check that for $dx \in C^u(x)$, \begin{equation} \label{eq:Fexp} \frac{\| dx_1\|_*}{\| dx \|_*} \ge \hat\Lambda:=1 + \mathcal{K}_{\min} \tau_{\min}/2. \end{equation} Finally, a simple calculation using \eqref{DTf} shows that there exists a constant $B_1= B_1(\mathcal{K}_*, \tau_{\min}, \tau_{\max}) > 0$ such that \begin{equation} \label{eq:cos exp} \frac{B_1^{-1}}{\cos \varphi(x_1)} \le \frac{\| dx_1 \|}{\| dx \|} \le \frac{B_1}{\cos \varphi(x_1)}, \; \; \; \mbox{for all } dx \in C^u(x) . \end{equation} Uniform expansion in $C^s(x)$ under $DT_{\mathbf{F}}^{-1}(x)$ follows similarly. (See also \cite[Sect. 3]{Ch01}.) \subsection{Hyperbolicity of the perturbed map $T_{\mathbf{F},\mathbf{G}}$} \label{hyp tfg} We are now ready to verify conditions {\bf (H1)}-{\bf (H5)} for the map $T_{\mathbf{F}, \mathbf{G}}$. We do this by fixing $\mathbf{F}$, $\mathbf{G}$ satisfying assumptions {\bf (A1)}-{\bf (A4)} with $|\mathbf{F}|_{\mathcal{C}^1}, |\mathbf{G}|_{\mathcal{C}^1} \le \varepsilon$ for some $\varepsilon \le \varepsilon_1$. We then compare $T = T_{\mathbf{F}, \mathbf{G}}$ with the related map $T_{\mathbf{F}} = T_{\mathbf{F}, \mathbf{0}}$. Since $\mathbf{G}$ preserves tangential collisions, the discontinuity set of $T$ is the same as that of $T_{\mathbf{F}}$, which comprises the preimage of $\mathcal{S}_0:=\{\varphi=\pm \pi/2\}$. Similarly, the singularity sets of $T^{-1}$ and $T_\mathbf{F}^{-1}$ are the same due to {\bf (A4)}. But the singular sets for higher iterates are not the same. Let $\mathcal{S}_{\pm n}^{T}= \cup_{i=0}^n T ^{\mp i}\mathcal{S}_{0,H}$ with $n\in \mathbb{N}$. Then $T^{\pm n}$ is smooth on $M\setminus \mathcal{S}_{\pm n}^{T}$.
For any phase point $ x =(r,\varphi)\in M$, let $T x =(\bar r_1,\bar \varphi_1)$ and $T_{\mathbf{F}} x =(r_1, \varphi_1)$. According to {\bf (A3)} and {\bf (A4) } and since we are on a fixed integral surface, we may express $\mathbf{G}$ in local coordinates via two smooth functions $g^1$ and $g^2$ such that $g^i(r,\pm \pi/2)=0$, $i=1,2$, and \begin{equation} \label{rphi} \bar r_1= r_1+g^1( r_1, \varphi_1)\,\,\,\,\,\text{ and }\,\,\,\,\,\,\bar \varphi_1=\varphi_1+g^2( r_1, \varphi_1) \end{equation} where $g^i$ is a $C^2$ function with $C^1$ norm uniformly bounded from above by $c_g\eps$, for some uniform constant $c_g>0$. According to (\ref{rphi}), the differential of $T$ satisfies \begin{equation} \label{drphi} d\bar r_1=\left(1+g^1_{1}( r_1, \varphi_1)\right) dr_1+g_{2}^1( r_1, \varphi_1) d\varphi_1\,\,\,\,\,\text{ and }\,\,\,\,\,\,d\bar \varphi_1=g^2_1( r_1, \varphi_1) dr_1+\left(1+g_2^2( r_1, \varphi_1)\right)d\varphi_1 \end{equation} where $g^i_1(r_1,\varphi_1)=\partial g^i/\partial r_1$ and $g^i_2(r_1,\varphi_1)=\partial g^i/\partial \varphi_1$. This implies \begin{align} \label{DTepsg} D T(x)&= \left( \begin{array}{cc} 1+g^1_1( r_1, \varphi_1) & g^1_2( r_1, \varphi_1) \\ g^2_1( r_1, \varphi_1) & 1+g^2_2( r_1, \varphi_1) \\ \end{array} \right)D T_{\mathbf{F}}(x) \end{align} Note that $T$ is not a $\mathcal{C}^1$ perturbation of $T_\mathbf{F}$ around the boundary of $M$. Furthermore, $T$ no longer preserves $\mu_{\mathbf{F}}$, the SRB measure for $T_\mathbf{F}$. 
However, it follows from \eqref{detTF} and \eqref{DTepsg} that \begin{align} \label{differential} |\det DT(x)|\leq \frac{\cos\varphi(x)}{\cos\bar \varphi_1(x)} \frac{\cos \bar \varphi_1(x)}{\cos \varphi_1(x)}(1+ C\eps) \le \frac{\cos\varphi(x)}{\cos\bar \varphi_1(x)} (1 + C_1 \varepsilon) \end{align} since by \eqref{rphi}, \begin{equation} \label{eq:equiv cos} \frac{\cos \bar \varphi_1(x)}{\cos \varphi_1(x)} = \frac{\cos (\varphi_1(x) + g^2(x_1))}{\cos \varphi_1(x)} \le (1 + C' \varepsilon) \end{equation} because $g^2(r, \pm \pi/2) = 0$ and $|\nabla g^2| \le C \varepsilon$. Clearly this implies condition {\bf (H5)}. The next proposition shows that although the perturbed maps do not have the same families of stable/unstable manifolds, they do share common families of stable and unstable cones. \begin{proposition}\label{cones} There exist two families of cones $C^u(x)$ (unstable) and $C^s(x)$ (stable) in the tangent spaces ${\mathcal{T}}_{ x} M$ and $\Lambda>1$, such that for all ${ x}\in M$: \begin{itemize} \item[(1)] $D T (C^u(x))\subset C^u(T x)$ and $D T (C^s (x))\supset C^s( T x)$ whenever $D T $ exists. \item[(2)] These families of cones are continuous on $M$ and the angle between $C^u(x)$ and $C^s(x)$ is uniformly bounded away from zero. \item[(3)] $\|D_{ x} T (v)\|_*\geq \Lambda \|v\|_*, \forall v\in C^u(x) \quad\text{and}\quad \|D_{ x}T ^{-1}(v)\|_*\geq \Lambda \|v\|_*, \forall v\in C^s(x)$. \end{itemize} \end{proposition} \begin{proof} For $ x\in M$ and any unit vector $d x\in \mathcal{T}_{ x}M$, let $d{ x}_1=D_{ x}T_\mathbf{F} d{ x}$ and $d\bar x_1=D_{ x}T \, d x$. Then by (\ref{drphi}) the slope $\bar\mathcal{V}_1$ of the vector $d{ \bar x}_1$ at $\bar x_1:=T { x}=(\bar r_1, \bar \varphi_1)$ satisfies \begin{align}\label{cVeps} \bar\mathcal{V}_1&=\frac{g^2_1+(1+g^2_2)\mathcal{V}_1}{1+g^1_1+g^1_2 \mathcal{V}_1}=\mathcal{V}_1+\mathcal{O}(\eps) , \end{align} where $\mathcal{V}_1$ is the slope of $d x_1$. So the cone $\bar \mathcal{C}^u(x)$ from \eqref{eq:unstable cone} may not be invariant under $DT(x)$.
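To quantify this (a rough bound, with constants not optimized), subtracting $\mathcal{V}_1$ in \eqref{cVeps} gives \[ |\bar\mathcal{V}_1-\mathcal{V}_1|=\left|\frac{g^2_1+(g^2_2-g^1_1)\mathcal{V}_1-g^1_2\mathcal{V}_1^2}{1+g^1_1+g^1_2\mathcal{V}_1}\right| \leq \frac{c_g\eps\,(1+|\mathcal{V}_1|)^2}{1-c_g\eps\,(1+|\mathcal{V}_1|)} , \] so for $|\mathcal{V}_1|\le B_0$ the image slope can exceed the boundary slope $B_0$ of $\bar\mathcal{C}^u$ by at most a constant times $\eps$.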
Accordingly, we define a slightly bigger cone, $$C^u(x)=\{(dr, d\varphi)\in \mathcal{T}_{x} M:\, B_0^{-1}(1-c_1\eps_1)\leq d\varphi/dr\leq B_0(1+c_2\eps_1)\}$$ for some constants $c_1, c_2>0$, and we use assumption (\textbf{A2}) to ensure that $c_i\eps_1<1/2$, $i=1,2$. By (\ref{DTf}), $DT_{\mathbf{F}}$ maps the first and third quadrants strictly inside themselves and shrinks any cones larger than the unstable cones. More precisely, let $V$ be a unit vector on the upper boundary of $C^u(x)$, with slope $\mathcal{V}=B_0(1+c_2\eps_1)$. Then by (\ref{DTf}) the slope of $DT_{\mathbf{F}} V$ satisfies $\mathcal{V}_1=\frac{C+D\mathcal{V}}{A+B\mathcal{V}} $, where we denote $$\left(\begin{array}{cc}A & B \\ C & D \end{array}\right) =\left(\begin{array}{cc}\tau\mathcal{K} +\cos\varphi+a_1& \tau+a_2 \\ \mathcal{K}( r_1)(\tau\mathcal{K}+\cos\varphi)+\mathcal{K}\cos\varphi_1+b_1 & \tau\mathcal{K}( r_1)+\cos\varphi_1+b_2\end{array}\right) . $$ It follows from the invariance of $\bar\mathcal{C}^u$ that $\frac{C+D B_0}{A+B B_0}<B_0$. One can easily check that $$\mathcal{V}_1=\frac{C+D B_0(1+c_2\eps_1)}{A+B B_0(1+c_2\eps_1)}<\mathcal{V}=B_0(1+c_2\eps_1) . $$ Similarly one checks that the lower boundary of the cone is also mapped inside the cone $C^u$. Combined with \eqref{cVeps}, this shows that $C^u$ is invariant under $DT$ for $\eps_1$ sufficiently small. Similarly we define the stable cone $C^s(x)$ as $$C^s(x)=\{(dr, d\varphi)\in \mathcal{T}_{ x} M:\, -B_0^{-1}(1-c_1\eps_1)\geq d\varphi/dr\geq -B_0(1+c_2\eps_1)\}.$$ Then one can check that the stable cone $C^{s}$ is strictly invariant under $DT^{-1} $ whenever $DT^{-1} $ exists for any $T\in \mathcal{F}$. From the definitions of $C^s(x)$ and $C^u(x)$, it is clear that the angle between them is bounded away from 0 on $M$. Thus items (1) and (2) of the proposition are proved.
To prove (3), note that \eqref{inducenorm} implies $$ \frac{\|d \bar x_1\|_*}{\|d x\|_*}=\frac{\|d \bar x_1\|_*}{\|d x_1\|_*}\frac{\|d x_1\|_*}{\|dx\|_*}=\frac{\|d x_1\|_*}{\|dx\|_*}\, \frac{\mathcal{K}(\bar r_1)|d\bar r_1|+|d\bar \varphi_1|}{\mathcal{K}(r_1)|d r_1|+ |d\varphi_1|} . $$ Using \eqref{rphi}, \eqref{drphi}, \eqref{eq:Fexp} and the fact that $\mathcal{K}(\cdot)$ is a $\mathcal{C}^1$ function on $M$, we conclude that for $\eps_1$ small enough, \begin{equation} \label{reducenorm} \frac{\|d \bar x_1\|_*}{\|d x\|_*}\geq \Lambda := 1 + \mathcal{K}_{\min} \tau_{\min}/3 . \end{equation} Similarly, one can show property (3) for stable cones, which we will not repeat here. \end{proof} Near grazing collisions, using \eqref{eq:cos exp} and \eqref{eq:equiv cos} along with \eqref{rphi} and \eqref{drphi}, we also have \begin{equation} \label{eq:cos tgf} \frac{B_1^{-1}(1-C\varepsilon_1)}{\cos \bar \varphi_1} \le \frac{\| d \bar x_1 \|}{\| dx \|} = \frac{\|d \bar x_1\|}{\|d x_1\|}\frac{\|d x_1\|}{\|dx\|} \le \frac{B_1(1+ C \varepsilon_1)}{\cos \bar \varphi_1} , \end{equation} which establishes \eqref{eq:expansion} in {\bf (H1)} since in the finite horizon case, there are only finitely many singularity curves so we may take $n$ in that formula to be 1. The last formula (\ref{eq:2 deriv}) in \textbf{(H1)} (again with $n=1$) follows directly from differentiating (\ref{DTepsg}) and using (\ref{DTf}) to recover this standard estimate for the unperturbed billiard (see \cite{katok} or \cite[Sect. 9.9]{Ch01} for the classical result). This finishes the verification of \textbf{(H1)}. \subsection{Regularity of stable and unstable curves} \label{stable curves} It follows from Proposition \ref{cones} that we may define common families of stable and unstable cones for all perturbations $T\in \mathcal{F}_B(Q_0, \tau_*, \varepsilon_1)$.
Recall the homogeneity strips $\bH_k$ defined in Section~\ref{class of maps} and that a homogeneous curve in $M$ is a curve that lies in a single homogeneity strip. In this subsection we will show that there is a class of $C^2$ smooth unstable homogeneous curves $\widehat\mathcal{W}^u$ in $M$ which is invariant under any $T\in \mathcal{F} $. Furthermore these curves are regular in the sense that they have uniformly bounded curvature and satisfy distortion bounds. Similarly, there is an invariant class of homogeneous stable curves, $\widehat \mathcal{W}^s$. \subsubsection{Curvature bounds} The next lemma, proved for $T_{\mathbf{F}}$ in \cite{Ch01}, states that the images of an unstable curve are essentially flattened under iteration of the map $T_\mathbf{F}$. \begin{lemma}\label{curvbdT0} Let $W\subset M$ be a $C^2$-smooth unstable curve with equation $\varphi_0=\varphi_0(r_0)$ such that $T_\mathbf{F}^iW$ is a homogeneous unstable curve for each $0 \le i \le n$. Then $T_\mathbf{F}^n W$ has equation $\varphi_{n}=\varphi_n(r_n)$ which satisfies: \begin{equation}\label{curvt0}|\frac{d^2\varphi_n}{dr^2_n}|\leq C_1+ \theta^{3n}|\frac{d^2\varphi_0}{d r_0^2}|\leq C_2\end{equation} where $C_1=C_1(Q)$ and $C_2=C_2(Q)$ are constants and $\theta\in (0,1)$. Furthermore, for any $C^2$-smooth unstable curve $W$, there exists $n_W\geq 1$, such that for any $n>n_W$, every smooth component of $T_\mathbf{F}^n W$ has uniformly bounded curvature. \end{lemma} One can obtain a similar bounded curvature property for the perturbed map $T $. \begin{proposition} \label{curvbd}(Curvature bounds) Let $W$ be any $C^2$ smooth unstable curve. Then there exist $n_W\geq 1$ and $C_b >0$ such that every smooth curve $W'\subset T ^n W$ with equation $\bar\varphi_n=\bar\varphi_n(\bar r _n)$ satisfies \begin{equation}\label{slopebd}|d^2\bar\varphi_n/d\bar r _n^2|\leq C_b, \; \; \; \mbox{for } n > n_W .
\end{equation} \end{proposition} \begin{proof} We fix any phase point $ \bar x_0:=x \in W$, denote $x_n = (r_n, \varphi_n) =T_{\mathbf{F}}^n x$ and $\bar x_n = (\bar r_n, \bar \varphi_n) =T^n x$. According to \eqref{drphi}, the slope of the vector $DT \,d \bar x $ satisfies \begin{align} \label{dbvarphidbr1} \frac{d\bar\varphi_1}{ d\bar r_1}=\frac{g^2_1+(1+g^2_2)\mathcal{V}_1}{1+g^1_1+g^1_2 \mathcal{V}_1}=\mathcal{V}_1+\frac{g^2_1+g^2_2\mathcal{V}_1-g^1_1\mathcal{V}_1-g^1_2 \mathcal{V}_1^2}{1+g^1_1+g^1_2 \mathcal{V}_1}, \end{align} where $\mathcal{V}_1=d\varphi_1/dr_1$, $\bar \mathcal{V}_1=d\bar\varphi_1/d\bar r_1$. We differentiate the above equality with respect to $r_1$, using the fact that by (\ref{drphi}), $\displaystyle \frac{d\bar r_1}{dr_1}=1+g^1_1 +g^1_2\mathcal{V}_1 $. Now using the same notation as in Lemma \ref{curvbdT0}, we get for some $C_0>0$ and $C_3>0$ \begin{equation} \label{evodphi} |\frac{d^2\bar\varphi_1}{d\bar r_1^2}| \leq C_0+(1+C_3\eps_1)\theta^{3}|\frac{d^2\bar\varphi_0}{d\bar r ^2_0}|, \end{equation} since $d^2\bar \varphi_0/ d\bar r_0^2 = d^2 \varphi_0 / dr_0^2$. By choosing $\eps_1$ small one can make $(1+\eps_1 C_3)\theta^2<1$. Then we have for any $n\geq 1$, $$|\frac{d^2\bar\varphi_n}{d\bar r_n^2}| \leq \frac{C_0}{1-\theta}+\theta^{n}|\frac{d^2\bar\varphi_0}{d\bar r_0^2}| . $$ Since $W$ is $\mathcal{C}^2$, there exists $C_1=C_1(W)>0$ such that $|\frac{d^2\bar\varphi_0}{d\bar r_0^2}|<C_1$. We fix a constant $C_b=C_b(Q)>0$ and define $$n_W=|\frac{\ln (C_b/C_1)}{\ln \theta}|.$$ Then for any $n>n_W$, every connected component of $T^n W$ has equation $\bar\varphi_n=\bar\varphi_n(\bar r_n)$ with second derivative bounded from above by $C_b$. \end{proof} We now fix the constant $C_b>0$, then define $\widehat \mathcal{W}^u$ to be the class of all homogeneous unstable curves $W$ whose curvature is uniformly bounded by $C_b$.
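For completeness, we record the iteration behind the proof above: writing $a_n:=|d^2\bar\varphi_n/d\bar r_n^2|$ and $\rho:=(1+C_3\eps_1)\theta^{3}$, the choice of $\eps_1$ guarantees $\rho\leq\theta<1$, and induction on the one-step bound $a_{n+1}\leq C_0+\rho\, a_n$ from \eqref{evodphi} yields \[ a_n\leq C_0\sum_{k=0}^{n-1}\theta^{k}+\theta^{n}a_0\leq \frac{C_0}{1-\theta}+\theta^{n}a_0 . \]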
It follows from Propositions~\ref{cones} and \ref{curvbd} that the class $\widehat\mathcal{W}^u$ is invariant under any $T \in\mathcal{F}$. Any unstable curve $W\in \widehat\mathcal{W}^u$ is called a regular unstable curve. Similarly one defines $\widehat\mathcal{W}^s$. This verifies condition \textbf{(H2)}. \subsubsection{Distortion bounds} In this subsection, we establish the distortion bounds for $T$ required by {\bf (H4)}. For any stable curve $W \in \widehat \mathcal{W}^s$ and $ x \in W$, denote by $J_W T_\mathbf{F}( x )$ (resp. $J_W T ( x )$) the Jacobian of $T_\mathbf{F}$ (resp. $T$) along $W$ at $ x \in W$. It was shown in \cite{Ch01} that there exists $C_1>0$, such that for any regular stable curve $W$ for which $T_\mathbf{F} W$ is also a regular stable curve, \begin{equation} \label{distT0} |\ln J_W T_\mathbf{F}( x )-\ln J_W T_\mathbf{F}( y)| \leq C_1 d_W( x , y)^{\frac{1}{3}} \end{equation} where $d_W( x , y)$ is the arclength between $ x $ and $ y$ along $W$. We show that $T$ has the same properties on the set of all regular stable curves $\widehat \mathcal{W}^s$. \begin{lemma}\label{distorbd}(Distortion bounds) Let $T \in \mathcal{F}$ and $W \in \widehat \mathcal{W}^s$ be such that $T$ is smooth on $W$ and $TW \in \widehat \mathcal{W}^s$. There exists $C_J>0$ independent of $W$ and $\mathbf{F}$ such that $$|\ln J_W T ( x )-\ln J_W T ( y)|\leq C_J d_W( x , y)^{\frac{1}{3}} .$$ \end{lemma} \begin{proof} Fix $T \in \mathcal{F}$ and $W \in \widehat\mathcal{W}^s$ for which $TW \in \widehat \mathcal{W}^s$. This implies in particular that both $T$ and $T_\mathbf{F}$ are smooth on $W$. For any $x=(r,\varphi)\in W$, let $x_1:=T_\mathbf{F} x=(r_1,\varphi_1)$ and $\bar x_1=T x=(\bar r_1,\bar \varphi_1)$. Similarly, let $dx = (dr, d\varphi) \in \mathcal{T}_xW$ be a unit vector and define $dx_1 = DT_\mathbf{F}(x) dx = (dr_1, d\varphi_1)$ and $d\bar x_1 = DT(x) dx = (d\bar r_1, d\bar \varphi_1)$.
Then \[ \frac{J_W T ( x )}{J_W T_{\mathbf{F}}( x )} = \sqrt{\frac{1+\bar \mathcal{V}^2_{1}}{1+\mathcal{V}^2_{1}}} \frac{|d\bar r_1|}{|d r_1|} \] where $\mathcal{V}_{1} = d\varphi_1/dr_1$ and $\bar \mathcal{V}_{1} = d\bar \varphi_1/ d\bar r_1$. Then it follows from \eqref{drphi} that \begin{equation} \label{eq:jacs} \ln J_W T ( x )=\ln J_W T_\mathbf{F}( x )+ \frac{1}{2}\ln (1+\bar \mathcal{V}_{1}^2)- \frac{1}{2}\ln (1+\mathcal{V}_{1}^2)+\ln |1+g^1_1 +g^1_2\mathcal{V}_1| . \end{equation} By the smoothness of $W$ and the curvature bounds, there exists $C>0$ such that for any $ x , y\in W$, $$|\ln (1+\mathcal{V}_{1}^2( x_1 ))-\ln (1+\mathcal{V}_{1}^2( y_1))|\leq |\mathcal{V}_{1}^2( x_1 )-\mathcal{V}_{1}^2( y_1)|\leq C d_{T_\mathbf{F} W}( x_1 , y_1) \le C' d_W(x,y),$$ where $y_1 = T_\mathbf{F} y$, and similarly for $\bar \mathcal{V}_1$. Since $\mathbf{G}$ is $C^2$, the terms involving $g^1_1$ and $g^1_2$ satisfy a Lipschitz bound as well. Putting this together with \eqref{distT0} and \eqref{eq:jacs} proves the lemma. \end{proof} In general, for $W \in \widehat \mathcal{W}^s$ and $n\in \mathbb{N}$, suppose $T^n$ is smooth on $W$ and that $T^k W \in \widehat \mathcal{W}^s$, $0 \le k \le n$. Define $T^k W = W_k$ and for $x, y \in W$, let $x _k=T^k x $ and $ y_k=T^k y$. Then \begin{equation} \begin{split} \label{eq:dist ext} |\ln J_W T^n( x )-\ln J_W T^n ( y)| & \leq \sum_{k=0}^{n-1}|\ln J_{W_k}T ( x _k)-\ln J_{W_k}T ( y_k)| \\ & \leq C\sum_{k=0}^{n-1}d_{W_k}( x _k, y_k)^{1/3} \le C d_W(x,y)^{1/3} \sum_{k=0}^\infty \Lambda^{-k/3}, \end{split} \end{equation} due to \eqref{reducenorm}. This completes the required estimate on $J_WT$. Finally, we prove the required bounded distortion estimate for $J_\mu T$.
By \eqref{eq:full detTf} and \eqref{DTepsg}, we have \begin{equation} \label{eq:det formula} \det DT(x) = \frac{\cos \varphi}{\cos \varphi_1} \big( (1+c_2)(1+c_3) - c_1(\tau + c_4) \big) \big( (1 + g^1_1)(1+ g^2_2) - g^1_2 g^2_1 \big) =: \frac{A(x)}{\cos \bar \varphi_1}, \end{equation} where $c_1, \ldots, c_4$ are defined by \eqref{III} and we have replaced $\cos \varphi_1$ with $\cos \bar \varphi_1$ times a smooth function on $M \setminus \mathcal{S}_1^T$ due to \eqref{eq:equiv cos}. Note that $A(x)$ is a smooth function of its argument wherever $T$ is smooth and has bounded $C^1$ norm on $M \setminus \mathcal{S}_1^T$. It follows that $J_\mu T$ is a smooth function on $M \setminus \mathcal{S}_1^T$ which is bounded between $1 - C\varepsilon_1$ and $1 + C\varepsilon_1$ and has uniformly bounded $\mathcal{C}^1$-norm, for some uniform constant $C$ depending on the table (recall that $d\mu = c \cos \varphi \, dm$ is the smooth invariant measure for the unperturbed billiard $T_{\mathbf{0},\mathbf{0}}$). The required distortion estimates \eqref{eq:distortion stable} and \eqref{eq:D u dist} for $J_\mu T$ follow using this smoothness and the uniform hyperbolicity of $T$ as in \eqref{eq:dist ext}. Indeed, \eqref{eq:dist ext} holds with exponent 1 rather than $1/3$ for $J_\mu T$. This completes the verification of {\bf (H4)}. Distortion bounds for $\det DT$ with exponent $1/3$ follow from the above considerations in addition to recalling that $1/\cos \varphi$ is of order $k^2$ in $\bH_k$, while the width of such a strip along a stable or unstable curve is $k^{-3}$. Similarly, one may prove absolute continuity of the holonomy map between unstable leaves as in \cite{Ch01}, but we do not do that here since we do not need this fact.
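The exponent $1/3$ can be traced by a simple computation (a heuristic with constants suppressed, not part of the formal verification): on $\bH_k$ one has $\cos\varphi\asymp k^{-2}$ and $|\frac{d}{d\varphi}\ln\cos\varphi|=|\tan\varphi|\asymp k^{2}$, while a stable or unstable curve crossing $\bH_k$ has length $\lesssim k^{-3}$; hence for $x,y$ on such a curve $W$, \[ |\ln\cos\varphi(x)-\ln\cos\varphi(y)|\lesssim k^{2}\, d_W(x,y)\leq k^{2}\,(k^{-3})^{2/3}\, d_W(x,y)^{1/3}= d_W(x,y)^{1/3} . \]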
\subsection{One step expansion} Since we have established the expansion factors given by \eqref{reducenorm} and \eqref{eq:cos tgf}, the one-step expansion condition \eqref{eq:step1} follows from an argument similar to the unperturbed case (see \cite[Lemma 5.56]{chernov book}) and fixes the choice of $k_0 \in \mathbb{N}$, the minimum index of the homogeneity strips. We will not reprove that lemma here. Instead, we focus on the second part of {\bf (H3)}, given by \eqref{eq:weakened step1}. Fix $\delta_0 > 0$ and $k_0$ satisfying \eqref{eq:one step contract} and define $\mathcal{W}^s$ accordingly. For $W \in \mathcal{W}^s$, let $V_i$ denote the maximal homogeneous connected components of $T^{-1}W$. \begin{lemma} \label{lem:sigma} For any $\varsigma>1/2$, there exists $C=C(\delta_0, \varsigma,\eps_1)>0$ such that for any $W \in \mathcal{W}^s$, any $T\in \mathcal{F}_B(Q_0, \tau_*, \varepsilon_1)$, \begin{equation}\label{step2} \sum_i \frac{|TV_i|^{\varsigma}}{|V_i|^{\varsigma}} < C . \end{equation} \end{lemma} \begin{proof} According to the structure of singular curves, a stable curve of length $\le \delta_0$ can be cut by at most $N \le \tau_{\max}/\tau_{\min}$ singularity curves in $\mathcal{S}_{-1}^{T}$ (see \cite[\S 5.10]{chernov book}). For each $s \in \mathcal{S}_{-1}^{T}$ intersecting $W$, $W$ is cut further by images of the boundaries of the homogeneity strips $\bH_k$, $k \geq k_0$. For one such $s$, we relabel the components $V_i$ of $T^{-1}W$ on which $T$ is smooth by $V_k$, $k$ corresponding to the homogeneity strip $\bH_k$ containing $V_k$. By \eqref{eq:cos tgf}, there exists $c_1=c_1(\varepsilon_1)>0$ such that on $TV_k$, the expansion under $T^{-1}$ is $\geq c_1 k^2$. So for all $\varsigma>1/2$, \begin{equation} \label{eq:one sum} \sum_{k\geq k_0}\frac{|TV_k|^{\varsigma}}{|V_k|^{\varsigma}} \leq c_1^{-\varsigma}\sum_{k\geq k_0} \frac{1}{k^{2\varsigma}}\leq \frac{c_1^{-\varsigma}}{2\varsigma-1}\,(k_0-1)^{1-2\varsigma} .
\end{equation} An upper bound for \eqref{step2} in this case is given by $N$ times the bound in \eqref{eq:one sum}. \end{proof} This completes the verification of {\bf (H1)}-{\bf (H5)} and finishes the proof of Theorem~\ref{thm:C1}. \subsection{Smallness of the perturbation} In this section, we check that conditions {\bf (C1)}-{\bf (C4)} are satisfied for $\varepsilon_1$ sufficiently small. We will then be able to apply Theorem~\ref{thm:C2} to any map $T \in \mathcal{F}_B(Q_0, \tau_*, \varepsilon_1)$. We fix $\eps\in (0,\eps_1)$ and choose any $T:=T_{\mathbf{F}, \mathbf{G}}\in \mathcal{F}_B(Q_0, \tau_*, \varepsilon_1)$, such that $|\mathbf{F}|_{\mathcal{C}^1}, |\mathbf{G}|_{\mathcal{C}^1} \le \eps$. By the triangle inequality, it suffices to estimate $d_\mathcal{F}(T_0, T)$ where $T_0 = T_{\mathbf{0}, \mathbf{0}}$ is the unperturbed billiard map. Denote by $\Phi^t$ the flow corresponding to $T$ and by $\Phi_0^t$ the flow corresponding to $T_0$. Let $x \in M \setminus (\mathcal{S}_{-1}^T\cup \mathcal{S}_{-1}^{T_0})$. By the facts summarized in Section~\ref{flow review}, $\Phi^t(x)$ and $\Phi_0^t(x)$ can be no further than a uniform constant times $\varepsilon t$ apart on the billiard table. Thus since $T$ has finite horizon bounded by $\tau_{\max}$ and the scatterers have uniformly bounded curvature, $T_{\mathbf{F},\mathbf{0}}(x)$ and $T_0(x)$ can be no more than a constant times $\sqrt{\varepsilon}$ apart if they lie on the same scatterer. By the smallness of $\mathbf{G}$ and \eqref{rphi}, we have $d_M(T_{\mathbf{F}, \mathbf{0}}(x), T_{\mathbf{F}, \mathbf{G}}(x)) < C\varepsilon$ and thus by the triangle inequality, $d_M(T(x), T_0(x)) < C_f \sqrt{\varepsilon}$ for some uniform $C_f >0$ as long as they lie on the same scatterer. A similar bound holds for $T^{-1}x$ and $T_0^{-1}x$. Let $\epsilon = C_f \varepsilon^{1/3}$. It then follows that for any $x\notin N_{\epsilon}(\mathcal{S}_{-1}^T\cup \mathcal{S}_{-1}^{T_0})$, $d_M(T^{-1}(x), T^{-1}_0(x))< \epsilon$. This is {\bf (C1)}.
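The square-root loss in passing from the flow to the collision map can be seen from elementary geometry (a sketch; constants depend only on $\mathcal{K}_*$ and $\tau_{\max}$): a line passing at depth $\delta$ inside a disk of curvature $\mathcal{K}$ cuts a chord of half-length \[ \ell(\delta)=\sqrt{\mathcal{K}^{-2}-(\mathcal{K}^{-1}-\delta)^2}=\sqrt{2\delta/\mathcal{K}-\delta^2}\leq \sqrt{2\delta/\mathcal{K}} , \] so displacing the incoming trajectory by $\varepsilon$ can move a near-grazing collision point along the scatterer by $\mathcal{O}(\sqrt{\varepsilon})$, while away from grazing the displacement is $\mathcal{O}(\varepsilon)$.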
To establish {\bf (C2)}, we use the fact that $J_\mu T_0 \equiv 1$ while \[ J_\mu T(x) = \big( (1+c_2)(1+c_3) - c_1(\tau + c_4) \big) \big( (1 + g^1_1)(1+ g^2_2) - g^1_2 g^2_1 \big) \] by \eqref{eq:det formula}. Since the functions here are all bounded by uniform constants times $\varepsilon$ and our horizon is bounded by $\tau_{\max}$, {\bf (C2)} is satisfied. Next, we prove {\bf (C4)}. Inverting \eqref{DTepsg} and \eqref{DTf} and using \eqref{eq:det formula}, we have \[ DT^{-1}(x) = \frac{-1}{A(T^{-1}x) \cos \varphi(T^{-1}x)} \left( \begin{array}{cc} B + b_2 & C - a_2 \\ D - b_1 & E + a_1 \end{array} \right) \left( \begin{array}{cc} 1 + g^2_2 & - g^1_2 \\ - g^2_1 & 1 + g^1_1 \end{array} \right) , \] where $A$ is the smooth function from \eqref{eq:det formula} and $B = \tau(T^{-1}x) \mathcal{K}(x) + \cos \varphi(x)$, $C = - \tau(T^{-1}x)$, $$D = -\mathcal{K}(T^{-1}x)(\tau(T^{-1}x) \mathcal{K}(x) + \cos \varphi(x)) - \mathcal{K}(x) \cos \varphi(T^{-1}x),\,\,\,\,\, \text{ and }\,\,\,\,\,E = \tau(T^{-1}x) \mathcal{K}(T^{-1}x) + \cos \varphi(T^{-1}x)$$ match the corresponding entries of $DT_0^{-1}(x)$ with $T$ replaced by $T_0$. We split the matrix product as \[ \left( \left( \begin{array}{cc} B & C \\ D & E \end{array} \right) + \left( \begin{array}{cc} b_2 & - a_2 \\ - b_1 & a_1 \end{array} \right) \right) \left( I + \left( \begin{array}{cc} g^2_2 & - g^1_2 \\ - g^2_1 & g^1_1 \end{array} \right) \right) =: F + R , \] where $F = \left( \begin{array}{cc} B & C \\ D & E \end{array} \right)$ and $R$ is a matrix whose entries are smooth functions, all bounded by a uniform constant times $\varepsilon$.
Now defining $F_0$ to be the matrix $F$ with $T_0$ replacing $T$, we write, \begin{equation} \label{eq:matrix} \begin{split} & \| DT^{-1}(x) - DT_0^{-1}(x) \| = \Big\| \frac{F + R}{A(T^{-1}x) \cos \varphi(T^{-1} x)} - \frac{ 1}{\cos \varphi(T_0^{-1}x)} F_0 \Big\| \\ & \le \frac{\| F - F_0 \|}{|A(T^{-1}x) \cos \varphi(T^{-1}x)|} + \| F_0 \| \left| \frac{1}{A(T^{-1}x) \cos \varphi(T^{-1}x)} - \frac{1}{\cos \varphi(T_0^{-1}x)} \right| + \frac{\| R \|}{|A(T^{-1}x) \cos \varphi(T^{-1}x)|} . \end{split} \end{equation} Notice that if $x \notin N_\epsilon(\mathcal{S}_{-1}^T \cup \mathcal{S}_{-1}^{T_0})$, then due to the uniform expansion given by \eqref{eq:cos tgf} and the uniform transversality of the stable cone with $\mathcal{S}_0$, we have $d_M(T^{-1}x, \mathcal{S}_0) \ge C\sqrt{\epsilon}$, for some uniform constant $C$. Thus $\cos \varphi(T^{-1}x) \ge C' \sqrt{\epsilon}$ for some uniform constant $C' > 0$. The same fact is true for $T_0^{-1}x$. Using this, plus the fact that the entries of $F$ and $F_0$ are smooth functions of their arguments with uniformly bounded $\mathcal{C}^1$ norms, we estimate the first term of \eqref{eq:matrix} by \[ \frac{\| F - F_0 \|}{|A(T^{-1}x) \cos \varphi(T^{-1}x)|} \le C \epsilon^{-1/2} d_M(T^{-1}x, T_0^{-1}x) \le C' \epsilon^{-1/2} \varepsilon^{1/2} = C' C_f \epsilon \] since the $C^1$ norm of $A$ is bounded above and below by $1\pm C\varepsilon$ by \eqref{eq:det formula}. Similarly, the third term of \eqref{eq:matrix} is bounded by $C \epsilon$. Since $\| F_0 \|$ is uniformly bounded, we split the middle term of \eqref{eq:matrix} into the sum of two terms, \[ \left| \frac{1}{A(T^{-1}x) \cos \varphi(T^{-1}x)} - \frac{1}{\cos \varphi(T_0^{-1}x)} \right| \le \frac{1}{\cos \varphi(T^{-1}x)} \left| \frac{1}{A(T^{-1}x)} - 1 \right | + \left| \frac{1}{\cos \varphi(T^{-1}x)} - \frac{1}{\cos \varphi(T_0^{-1}x)} \right| . 
\] As noted earlier, the $\mathcal{C}^1$ norm of $A$ is bounded above and below by $1 \pm C \varepsilon$ so that the first difference above is bounded by $C \epsilon^{-1/2} \varepsilon \le C C_f \epsilon$. The second difference is bounded by $C \epsilon^{-1} d_M(T^{-1}x, T_0^{-1}x) \le C' \epsilon^{-1} \varepsilon^{1/2} = C' C_f \epsilon^{1/2}$, similar to the estimate \eqref{eq:cos diff}. Putting these estimates together in \eqref{eq:matrix} proves {\bf (C4)} with $\epsilon = C \varepsilon^{1/3}$. Condition {\bf (C3)} follows similarly using the fact that $J_WT(x) = \| DT(x)v \|$ where $v \in \mathcal{T}_xW$ is a unit vector. The exponent of $\epsilon$ in {\bf (C3)} is better than in {\bf (C4)} by a factor of $\epsilon^{1/2}$ since we must estimate $\left| \frac{\cos \varphi(T^{-1}x)}{\cos \varphi(T_0^{-1}x)} - 1 \right|$ in place of $\left| \frac{1}{\cos \varphi(T^{-1}x)} - \frac{1}{\cos \varphi(T_0^{-1}x)} \right| $. \end{document}
Systolic freedom In differential geometry, systolic freedom refers to the fact that closed Riemannian manifolds may have arbitrarily small volume regardless of their systolic invariants. That is, systolic invariants or products of systolic invariants do not in general provide universal (i.e. curvature-free) lower bounds for the total volume of a closed Riemannian manifold. Systolic freedom was first detected by Mikhail Gromov in an I.H.É.S. preprint in 1992 (which eventually appeared as Gromov 1996), and was further developed by Mikhail Katz, Michael Freedman and others. Gromov's observation was elaborated on by Marcel Berger (1993). One of the first publications to study systolic freedom in detail is by Katz (1995). Systolic freedom has applications in quantum error correction. Croke & Katz (2003) survey the main results on systolic freedom. Example The complex projective plane admits Riemannian metrics of arbitrarily small volume, such that every essential surface is of area at least 1. Here a surface is called "essential" if it cannot be contracted to a point in the ambient 4-manifold. Systolic constraint The opposite of systolic freedom is systolic constraint, characterized by the presence of systolic inequalities such as Gromov's systolic inequality for essential manifolds. References • Berger, Marcel (1993), "Systoles et applications selon Gromov", Séminaire Bourbaki (in French), 1992/93. Astérisque 216, Exp. No. 771, 5, 279–310. • Croke, Christopher B.; Katz, Mikhail (2003), "Universal volume bounds in Riemannian manifolds", Surveys in differential geometry, VIII (Boston, MA, 2002), Somerville, MA: Int. Press, pp. 109–137. • Freedman, Michael H. (1999), "Z2-systolic-freedom", Proceedings of the Kirbyfest (Berkeley, CA, 1998), Geom. Topol. Monogr., vol. 2, Coventry: Geom. Topol. Publ., pp. 113–123. • Freedman, Michael H.; Meyer, David A.; Luo, Feng (2002), "Z2-systolic freedom and quantum codes", Mathematics of quantum computation, Comput. Math. 
Ser., Boca Raton, FL: Chapman & Hall/CRC, pp. 287–320. • Freedman, Michael H.; Meyer, David A. (2001), "Projective plane and planar quantum codes", Found. Comput. Math., vol. 1, pp. 325–332. • Gromov, Mikhail (1996), "Systoles and intersystolic inequalities", Actes de la Table Ronde de Géométrie Différentielle (Luminy, 1992), Sémin. Congr., vol. 1, Paris: Soc. Math. France, pp. 291–362. • Katz, Mikhail (1995), "Counterexamples to isosystolic inequalities", Geom. Dedicata, 57 (2): 195–206, doi:10.1007/bf01264937, S2CID 11211702.
September 2018, 23(7): 2661-2678. doi: 10.3934/dcdsb.2017185 Positive symplectic integrators for predator-prey dynamics Fasma Diele and Carmela Marangi Istituto per Applicazioni del Calcolo M. Picone, CNR, Bari, via Amendola 122/D, Italy Received October 2016 Revised May 2017 Published July 2017 We propose novel positive numerical integrators for approximating predator-prey models. The schemes are based on suitable symplectic procedures applied to the dynamical system written in terms of the log transformation of the original variables. Even if this approach is not new when dealing with Hamiltonian systems, it is of particular interest in population dynamics since the positivity of the approximation is ensured without any restriction on the temporal step size. When applied to separable M-systems, the resulting schemes are proved to be explicit, positive, Poisson maps. The approach is generalized to predator-prey dynamics which do not exhibit an M-system structure and successively to reaction-diffusion equations describing spatially extended dynamics. A classical polynomial Krylov approximation for the diffusive term, combined with the proposed schemes for the reaction, allows us to propose numerical schemes which are explicit when applied to well established ecological models for predator-prey dynamics. Numerical simulations show that the considered approach provides results which outperform the numerical approximations found in recent literature. Keywords: Positive numerical integration, symplectic integrators, Poisson systems, predator-prey dynamics, Rosenzweig-MacArthur model. Mathematics Subject Classification: Primary: 37M15; Secondary: 65P10. Citation: Fasma Diele, Carmela Marangi. Positive symplectic integrators for predator-prey dynamics.
Discrete & Continuous Dynamical Systems - B, 2018, 23 (7) : 2661-2678. doi: 10.3934/dcdsb.2017185
Part 2: Oblique boundary conditions, SIAM Journal on Scientific Computing, 38 (2016), A3741-A3757. doi: 10.1137/16M1056250. Google Scholar M. R. Garvie, Finite-difference schemes for reaction-diffusion equations modeling predator-prey interactions in MATLAB, Bulletin of Mathematical Biology, 69 (2007), 931-956. doi: 10.1007/s11538-006-9062-3. Google Scholar M. R. Garvie and M. Golinski, Metapopulation dynamics for spatially extended predator-prey interactions, Ecological Complexity, 7 (2010), 55-59. doi: 10.1016/j.ecocom.2009.05.001. Google Scholar M. R. Garvie and C. Trenchea, Finite element approximation of spatially extended predator-prey interactions with the Holling type Ⅱ functional response, Numerische Mathematik, 107 (2007), 641-667. doi: 10.1007/s00211-007-0106-x. Google Scholar E. Hairer, C. Lubich and G. Wanner, Geometric Numerical Integration. Structure-Preserving Algorithms for Ordinary Differential Equations Springer-Verlag, Berlin, 2006. Google Scholar E. Hansen, F. Kramer and A. Ostermann, A second-order positivity preserving scheme for semilinear parabolic problems, Applied Numerical Mathematics, 62 (2012), 1428-1435. doi: 10.1016/j.apnum.2012.06.003. Google Scholar T. Koto, IMEX Runge-Kutta schemes for reaction-diffusion equations, Journal of Computational and Applied Mathematics, 215 (2008), 182-195. doi: 10.1016/j.cam.2007.04.003. Google Scholar D. Lacitignola, F. Diele, C. Marangi and A. Provenzale, On the dynamics of a generalized predator-prey system with Z-type control, Mathematical Biosciences, 280 (2016), 10-23. doi: 10.1016/j.mbs.2016.07.011. Google Scholar J. Martinez-Linares, Phase space formulation of population dynamics in ecology, preprint, arXiv: q-bio.PE/1304.2324v. Google Scholar A. B. Medvinsky, S. V. Petrovskii, I. A. Tikhonova, H. Malchow and B.-L. Li, Spatiotemporal complexity of plankton and fish dynamics, SIAM Review, 44 (2002), 311-370. doi: 10.1137/S0036144502404442. Google Scholar M. Mehdizadeh Khalsaraei and R. 
Shokri Jahandizi, Positivity-preserving nonstandard finite difference schemes for simulation of advection-diffusion reaction equation, Computational Methods for Differential Equations, 2 (2014), 256-267. Google Scholar J. H. Merkin, D. J. Needham and S. K. Scott, The Development of Travelling Waves in a Simple Isothermal Chemical System Ⅰ. Quadratic Autocatalysis with Linear Decay, Proceedings of the Royal Society of London. Series A, Mathematical and Physical Sciences, 424 (1989), 187-209. doi: 10.1098/rspa.1989.0075. Google Scholar R. E. Mickens, Nonstandard finite difference schemes for reaction-diffusion equations, Numerical Methods for Partial Differential Equations, 15 (1999), 201-214. doi: 10.1002/(SICI)1098-2426(199903)15:2<201::AID-NUM5>3.0.CO;2-H. Google Scholar M. Rosenzweig and R. MacArthur, Graphical representation and stability conditions of predator-prey interactions, The American Naturalist, 97 (1963), 209-223. doi: 10.1086/282272. Google Scholar V. Thomée, On Positivity Preservation in Some Finite Element Methods for the Heat Equation. In: Dimov I. , Fidanova S. , Lirkov I. (eds) Numerical Methods and Applications. NMA 2014. Lecture Notes in Computer Science, Springer, 8962 (2015), 13-24 doi: 10.1007/978-3-319-15585-2_2. Google Scholar E. H. Twizell, Y. Wang, W. G. Price and F. Fakhr, Finite-difference methods for solving the reaction-diffusion equations of a simple isothermal chemical system, Numerical Methods for Partial Differential Equations, 10 (1994), 435-454. doi: 10.1002/num.1690100404. Google Scholar Figure 1. On the left: positive first-order schemes (7) and (8) compared with the symplectic Euler (SE) method, its explicit variant (EVSE) applied to the LV system at $T=8.3$, with $u_0 = 0.2$, $v_0 = 1.1$ and $\Delta t = 1.1$. Parameters: $a =b =1$. 
On the right: numerical accuracy of Poisson integrators at $T=10$, including Strang splitting (SS) and Yoshida composition (YC), applied to the LV system with $\Delta t = 1/k$, for $k = 3,\dots,8$. Parameters: $a = b = 0.5$. Initial values: $u_0 = v_0 = 0.2$
Figure 2. Positive symplectic Euler (17) compared with the explicit Euler method applied to the Z-controlled LV dynamics (21) with $u_0 = v_0 = 40$ and $\Delta t = 0.1$. Parameters: $\alpha=\delta=0.6$, $\beta=\gamma=0.01$, $u_d=100$, $\lambda=1.4$. Phase space portrait (left), predator function versus time (right)
Figure 3. Plots of the concentration profiles of $u(x,t)$ (right) and $v(x,t)$ (left) with positive Lie splitting (solid line) and the nonstandard positive method (dashed line) at $t=100$. Step sizes $h=0.4$ and $\Delta t=0.32$ for positive Lie splitting. Refinements are obtained with $h=0.4, 0.8, 0.08$ and $\Delta t=0.32, 0.8, 0.032$ for the nonstandard positive approximations
Figure 4. Prey density approximations with IMEX (left), IMSP (center) and $\Phi^{(RM)}$ (right) schemes for different temporal step sizes: $\Delta t = 1/3, 1/24, 1/384$ (left and center columns), $\Delta t = 1, 1/3, 1/24$ (right column)
Fasma Diele and Carmela Marangi
What payloads and launch speeds could a sling launcher get using modern materials on the Moon? The only reference paper I have found on this subject is this paper by Landis, which is a great introduction to sling launchers - but the concept has been around for some time and there must be other treatments. At any rate, it is a simple device where a motor on a tower spins two cables - one bearing a payload and the other a counterweight - so fast that when released the payload flies into orbit or beyond. Modern motors are capable of doing this, and would even be highly efficient, if the cables used are very long - on the order of 50 km or more. The cables have to be incredibly strong, that's all. The paper looks at the use of modern materials for such a device, but only briefly because the cables would be huge and extremely costly. It moves on to future possibilities using carbon nanotube cables. If fullerene materials are not available, the concept could be implemented with existing materials. This increases the tether mass, and the sling launch becomes more difficult, but not impossible. The highest strength-to-weight ratios for currently available tether materials are obtained with Poly(p-phenylene-2,6-benzobisoxazole), or "PBO" fibers, or with gel-spun polyethylene fibers. PBO (sold under the trade name "Zylon®") has a tensile strength of 5.8 GPa and a density of 1.54 g/cm3 [12, 13]. High-strength polyethylene fiber (sold under the trade name Spectra-2000) has an ultimate strength of 4.0 GPa and a density of 0.97 g/cm3 [11]. Assuming an engineering factor of 2.5, the allowable load strength for the Spectra-2000 fiber is 1.6 GPa. For the example case of a launch to lunar orbit, 1.68 km/second, the required acceleration is 5.7 g (56 m/s²). To carry a thousand kilogram payload, the force will be 56000 N. This will require a cable cross-section of 0.35 square centimeters at the tip.
Since the cable must have additional cross-section to carry its own weight as well as the end mass, the cable must now be made to increase in cross-section from the tip to a wider cross-section toward the hub. This taper increases the cable mass. The cable mass is now about 2500 kg, no longer less than the mass of the launched object, but still a value which is feasible for an engineering system.
How was this analysis of the cable done? The reference paper made this calculation for escape velocity from the Moon. Could these materials take much more load without snapping under their own weight in such a scenario?
the-moon materials analysis performance launch-system
kim holder
You need to clarify a few things to make this question answerable: 1. What orbit do you want to achieve? This will determine the release velocity you have to achieve. 2. What's the mass of the object you're trying to sling? This, combined with the orbit question, will determine the requirements of the cables. – mLuby Apr 28 '15 at 0:46
@MichaelLuby As I understand it a cable with current materials that could support its own weight and launch something into orbit is at the limit of what is possible, so I was pretty vague about specifics. But your point is well taken. Let me think about it and make an edit. – kim holder Apr 28 '15 at 2:00
There was a NASA Space Flight thread on slinging stuff from the moon: forum.nasaspaceflight.com/index.php?topic=5420.0 – HopDavid Apr 28 '15 at 5:32
@HopDavid Just a note that they're NASASpaceFlight.com and it's just a name of a website. The way you wrote it suggests NASA associates with it. It doesn't. Misleading name doesn't really discredit it, it's IMO a good forum, I just thought to make that clear to anyone that doesn't know that. – TildalWave Apr 28 '15 at 7:19
@MichaelLuby No.
Feasibility of specific orbits, payload mass, even maximum radial acceleration and centrifugal force that would define suitability of such a launch system for humans is exactly what has yet to be established by analyzing performance of current high tensile strength cable technology. It's then pretty straightforward maths (numerical integration) and physics from then on. As in, you need this release speed (∆v) for that orbit, this cable length (ω²r) for that max acceleration, this much mass (M⋅a) for each release speed that the cable and payload could tolerate, and so on. – TildalWave Apr 28 '15 at 7:33
The paper does not describe how the calculations for the tether are done, but I can make a guess. We take a small piece of the tether with mass $\delta m$ and length $\delta r$, at distance $r$ from the center. The tether is spinning at an angular rate $\omega$, and has an ultimate tensile strength $\sigma_\text{max}$ and density $\rho$. The area $A$ is a function of $r$. We can write a relationship between the tension $T(r)$ on both sides of the piece: $$ T(r) - T(r+\delta r) = \delta m\ a_c = \delta m\ r \omega^2 $$ We can substitute $\delta m=\rho A\delta r$, and recast this as a differential equation: $$ -\frac{\partial T}{\partial r}\delta r = \delta r A(r) \rho r\omega^2 \\ \frac{\partial T}{\partial r} = -\rho\omega^2rA $$ Next we can substitute $T=\sigma A$: $$ T = \sigma A \\ \frac{\partial T}{\partial r} = \frac{\partial \sigma}{\partial r}A + \sigma\frac{\partial A}{\partial r} = -\rho\omega^2rA $$ There are two types of solution we have to consider:
Cable with Constant Cross-section
When the cable has a constant area, the differential equation becomes: $$ \frac{\partial \sigma}{\partial r} = -\rho\omega^2r \\ \sigma = -\rho\frac{\left(r\omega\right)^2}2 + C $$ The tension on the end of the cable, where a payload of mass $M$ is supported, is: $$ T(R) = M\omega^2 R \\ \sigma(R)A = M\omega^2 R $$ Setting $\sigma = M\omega^2 R/A$ at $r=R$ and
$\sigma=\sigma_\text{max}$ at $r=0$, we get: $$ \sigma_\text{max} - \rho\frac{(R\omega)^2}{2} = \frac{M \omega^2 R}{A} $$ Substituting in the edge velocity $V=\omega R$ and solving for $A$ we get a required cable area of: $$ A = \frac{MV^2}{R}\left(\sigma_\text{max} - \rho\frac{V^2}{2}\right)^{-1} $$ We can rewrite this in terms of the critical velocity $v_\text{crit}=\sqrt{2\sigma_\text{max}/\rho}$: $$ A = \frac{MV^2}{R\sigma_\text{max}\left(1-(V/v_\text{crit})^2\right)} $$ Now for some examples. I'll use the three speeds given in the paper:
Lunar Orbit: $1.68~\text{km}/\text{s}$
Lunar Escape Trajectory: $2.40~\text{km}/\text{s}$
Mars Injection Orbit: $3.84~\text{km}/\text{s}$
And I'll assume a Spectra cable:
$\sigma_\text{max}=3.6~\text{GPa}$
$\rho = 0.97~\text{g}/\text{cm}^3$
$v_\text{crit}=2.7~\text{km}/\text{s}$
The dashed lines represent the required area with a margin of $25\%$. Note that the fastest speed doesn't even appear on the plot: since it's faster than $v_\text{crit}$ the cable would snap under its own weight.
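These formulas are easy to evaluate numerically. The sketch below is mine, not from the paper; it uses the Spectra properties above, while the 50 km cable length and 1000 kg payload are assumptions taken from the question, not from this derivation.

```python
import math

# Spectra cable properties used above
SIGMA_MAX = 3.6e9   # ultimate tensile strength, Pa
RHO = 970.0         # density, kg/m^3

# Critical velocity: the tip speed at which the cable snaps under its own weight
V_CRIT = math.sqrt(2 * SIGMA_MAX / RHO)   # ~2.72 km/s

def area_constant(M, V, R):
    """Required cross-section (m^2) of an untapered cable of length R (m)
    spinning a payload of mass M (kg) at tip speed V (m/s).
    Returns None when V >= V_CRIT, where no finite area suffices."""
    margin = 1.0 - (V / V_CRIT) ** 2
    if margin <= 0.0:
        return None
    return M * V ** 2 / (R * SIGMA_MAX * margin)

M, R = 1000.0, 50e3   # assumed payload mass and cable length
for V in (1680.0, 2400.0, 3840.0):  # lunar orbit, lunar escape, Mars injection
    print(f"V = {V} m/s -> A = {area_constant(M, V, R)} m^2")
```

The Mars-injection case comes back as `None`, matching the observation that 3.84 km/s exceeds $v_\text{crit}$ for Spectra.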
Tapered Cable (Varying Cross-section)
When the cable is operating at maximum performance, $\sigma=\sigma_\text{max}$ along its whole length: $$ \frac{\partial A}{\partial r} = -\frac{\rho\omega^2}{\sigma_\text{max}}rA \\ A(r) = A(0) \exp\left\{-\frac{\rho \omega^2}{2\sigma_\text{max}}r^2\right\} \\ A(R) = A(0) \exp\left\{-\left(\frac V{v_\text{crit}}\right)^2\right\} $$ Thus, to support a mass $M$ at the end of the cable ($r=R$) we have: $$ \sigma_\text{max} A(R) = M\omega^2 R \\ \sigma_\text{max} A(0)\exp\left\{-\left(\frac V{v_\text{crit}}\right)^2\right\} = M\frac{V^2}{R} \\ A(0) = \frac{MV^2}{R\sigma_\text{max}}\exp\left\{\left(\frac V{v_\text{crit}}\right)^2\right\} $$ Our plot now looks something like: We can find the total mass of the cable by integrating $\delta m$: $$ \int \delta m = \int \rho A\ \delta r\\ = M\sqrt{\pi}\left(\frac{V}{v_\text{crit}}\right)e^{\left(\frac{V}{v_\text{crit}}\right)^2}\text{erf}\left(\frac{V}{v_\text{crit}}\right) $$ The ratio between cable mass and payload mass turns out to be purely a function of the ratio between tip velocity and the critical velocity. It looks like this: The tether works on the Moon, but we can see that on Mars or Earth where the escape speeds are two to five times higher, the tether system quickly becomes impractical with current materials. Even carbon nanotubes, with $v_\text{crit}=9.6~\text{km}/\text{s}$, would have a cable-to-payload mass ratio of over 7 launching from Earth.
2012rcampion
Your calculus skills surpass mine, but so far as I can see the math is sound. Hoping someone like Mark Adler will check it. Some very useful concepts, first time I'd seen Vcrit for tethers. – HopDavid Apr 28 '15 at 15:19
I have to admit your answer is unpleasant to me -- Some of my favorite day dreams are Clarke towers from asteroids.
For these shallow gravity wells, centrifugal force quickly becomes the dominant force as radius grows, and thus they are close to your sling model. I will have to revisit some of my sci-fi scenarios set on Ceres and Vesta. – HopDavid Apr 28 '15 at 15:22
@Hop Clarke tower == space elevator? Sorry, I'm not hip with the lingo here... – 2012rcampion Apr 28 '15 at 16:11
I do have an odd manner of speech that's sometimes opaque. Yes, Clarke tower == space elevator. hopsblog-hop.blogspot.com/2012/09/… In my spreadsheets lowering the tether foot will raise the tether top. (A spreadsheet based on equations in P. K. Aravind's paper.) I think this may be a way to look at taper ratios for very tall elevators from Vesta or Ceres. – HopDavid Apr 28 '15 at 16:28
A relatively high launch cable mass works to your advantage in one way: when you release your load the system immediately goes out of balance and shifts its rotation to the new center of mass. This can be managed by (for example) supporting your central hub on a sliding or articulating base, or by allowing the cable to slide through the base a certain distance. The higher your cable mass fraction, the smaller the displacement in CofM and therefore the easier/cheaper your post-launch displacement issue. – Kengineer Oct 19 '17 at 18:03
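The tapered-cable mass ratio derived in the answer above, $\sqrt{\pi}\,k\,e^{k^2}\operatorname{erf}(k)$ with $k=V/v_\text{crit}$, can be evaluated directly with the standard library's `math.erf`. The material numbers below are the ones quoted in the answer; the Earth escape speed of 11.2 km/s is my own assumption, not a figure from the answer's plots.

```python
import math

def mass_ratio(V, v_crit):
    """Cable-to-payload mass ratio for an ideally tapered sling cable:
    sqrt(pi) * k * exp(k**2) * erf(k), where k = V / v_crit."""
    k = V / v_crit
    return math.sqrt(math.pi) * k * math.exp(k * k) * math.erf(k)

# Spectra on the Moon, launching to lunar orbit at 1.68 km/s:
print(mass_ratio(1.68e3, 2.7e3))   # roughly 1: cable mass comparable to payload

# Carbon nanotube (v_crit ~ 9.6 km/s) at an assumed Earth escape speed:
print(mass_ratio(11.2e3, 9.6e3))   # a bit over 7, as stated in the answer
```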
MA066 Advanced Algebra I
by grant | posted in: Math, Undergraduate
Algebra defines, roughly, relationships. See: http://blog.ruofeidu.com/gradient-circulation-laplacian-divergence-jacobian-hessian-trace/
Matrix Form for Linear Regression
For the regression model $$Y = XA + B$$, where $$B$$ is the vector of residuals, the coefficients of the least squares regression are given by the matrix equation $$A = (X^T X)^{-1} X^T Y$$ and the sum of squared errors is $$B^T B$$.
Shape: descriptions like "upper-triangular", "symmetric" and "diagonal" describe the shape of the matrix, and influence their transformations.
rank(A) is the dimension of the vector space generated (or spanned) by its columns, which can be computed by reducing the matrix to row echelon form.
rank(AB) <= min(rank(A), rank(B))
tr(A) is the sum of the elements on the main diagonal.
tr(A+B) = tr(A) + tr(B)
Gradient is a multi-variable generalization of the derivative.
Divergence of a three-dimensional vector field is the extent to which the vector field flow behaves like a source at a given point.
Jacobian matrix is the matrix of all first-order partial derivatives of a vector-valued function.
Hessian matrix, or Hessian, is a square matrix of second-order partial derivatives of a scalar-valued function, or scalar field.
Homogeneous Coordinates
They are represented in an affine space, which consists of a vector space and a set of points: [x, y, z] => [x, y, z, 1], so the subtraction of two points yields a vector [x', y', z', 0].
Rotation in 3D
Euler angles depend on order: $R_x R_y R_z \neq R_z R_y R_x$
Quaternions do not have the gimbal-lock issue, take 4 scalars versus the 9 entries of a 3×3 rotation matrix, and are fast.
A linear transformation $$T: R^n \to R^m$$ is a mapping from n-dimensional space to m-dimensional space. Such a linear transformation can be associated with an $m\times n$ matrix.
Reflection in y: $$T(x, y) = (-x, y)$$ $$\begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix}$$
Shear in x: $$T(x, y) = (x + ky, y)$$ $$\begin{pmatrix} 1 & k \\ 0 & 1 \end{pmatrix}$$
Rotation by $\theta$: $$\begin{pmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{pmatrix}$$
The determinant is the "size" of the output transformation. If the input was a unit vector (representing area or volume of 1), the determinant is the size of the transformed area or volume. A determinant of 0 means the matrix is "destructive" and cannot be reversed (similar to multiplying by zero: information was lost).
If A is an invertible matrix, then $$A^{-1} = \frac{1}{det(A)} adj(A)$$
Eigenvector and Eigenvalue
The eigenvector and eigenvalue represent the "axes" of the transformation. Consider spinning a globe: every location faces a new direction, except the poles. An "eigenvector" is an input that doesn't change direction when it's run through the matrix (it points "along the axis"). And although the direction doesn't change, the size might. The eigenvalue is the amount the eigenvector is scaled up or down when going through the matrix.
Cramer's Rule
If a system of $n$ linear equations in $n$ variables has a coefficient matrix $A$ with a nonzero determinant, then the solution of the system is $$x_1 = \frac{det(A_1)}{det(A)}, x_2 = \frac{det(A_2)}{det(A)}, \dots, x_n = \frac{det(A_n)}{det(A)}$$ where $A_i$ is $A$ with its $i$-th column replaced by the right-hand-side vector.
Area of a Triangle in the XY-Plane
For a triangle with vertices $(x_1, y_1)$, $(x_2, y_2)$ and $(x_3, y_3)$: $$Area = \pm\frac{1}{2} \begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix} $$ with the sign chosen so the area is positive.
Finding the Volume of a Tetrahedron
For a tetrahedron with vertices $(x_i, y_i, z_i)$: $$\pm\frac{1}{6} \begin{vmatrix} x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \\ x_4 & y_4 & z_4 & 1 \end{vmatrix} $$
Cross Product
Cross product of two vectors is a vector perpendicular to both, in the direction defined by the right hand rule.
$$ u \times v = \begin{vmatrix} i & j & k \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \end{vmatrix} $$
$$ u \times v = - v \times u, \qquad \|u \times v\| = \|u\|\|v\| \sin \theta = \textrm{area of the parallelogram spanned by } u \textrm{ and } v $$
For example, in $R^3$: $$u \times v = (u_2 v_3 - u_3 v_2) i - (u_1v_3 - u_3v_1)j + (u_1v_2 - u_2v_1)k$$
In physics, the cross product can be used to measure torque, the moment of a force about a point, as shown in the figure below: when a force $F$ is applied at point $B$, the moment of $F$ about a point $A$ is given by $$M = \vec{AB} \times F$$ where $\vec{AB}$ represents the vector whose initial point is $A$ and whose terminal point is $B$. The magnitude of the moment measures the tendency of $F$ to rotate counterclockwise about an axis directed along the vector M.
Four points $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$, $(x_3, y_3, z_3)$ and $(x_4, y_4, z_4)$ are coplanar if and only if $$\begin{vmatrix} x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \\ x_4 & y_4 & z_4 & 1 \end{vmatrix} = 0 $$
The volume of a parallelepiped spanned by the vectors $a$, $b$ and $c$ is the absolute value of the scalar triple product $(a \times b) \cdot c$.
Three-Point Form of the Equation of a Plane
An equation of the plane passing through the distinct points $(x_1, y_1, z_1)$, $(x_2, y_2, z_2)$ and $(x_3, y_3, z_3)$ is given by $$\begin{vmatrix} x & y & z & 1 \\ x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \end{vmatrix} = 0 $$
http://blog.ruofeidu.com/gradient-circulation-laplacian-divergence-jacobian-hessian-trace/
http://homepage.ntu.edu.tw/~jryanwang/course/Mathematics%20for%20Management%20(undergraduate%20level)/Applications%20in%20Ch3.pdf
https://betterexplained.com/articles/linear-algebra-guide/
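A few of the identities above are easy to sanity-check numerically. The sketch below is plain Python (no libraries); the helper functions and sample points are my own, not from the notes. The 4×4 coplanarity determinant is checked in its equivalent triple-product form, since four points are coplanar exactly when the three edge vectors from one of them have zero scalar triple product.

```python
import math

def cross(u, v):
    """u x v in R^3, from the determinant expansion above."""
    return (u[1] * v[2] - u[2] * v[1],
            -(u[0] * v[2] - u[2] * v[0]),
            u[0] * v[1] - u[1] * v[0])

def norm(w):
    return math.sqrt(sum(c * c for c in w))

def triple(a, b, c):
    """Scalar triple product (a x b) . c, the signed parallelepiped volume."""
    return sum(x * y for x, y in zip(cross(a, b), c))

# Anticommutativity: u x v = -(v x u)
u, v = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
assert cross(u, v) == tuple(-c for c in cross(v, u))

# |u x v| = |u| |v| sin(theta) = area of the parallelogram spanned by u, v
dot = sum(a * b for a, b in zip(u, v))
theta = math.acos(dot / (norm(u) * norm(v)))
assert math.isclose(norm(cross(u, v)), norm(u) * norm(v) * math.sin(theta))

# Coplanarity: the triple product of the edge vectors from p1 vanishes
p1, p2, p3, p4 = (0, 0, 0), (1, 0, 0), (0, 1, 0), (3, -2, 0)  # all in z = 0
e2, e3, e4 = (tuple(b - a for a, b in zip(p1, p)) for p in (p2, p3, p4))
assert triple(e2, e3, e4) == 0

# Least squares A = (X^T X)^{-1} X^T Y for the line through (0,1),(1,2),(2,3),
# solved via the 2x2 normal equations; the exact fit is y = 1 + 1*x.
xs, ys = [0.0, 1.0, 2.0], [1.0, 2.0, 3.0]
n, sx, sxx = len(xs), sum(xs), sum(x * x for x in xs)
sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
d = n * sxx - sx * sx
intercept = (sxx * sy - sx * sxy) / d
slope = (n * sxy - sx * sy) / d
assert math.isclose(intercept, 1.0) and math.isclose(slope, 1.0)
print("all identities check out")
```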
CommonCrawl
Galton–Watson process The Galton–Watson process is a branching stochastic process arising from Francis Galton's statistical investigation of the extinction of family names. The process models family names as patrilineal (passed from father to son), while offspring are randomly either male or female, and names become extinct if the family name line dies out (holders of the family name die without male descendants). This is an accurate description of Y chromosome transmission in genetics, and the model is thus useful for understanding human Y-chromosome DNA haplogroups. Likewise, since mitochondria are inherited only on the maternal line, the same mathematical formulation describes transmission of mitochondria. The formula is of limited usefulness in understanding actual family name distributions, since in practice family names change for many other reasons, and dying out of name line is only one factor. History There was concern amongst the Victorians that aristocratic surnames were becoming extinct. In 1869, Galton published Hereditary Genius, in which he treated the extinction of different social groups. Galton originally posed a mathematical question regarding the distribution of surnames in an idealized population in an 1873 issue of The Educational Times:[1] A large nation, of whom we will only concern ourselves with adult males, N in number, and who each bear separate surnames colonise a district. Their law of population is such that, in each generation, a0 per cent of the adult males have no male children who reach adult life; a1 have one such male child; a2 have two; and so on up to a5 who have five. Find what proportion of their surnames will have become extinct after r generations; and how many instances there will be of the surname being held by m persons. 
The Reverend Henry William Watson replied with a solution.[2] Together, they then wrote an 1874 paper titled "On the probability of the extinction of families" in the Journal of the Anthropological Institute of Great Britain and Ireland (now the Journal of the Royal Anthropological Institute).[3] Galton and Watson appear to have derived their process independently of the earlier work by I. J. Bienaymé; see [4]. Their solution is incomplete: according to it, all family names go extinct with probability 1. Bienaymé published the answer to the problem in 1845,[5] with a promise to publish the derivation later, though there is no known publication of his solution. Cournot published a solution in 1847, in chapter 36 of his book.[6] The problem in his formulation is: Consider a gambler who buys lotteries. Each lottery costs 1 ecu and pays $0,1,...,n$ with probabilities $p_{0},...,p_{n}$. The gambler always spends all their money to buy lotteries. If the gambler starts with $k$ ecus, what's the probability of going bankrupt? Ronald A. Fisher in 1922 studied the same problem formulated in terms of genetics. Instead of the extinction of family names, he studied the probability for a mutant gene to eventually disappear in a large population.[7] Haldane solved the problem in 1927.[8] Agner Krarup Erlang was a member of the prominent Krarup family, which was going extinct. In 1929, he published the same problem posthumously (his obituary appears beside the problem). Erlang died childless. Steffensen solved it in 1930. For a detailed history, see Kendall (1966[9] and 1975[10]) and [11] and also Section 17 of [12].
Then the simplest substantial mathematical conclusion is that if the average number of a man's sons is 1 or less, then their surname will almost surely die out, and if it is more than 1, then there is more than zero probability that it will survive for any given number of generations. Modern applications include the survival probabilities for a new mutant gene, or the initiation of a nuclear chain reaction, or the dynamics of disease outbreaks in their first generations of spread, or the chances of extinction of small population of organisms; as well as explaining (perhaps closest to Galton's original interest) why only a handful of males in the deep past of humanity now have any surviving male-line descendants, reflected in a rather small number of distinctive human Y-chromosome DNA haplogroups. A corollary of high extinction probabilities is that if a lineage has survived, it is likely to have experienced, purely by chance, an unusually high growth rate in its early generations at least when compared to the rest of the population. Mathematical definition A Galton–Watson process is a stochastic process {Xn} which evolves according to the recurrence formula X0 = 1 and $X_{n+1}=\sum _{j=1}^{X_{n}}\xi _{j}^{(n)}$ where $\{\xi _{j}^{(n)}:n,j\in \mathbb {N} \}$ is a set of independent and identically-distributed natural number-valued random variables. In the analogy with family names, Xn can be thought of as the number of descendants (along the male line) in the nth generation, and $\xi _{j}^{(n)}$ can be thought of as the number of (male) children of the jth of these descendants. The recurrence relation states that the number of descendants in the n+1st generation is the sum, over all nth generation descendants, of the number of children of that descendant. The extinction probability (i.e. the probability of final extinction) is given by $\lim _{n\to \infty }\Pr(X_{n}=0).\,$ This is clearly equal to zero if each member of the population has exactly one descendant. 
Excluding this case (usually called the trivial case) there exists a simple necessary and sufficient condition, which is given in the next section. Extinction criterion for Galton–Watson process In the non-trivial case, the probability of final extinction is equal to 1 if E{ξ1} ≤ 1 and strictly less than 1 if E{ξ1} > 1. The process can be treated analytically using the method of probability generating functions. If the number of children ξ j at each node follows a Poisson distribution with parameter λ, a particularly simple recurrence can be found for the total extinction probability xn for a process starting with a single individual at time n = 0: $x_{n+1}=e^{\lambda (x_{n}-1)},\,$ giving the above curves. Bisexual Galton–Watson process In the classical family surname Galton–Watson process described above, only men need to be considered, since only males transmit their family name to descendants. This effectively means that reproduction can be modeled as asexual. (Likewise, if mitochondrial transmission is analyzed, only women need to be considered, since only females transmit their mitochondria to descendants.) A model more closely following actual sexual reproduction is the so-called "bisexual Galton–Watson process", where only couples reproduce. (Bisexual in this context refers to the number of sexes involved, not sexual orientation.) In this process, each child is supposed as male or female, independently of each other, with a specified probability, and a so-called "mating function" determines how many couples will form in a given generation. As before, reproduction of different couples are considered to be independent of each other. Now the analogue of the trivial case corresponds to the case of each male and female reproducing in exactly one couple, having one male and one female descendant, and that the mating function takes the value of the minimum of the number of males and females (which are then the same from the next generation onwards). 
Since the total reproduction within a generation now depends strongly on the mating function, there exists in general no simple necessary and sufficient condition for final extinction as there is in the classical Galton–Watson process. However, excluding the trivial case, the concept of the averaged reproduction mean (Bruss (1984)) allows for a general sufficient condition for final extinction, treated in the next section. Extinction criterion If in the non-trivial case the averaged reproduction mean per couple stays bounded over all generations and will not exceed 1 for a sufficiently large population size, then the probability of final extinction is always 1. Examples Citing historical examples of the Galton–Watson process is complicated because the history of family names often deviates significantly from the theoretical model. Notably, new names can be created, existing names can be changed over a person's lifetime, and people historically have often assumed names of unrelated persons, particularly nobility. Thus, a small number of family names at present is not in itself evidence that names have become extinct over time, or that they did so through the dying out of family name lines – that requires both that there were more names in the past and that they disappeared because their lines died out, rather than because the name changed for other reasons, such as vassals assuming the name of their lord. Chinese names are a well-studied example of surname extinction: there are currently only about 3,100 surnames in use in China, compared with close to 12,000 recorded in the past,[13][14] with 22% of the population sharing the names Li, Wang and Zhang (numbering close to 300 million people), and the top 200 names covering 96% of the population.
Names have changed or become extinct for various reasons such as people taking the names of their rulers, orthographic simplifications, taboos against using characters from an emperor's name, among others.[14] While family name lines dying out may be a factor in surname extinction, it is by no means the only or even a significant factor. Indeed, the most significant factor affecting the surname frequency is other ethnic groups identifying as Han and adopting Han names.[14] Further, while new names have arisen for various reasons, this has been outweighed by old names disappearing.[14] By contrast, some nations have adopted family names only recently. This means both that they have not experienced surname extinction for an extended period, and that the names were adopted when the nation had a relatively large population, rather than the smaller populations of ancient times.[14] Further, these names have often been chosen creatively and are very diverse. Examples include: • Japanese names, which in general use date only to the Meiji Restoration in the late 19th century (when the population was over 30,000,000), have over 100,000 family names, surnames are very varied, and the government restricts married couples to using the same surname. • Many Dutch names have included a formal family name only since the Napoleonic Wars in the early 19th century. Earlier, surnames originated from patronyms[15] (e.g., Jansen = John's son), personal qualities (e.g., de Rijke = the rich one), geographical locations (e.g., van Rotterdam), and occupations (e.g., Visser = the fisherman), sometimes even combined (e.g., Jan Jansz van Rotterdam). There are over 68,000 Dutch family names. • Thai names have included a family name only since 1920, and only a single family can use a given family name; hence there are a great number of Thai names. Furthermore, Thai people change their family names with some frequency, complicating the analysis.
On the other hand, some examples of high concentration of family names are not primarily due to the Galton–Watson process: • Vietnamese names have about 100 family names, with 60% of the population sharing three family names. The name Nguyễn alone is estimated to be used by almost 40% of the Vietnamese population, and 90% share 15 names. However, as the history of the Nguyễn name makes clear, this is in no small part due to names being forced on people or adopted for reasons unrelated to genetic relation. See also • Branching process • Resource-dependent branching process • Pedigree collapse References 1. Francis Galton (1873-03-01). "Problem 4001" (PDF). Educational Times. 25 (143): 300. Archived from the original (PDF) on 2017-01-23. 2. Henry William Watson (1873-08-01). "Problem 4001" (PDF). Educational Times. 26 (148): 115. Archived from the original (PDF) on 2016-12-01. A first offering submitted by G.S. Carr, according to Galton, was "totally erroneous"; see G. S. Carr (1873-04-01). "Problem 4001" (PDF). Educational Times. 26 (144): 17. Archived from the original (PDF) on 2017-08-03. 3. Galton, F., & Watson, H. W. (1875). "On the probability of the extinction of families". Journal of the Royal Anthropological Institute, 4, 138–144. 4. Heyde, C. C.; Seneta, E. (1972). "Studies in the History of Probability and Statistics. XXXI. The simple branching process, a turning point test and a fundamental inequality: A historical note on I. J. Bienaymé". Biometrika. 59 (3): 680–683. doi:10.1093/biomet/59.3.680. ISSN 0006-3444. 5. Bienayme, I.J. (1845). De la loi de multiplication et de la durée des families. Soc. Philomath. Paris, Extraits, Sér. 5, 3 (Quoted from Heyde & Seneta (1972)). 6. Cournot, A. A. (Antoine Augustin) (1847). De l'origine et des limites de la correspondance entre l'algèbre et la géométrie. University of Illinois Urbana-Champaign. Paris : L. Hachette. 7. Fisher, R. A. (1922). "XXI.—On the Dominance Ratio".
Proceedings of the Royal Society of Edinburgh. 42: 321–341. doi:10.1017/S0370164600023993. ISSN 0370-1646. 8. Haldane, J. B. S. (July 1927). "A Mathematical Theory of Natural and Artificial Selection, Part V: Selection and Mutation". Mathematical Proceedings of the Cambridge Philosophical Society. 23 (7): 838–844. Bibcode:1927PCPS...23..838H. doi:10.1017/S0305004100015644. ISSN 1469-8064. S2CID 86716613. 9. Kendall, David G. (1966). "Branching Processes Since 1873". Journal of the London Mathematical Society. s1-41 (1): 385–406. doi:10.1112/jlms/s1-41.1.385. 10. Kendall, David G. (November 1975). "The Genealogy of Genealogy Branching Processes before (and after) 1873". Bulletin of the London Mathematical Society. 7 (3): 225–253. doi:10.1112/blms/7.3.225. 11. Albertsen, K. (1995). "The Extinction of Families". International Statistical Review / Revue Internationale de Statistique. 63 (2): 234–239. doi:10.2307/1403617. ISSN 0306-7734. JSTOR 1403617. S2CID 124630211. 12. Simkin, M. V.; Roychowdhury, V. P. (2011-05-01). "Re-inventing Willis". Physics Reports. 502 (1): 1–35. arXiv:physics/0601192. Bibcode:2011PhR...502....1S. doi:10.1016/j.physrep.2010.12.004. ISSN 0370-1573. S2CID 88517297. 13. "O rare John Smith", The Economist (US ed.), p. 32, June 3, 1995, Only 3,100 surnames are now in use in China [...] compared with nearly 12,000 in the past. An 'evolutionary dwindling' of surnames is common to all societies. [...] [B]ut in China, [Du] says, where surnames have been in use far longer than in most other places, the paucity has become acute. 14. Du, Ruofu; Yida, Yuan; Hwang, Juliana; Mountain, Joanna L.; Cavalli-Sforza, L. Luca (1992), Chinese Surnames and the Genetic Differences between North and South China (PDF), Journal of Chinese Linguistics Monograph Series, pp. 
18–22 (History of Chinese surnames and sources of data for the present research), archived from the original (PDF) on 2012-11-20, also part of Morrison Institute for Population and Resource Studies Working papers 15. "Patronym - Behind the Name". Further reading • F. Thomas Bruss (1984). "A Note on Extinction Criteria for Bisexual Galton–Watson Processes". Journal of Applied Probability 21: 915–919. • C C Heyde and E Seneta (1977). I.J. Bienayme: Statistical Theory Anticipated. Berlin, Germany. • Kendall, D. G. (1966). "Branching Processes Since 1873". Journal of the London Mathematical Society. s1-41: 385–406. doi:10.1112/jlms/s1-41.1.385. ISSN 0024-6107. • Kendall, D. G. (1975). "The Genealogy of Genealogy Branching Processes before (and after) 1873". Bulletin of the London Mathematical Society. 7 (3): 225–253. doi:10.1112/blms/7.3.225. ISSN 0024-6093. External links • "Survival of a Single Mutant" by Peter M.
Lee of the University of York • The simple Galton-Watson process: Classical approach, University of Muenster
Wikipedia
\begin{definition}[Definition:Normal to Curve] Let $C$ be a curve embedded in the plane. The '''normal to $C$''' at a point $P$ is defined as the straight line which lies perpendicular to the tangent at $P$ and in the same plane as $C$. \end{definition}
ProofWiki
Who invented the push-pull construction? I learned about the push-pull construction from a video lecture by Freedman in which it is explained starting around 39:08. It is somewhat long and technical to describe in detail, but the main idea is that it is a technique to control a homeomorphism in one linear direction. The example application is to show that if $X$ and $Y$ are compact metric spaces such that $X\times\Bbb R$ is homeomorphic to $Y\times\Bbb R$, then $X\times S^1$ and $Y\times S^1$ are also homeomorphic. The way this is done is by taking a homeomorphism $h\colon X\times\Bbb R\to Y\times\Bbb R$ and using the push-pull construction to modify it to another homeomorphism between the two spaces which is periodic in the $\Bbb R$ coordinate, so that it descends to a homeomorphism $X\times S^1\to Y\times S^1$. Morse used the construction in 1960 to establish the Schoenflies conjecture "Every bicollared embedding $S^d\to\Bbb R^{d+1}$ extends to an embedding $D^{d+1}\to\Bbb R^{d+1}$". In particular he used the push-pull construction to show that every such bicollared embedding admits a so-called flat spot after composing with a self-homeomorphism of $\Bbb R^{d+1}$, and the conjecture had already been established by Mazur for bicollared embeddings with a flat spot. However Freedman mentions that the push-pull construction was not introduced by Morse, and in fact that it was already well known by the time he used it to settle the Schoenflies conjecture, hence my question: who invented the push-pull construction? geometry topology Alessandro Codenotti
Stern mentions in the MathSciNet review of Kirby-Siebenmann (1977) that Brown's proof "marked the advent of push-pull topology". $\endgroup$ – Conifold Apr 19 '20 at 9:31
CommonCrawl
Penrose Tiling Non-periodic tiling of the plane A Penrose tiling with rhombi exhibiting fivefold symmetry A Penrose tiling is an example of an aperiodic tiling. Here, a tiling is a covering of the plane by non-overlapping polygons or other shapes, and aperiodic means that shifting any tiling with these shapes by any finite distance, without rotation, cannot produce the same tiling. However, despite their lack of translational symmetry, Penrose tilings may have both reflection symmetry and fivefold rotational symmetry. Penrose tilings are named after mathematician and physicist Roger Penrose, who investigated them in the 1970s. There are several different variations of Penrose tilings with different tile shapes. The original form of Penrose tiling used tiles of four different shapes, but this was later reduced to only two shapes: either two different rhombi, or two different quadrilaterals called kites and darts. The Penrose tilings are obtained by constraining the ways in which these shapes are allowed to fit together. This may be done in several different ways, including matching rules, substitution tiling or finite subdivision rules, cut and project schemes, and coverings. Even constrained in this manner, each variation yields infinitely many different Penrose tilings. Roger Penrose in the foyer of the Mitchell Institute for Fundamental Physics and Astronomy, Texas A&M University, standing on a floor with a Penrose tiling Penrose tilings are self-similar: they may be converted to equivalent Penrose tilings with different sizes of tiles, using processes called inflation and deflation. The pattern represented by every finite patch of tiles in a Penrose tiling occurs infinitely many times throughout the tiling.
They are quasicrystals: implemented as a physical structure a Penrose tiling will produce diffraction patterns with Bragg peaks and five-fold symmetry, revealing the repeated patterns and fixed orientations of its tiles.[1] The study of these tilings has been important in the understanding of physical materials that also form quasicrystals.[2] Penrose tilings have also been applied in architecture and decoration, as in the floor tiling shown. Periodic and aperiodic tilings Figure 1. Part of a periodic tiling with two prototiles Covering a flat surface ("the plane") with some pattern of geometric shapes ("tiles"), with no overlaps or gaps, is called a tiling. The most familiar tilings, such as covering a floor with squares meeting edge-to-edge, are examples of periodic tilings. If a square tiling is shifted by the width of a tile, parallel to the sides of the tile, the result is the same pattern of tiles as before the shift. A shift (formally, a translation) that preserves the tiling in this way is called a period of the tiling. A tiling is called periodic when it has periods that shift the tiling in two different directions.[3] The tiles in the square tiling have only one shape, and it is common for other tilings to have only a finite number of shapes. These shapes are called prototiles, and a set of prototiles is said to admit a tiling or tile the plane if there is a tiling of the plane using only these shapes. That is, each tile in the tiling must be congruent to one of these prototiles.[4] A tiling that has no periods is non-periodic. 
A set of prototiles is said to be aperiodic if all of its tilings are non-periodic, and in this case its tilings are also called aperiodic tilings.[5] Penrose tilings are among the simplest known examples of aperiodic tilings of the plane by finite sets of prototiles.[3] Earliest aperiodic tilings An aperiodic set of Wang dominoes.[6] The subject of aperiodic tilings received new interest in the 1960s when logician Hao Wang noted connections between decision problems and tilings.[7] In particular, he introduced tilings by square plates with colored edges, now known as Wang dominoes or tiles, and posed the "Domino Problem": to determine whether a given set of Wang dominoes could tile the plane with matching colors on adjacent domino edges. He observed that if this problem were undecidable, then there would have to exist an aperiodic set of Wang dominoes. At the time, this seemed implausible, so Wang conjectured no such set could exist. Robinson's six prototiles Wang's student Robert Berger proved that the Domino Problem was undecidable (so Wang's conjecture was incorrect) in his 1964 thesis,[8] and obtained an aperiodic set of 20,426 Wang dominoes.[9] He also described a reduction to 104 such prototiles; the latter did not appear in his published monograph,[10] but in 1968, Donald Knuth detailed a modification of Berger's set requiring only 92 dominoes.[11] The color matching required in a tiling by Wang dominoes can easily be achieved by modifying the edges of the tiles like jigsaw puzzle pieces so that they can fit together only as prescribed by the edge colorings.[12] Raphael Robinson, in a 1971 paper[13] which simplified Berger's techniques and undecidability proof, used this technique to obtain an aperiodic set of just six prototiles.[14] Development of the Penrose tilings The first Penrose tiling (tiling P1 below) is an aperiodic set of six prototiles, introduced by Roger Penrose in a 1974 paper,[16] based on pentagons rather than squares.
Any attempt to tile the plane with regular pentagons necessarily leaves gaps, but Johannes Kepler showed, in his 1619 work Harmonices Mundi, that these gaps can be filled using pentagrams (star polygons), decagons and related shapes.[17] Traces of these ideas can also be found in the work of Albrecht Dürer.[18] Acknowledging inspiration from Kepler, Penrose found matching rules for these shapes, obtaining an aperiodic set. These matching rules can be imposed by decorations of the edges, as with the Wang tiles. Penrose's tiling can be viewed as a completion of Kepler's finite Aa pattern.[19] A non-Penrose tiling by pentagons and thin rhombs in the early 18th-century Pilgrimage Church of Saint John of Nepomuk at Zelená hora, Czech Republic Penrose subsequently reduced the number of prototiles to two, discovering the kite and dart tiling (tiling P2 below) and the rhombus tiling (tiling P3 below).[20] The rhombus tiling was independently discovered by Robert Ammann in 1976.[21] Penrose and John H. Conway investigated the properties of Penrose tilings, and discovered that a substitution property explained their hierarchical nature; their findings were publicized by Martin Gardner in his January 1977 "Mathematical Games" column in Scientific American.[22] In 1981, N. G. De Bruijn provided two different methods to construct Penrose tilings. De Bruijn's "multigrid method" obtains the Penrose tilings as the dual graphs of arrangements of five families of parallel lines. In his "cut and project method", Penrose tilings are obtained as two-dimensional projections from a five-dimensional cubic structure. 
In these approaches, the Penrose tiling is viewed as a set of points, its vertices, while the tiles are geometrical shapes obtained by connecting vertices with edges.[23] Penrose tilings A P1 tiling using Penrose's original set of six prototiles The three types of Penrose tiling, P1–P3, are described individually below.[24] They have many common features: in each case, the tiles are constructed from shapes related to the pentagon (and hence to the golden ratio), but the basic tile shapes need to be supplemented by matching rules in order to tile aperiodically. These rules may be described using labeled vertices or edges, or patterns on the tile faces; alternatively, the edge profile can be modified (e.g. by indentations and protrusions) to obtain an aperiodic set of prototiles.[9][25] Original pentagonal Penrose tiling (P1) Penrose's first tiling uses pentagons and three other shapes: a five-pointed "star" (a pentagram), a "boat" (roughly 3/5 of a star) and a "diamond" (a thin rhombus).[26] To ensure that all tilings are non-periodic, there are matching rules that specify how tiles may meet each other, and there are three different types of matching rule for the pentagonal tiles. Treating these three types as different prototiles gives a set of six prototiles overall. It is common to indicate the three different types of pentagonal tiles using three different colors, as in the figure above right.[27] Kite and dart tiling (P2) Part of the plane covered by Penrose tiling of type P2 (kite and dart). Created by applying several deflations, see section below. Penrose's second tiling uses quadrilaterals called the "kite" and "dart", which may be combined to make a rhombus. However, the matching rules prohibit such a combination.[28] Both the kite and dart are composed of two triangles, called Robinson triangles, after 1975 notes by Robinson.[29] Kite and dart tiles (top) and the seven possible vertex figures in a P2 tiling. 
The kite is a quadrilateral whose four interior angles are 72, 72, 72, and 144 degrees. The kite may be bisected along its axis of symmetry to form a pair of acute Robinson triangles (with angles of 36, 72 and 72 degrees). The dart is a non-convex quadrilateral whose four interior angles are 36, 72, 36, and 216 degrees. The dart may be bisected along its axis of symmetry to form a pair of obtuse Robinson triangles (with angles of 36, 36 and 108 degrees), which are smaller than the acute triangles. The matching rules can be described in several ways. One approach is to color the vertices (with two colors, e.g., black and white) and require that adjacent tiles have matching vertices.[30] Another is to use a pattern of circular arcs (as shown above left in green and red) to constrain the placement of tiles: when two tiles share an edge in a tiling, the patterns must match at these edges.[20] These rules often force the placement of certain tiles: for example, the concave vertex of any dart is necessarily filled by two kites. The corresponding figure (center of the top row in the lower image on the left) is called an "ace" by Conway; although it looks like an enlarged kite, it does not tile in the same way.[31] Similarly the concave vertex formed when two kites meet along a short edge is necessarily filled by two darts (bottom right). 
In fact, there are only seven possible ways for the tiles to meet at a vertex; two of these figures – namely, the "star" (top left) and the "sun" (top right) – have 5-fold dihedral symmetry (by rotations and reflections), while the remainder have a single axis of reflection (vertical in the image).[32] Apart from the ace and the sun, all of these vertex figures force the placement of additional tiles.[33] Rhombus tiling (P3) Matching rule for Penrose rhombs using circular arcs or edge modifications to enforce the tiling rules Matching rule for Penrose rhombs using parabolic edges to enforce the tiling rules A Penrose Tiling using Penrose Rhombuses with parabolic edges The third tiling uses a pair of rhombuses (often referred to as "rhombs" in this context) with equal sides but different angles.[9] Ordinary rhombus-shaped tiles can be used to tile the plane periodically, so restrictions must be made on how tiles can be assembled: no two tiles may form a parallelogram, as this would allow a periodic tiling, but this constraint is not sufficient to force aperiodicity, as figure 1 above shows. There are two kinds of tile, both of which can be decomposed into Robinson triangles.[29] The thin rhomb t has four corners with angles of 36, 144, 36, and 144 degrees. The t rhomb may be bisected along its short diagonal to form a pair of acute Robinson triangles. The thick rhomb T has angles of 72, 108, 72, and 108 degrees. The T rhomb may be bisected along its long diagonal to form a pair of obtuse Robinson triangles; in contrast to the P2 tiling, these are larger than the acute triangles. The matching rules distinguish sides of the tiles, and entail that tiles may be juxtaposed in certain particular ways but not in others. Two ways to describe these matching rules are shown in the image on the right. In one form, tiles must be assembled such that the curves on the faces match in color and position across an edge. 
In the other, tiles must be assembled such that the bumps on their edges fit together.[9] There are 54 cyclically ordered combinations of such angles that add up to 360 degrees at a vertex, but the rules of the tiling allow only seven of these combinations to appear (although one of these arises in two ways).[34] The various combinations of angles and facial curvature allow construction of arbitrarily complex tiles, such as the Penrose chickens.[35] Features and constructions Golden ratio and local pentagonal symmetry Several properties and common features of the Penrose tilings involve the golden ratio φ = (1 + √5)/2 (approximately 1.618).[29][30] This is the ratio of chord lengths to side lengths in a regular pentagon, and satisfies φ = 1 + 1/φ. Pentagon with an inscribed thick rhomb (light), acute Robinson triangles (lightly shaded) and a small obtuse Robinson triangle (darker). Dotted lines give additional edges for inscribed kites and darts. Consequently, the ratio of the lengths of long sides to short sides in the (isosceles) Robinson triangles is φ:1. It follows that the ratio of long side lengths to short in both kite and dart tiles is also φ:1, as are the length ratios of sides to the short diagonal in the thin rhomb t, and of long diagonal to sides in the thick rhomb T. In both the P2 and P3 tilings, the ratio of the area of the larger Robinson triangle to the smaller one is φ:1, hence so are the ratios of the areas of the kite to the dart, and of the thick rhomb to the thin rhomb. (Both larger and smaller obtuse Robinson triangles can be found in the pentagon on the left: the larger triangles at the top – the halves of the thick rhomb – have linear dimensions scaled up by φ compared to the small shaded triangle at the base, and so the ratio of areas is φ²:1.)
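These pentagon and triangle ratios can be verified numerically from the angles stated above; the following check is illustrative and not part of the original article:

```python
import math

phi = (1 + math.sqrt(5)) / 2                     # golden ratio, about 1.618

# phi satisfies phi = 1 + 1/phi, equivalently phi^2 = phi + 1
assert abs(phi - (1 + 1 / phi)) < 1e-12

# In a regular pentagon a chord subtends 144 degrees at the circumcenter and
# a side subtends 72, so chord/side = sin(72°)/sin(36°), which equals phi.
chord_over_side = math.sin(math.radians(72)) / math.sin(math.radians(36))
assert abs(chord_over_side - phi) < 1e-12

# Similar triangles with linear dimensions scaled by phi have areas in the
# ratio phi^2 : 1 = (phi + 1) : 1.
assert abs(phi ** 2 - (phi + 1)) < 1e-12
print(chord_over_side)
```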
Any Penrose tiling has local pentagonal symmetry, in the sense that there are points in the tiling surrounded by a symmetric configuration of tiles: such configurations have fivefold rotational symmetry about the center point, as well as five mirror lines of reflection symmetry passing through the point, a dihedral symmetry group.[9] This symmetry will generally preserve only a patch of tiles around the center point, but the patch can be very large: Conway and Penrose proved that whenever the colored curves on the P2 or P3 tilings close in a loop, the region within the loop has pentagonal symmetry, and furthermore, in any tiling, there are at most two such curves of each color that do not close up.[36] There can be at most one center point of global fivefold symmetry: if there were more than one, then rotating each about the other would yield two closer centers of fivefold symmetry, which leads to a mathematical contradiction.[37] There are only two Penrose tilings (of each type) with global pentagonal symmetry: for the P2 tiling by kites and darts, the center point is either a "sun" or "star" vertex.[38] Inflation and deflation A pentagon decomposed into six smaller pentagons (half a dodecahedral net) with gaps Many of the common features of Penrose tilings follow from a hierarchical pentagonal structure given by substitution rules: this is often referred to as inflation and deflation, or composition and decomposition, of tilings or (collections of) tiles.[9][22][39] The substitution rules decompose each tile into smaller tiles of the same shape as those used in the tiling (and thus allow larger tiles to be "composed" from smaller ones). 
This shows that the Penrose tiling has a scaling self-similarity, and so can be thought of as a fractal, using the same process as the pentaflake.[40] Penrose originally discovered the P1 tiling in this way, by decomposing a pentagon into six smaller pentagons (one half of a net of a dodecahedron) and five half-diamonds; he then observed that when he repeated this process the gaps between pentagons could all be filled by stars, diamonds, boats and other pentagons.[26] By iterating this process indefinitely he obtained one of the two P1 tilings with pentagonal symmetry.[9][19] Robinson triangle decompositions Robinson triangles and their decompositions The substitution method for both P2 and P3 tilings can be described using Robinson triangles of different sizes. The Robinson triangles arising in P2 tilings (by bisecting kites and darts) are called A-tiles, while those arising in the P3 tilings (by bisecting rhombs) are called B-tiles.[29] The smaller A-tile, denoted AS, is an obtuse Robinson triangle, while the larger A-tile, AL, is acute; in contrast, a smaller B-tile, denoted BS, is an acute Robinson triangle, while the larger B-tile, BL, is obtuse. Concretely, if AS has side lengths (1, 1, φ), then AL has side lengths (φ, φ, 1). B-tiles can be related to such A-tiles in two ways: If BS has the same size as AL then BL is an enlarged version φAS of AS, with side lengths (φ, φ, φ² = 1 + φ) – this decomposes into an AL tile and AS tile joined along a common side of length 1. If instead BL is identified with AS, then BS is a reduced version (1/φ)AL of AL with side lengths (1/φ, 1/φ, 1) – joining a BS tile and a BL tile along a common side of length 1 then yields (a decomposition of) an AL tile. In these decompositions, there appears to be an ambiguity: Robinson triangles may be decomposed in two ways, which are mirror images of each other in the (isosceles) axis of symmetry of the triangle. In a Penrose tiling, this choice is fixed by the matching rules.
Furthermore, the matching rules also determine how the smaller triangles in the tiling compose to give larger ones.[29] Partial inflation of star to yield rhombs, and of a collection of rhombs to yield an ace. It follows that the P2 and P3 tilings are mutually locally derivable: a tiling by one set of tiles can be used to generate a tiling by another. For example, a tiling by kites and darts may be subdivided into A-tiles, and these can be composed in a canonical way to form B-tiles and hence rhombs.[15] The P2 and P3 tilings are also both mutually locally derivable with the P1 tiling (see figure 2 above).[41] The decomposition of B-tiles into A-tiles may be written BS = AL, BL = AL + AS (assuming the larger size convention for the B-tiles), which can be summarized in a substitution matrix equation:[42] {\displaystyle {\begin{pmatrix}B_{L}\\B_{S}\end{pmatrix}}={\begin{pmatrix}1&1\\1&0\end{pmatrix}}{\begin{pmatrix}A_{L}\\A_{S}\end{pmatrix}}\,.} Combining this with the decomposition of enlarged φA-tiles into B-tiles yields the substitution {\displaystyle {\begin{pmatrix}\varphi A_{L}\\\varphi A_{S}\end{pmatrix}}={\begin{pmatrix}1&1\\1&0\end{pmatrix}}{\begin{pmatrix}B_{L}\\B_{S}\end{pmatrix}}={\begin{pmatrix}2&1\\1&1\end{pmatrix}}{\begin{pmatrix}A_{L}\\A_{S}\end{pmatrix}}\,,} so that the enlarged tile φAL decomposes into two AL tiles and one AS tile. The matching rules force a particular substitution: the two AL tiles in a φAL tile must form a kite, and thus a kite decomposes into two kites and two half-darts, and a dart decomposes into a kite and two half-darts.[43][44] Enlarged φB-tiles decompose into B-tiles in a similar way (via φA-tiles).
Composition and decomposition can be iterated, so that, for example {\displaystyle \varphi ^{n}{\begin{pmatrix}A_{L}\\A_{S}\end{pmatrix}}={\begin{pmatrix}2&1\\1&1\end{pmatrix}}^{n}{\begin{pmatrix}A_{L}\\A_{S}\end{pmatrix}}\,.} The number of kites and darts in the nth iteration of the construction is determined by the nth power of the substitution matrix: {\displaystyle {\begin{pmatrix}2&1\\1&1\end{pmatrix}}^{n}={\begin{pmatrix}F_{2n+1}&F_{2n}\\F_{2n}&F_{2n-1}\end{pmatrix}}\,,} where Fn is the nth Fibonacci number. The ratio of numbers of kites to darts in any sufficiently large P2 Penrose tiling pattern therefore approximates to the golden ratio φ.[45] A similar result holds for the ratio of the number of thick rhombs to thin rhombs in the P3 Penrose tiling.[43] Deflation for P2 and P3 tilings Consecutive deflations of the 'sun' vertex in a Penrose tiling of type P2 Consecutive deflations of a tile-set in a Penrose tiling of type P3 8th deflation of the 'sun' vertex in a Penrose tiling of type P2 Starting with a collection of tiles from a given tiling (which might be a single tile, a tiling of the plane, or any other collection), deflation proceeds with a sequence of steps called generations. In one generation of deflation, each tile is replaced with two or more new tiles that are scaled-down versions of tiles used in the original tiling. The substitution rules guarantee that the new tiles will be arranged in accordance with the matching rules.[43] Repeated generations of deflation produce a tiling of the original axiom shape with smaller and smaller tiles. This rule for dividing the tiles is a subdivision rule. Initial tiles Half-kite Half-dart The above table should be used with caution. The half kite and half dart deflation are useful only in the context of deflating a larger pattern as shown in the sun and star deflations. They give incorrect results if applied to single kites and darts.
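The claimed link between powers of the substitution matrix and Fibonacci numbers is easy to verify numerically; the following sketch (an editorial check, not part of the article) confirms it for small n and shows the kite:dart ratio approaching the golden ratio:

```python
# Check: the n-th power of [[2,1],[1,1]] has entries
# [[F(2n+1), F(2n)], [F(2n), F(2n-1)]] for Fibonacci numbers F.

def matmul2(a, b):
    # 2x2 integer matrix product
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

m = [[2, 1], [1, 1]]
p = [[1, 0], [0, 1]]  # identity
for n in range(1, 12):
    p = matmul2(p, m)
    assert p == [[fib(2*n + 1), fib(2*n)], [fib(2*n), fib(2*n - 1)]]

# Ratio of large to small tile counts tends to the golden ratio.
assert abs(p[0][0] / p[0][1] - (1 + 5 ** 0.5) / 2) < 1e-6
```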
In addition, the simple subdivision rule generates holes near the edges of the tiling which are just visible in the top and bottom illustrations on the right. Additional forcing rules are useful. Consequences and applications Inflation and deflation yield a method for constructing kite and dart (P2) tilings, or rhombus (P3) tilings, known as up-down generation.[31][43][44] The Penrose tilings, being non-periodic, have no translational symmetry – the pattern cannot be shifted to match itself over the entire plane. However, any bounded region, no matter how large, will be repeated an infinite number of times within the tiling. Therefore, no finite patch can uniquely determine a full Penrose tiling, nor even determine which position within the tiling is being shown.[46] This shows in particular that the number of distinct Penrose tilings (of any type) is uncountably infinite. Up-down generation yields one method to parameterize the tilings, but other methods use Ammann bars, pentagrids, or cut and project schemes.[43] Related tilings and topics Decagonal coverings and quasicrystals Gummelt's decagon (left) with the decomposition into kites and darts indicated by dashed lines; the thicker darker lines bound an inscribed ace and thick rhomb; possible overlaps (right) are by one or two red aces.[47] In 1996, German mathematician Petra Gummelt demonstrated that a covering (so called to distinguish it from a non-overlapping tiling) equivalent to the Penrose tiling can be constructed using a single decagonal tile if two kinds of overlapping regions are allowed.[48] The decagonal tile is decorated with colored patches, and the covering rule allows only those overlaps compatible with the coloring. A suitable decomposition of the decagonal tile into kites and darts transforms such a covering into a Penrose (P2) tiling. Similarly, a P3 tiling can be obtained by inscribing a thick rhomb into each decagon; the remaining space is filled by thin rhombs. 
These coverings have been considered as a realistic model for the growth of quasicrystals: the overlapping decagons are 'quasi-unit cells' analogous to the unit cells from which crystals are constructed, and the matching rules maximize the density of certain atomic clusters.[47][49] The aperiodic nature of the coverings can make theoretical studies of physical properties, such as electronic structure, difficult due to the absence of Bloch's theorem. However, spectra of quasicrystals can still be computed with error control.[50] Related tilings Tie and Navette tiling (in red on a Penrose background) The three variants of the Penrose tiling are mutually locally derivable. Selecting some subsets from the vertices of a P1 tiling makes it possible to produce other non-periodic tilings. If the corners of one pentagon in P1 are labeled in succession by 1, 3, 5, 2, 4, an unambiguous tagging in all the pentagons is established, the order being either clockwise or counterclockwise. Points with the same label define a tiling by Robinson triangles while points with the numbers 3 and 4 on them define the vertices of a Tie-and-Navette tiling.[51] A variant tiling which is not a quasicrystal. It is not a Penrose tiling because it does not comply with the tile alignment rules. There are also other related inequivalent tilings, such as the hexagon-boat-star and Mikulla–Roth tilings. For instance, if the matching rules for the rhombus tiling are reduced to a specific restriction on the angles permitted at each vertex, a binary tiling is obtained.[52] Its underlying symmetry is also fivefold but it is not a quasicrystal. It can be obtained either by decorating the rhombs of the original tiling with smaller ones, or by applying substitution rules, but not by de Bruijn's cut-and-project method.[53] Pentagonal and decagonal Girih-tile pattern on a spandrel from the Darb-i Imam shrine, Isfahan, Iran (1453 C.E.) Salesforce Transit Center in San Francisco.
The outer "skin", made of white aluminum, is perforated in the pattern of a Penrose tiling. The aesthetic value of tilings has long been appreciated, and remains a source of interest in them; hence the visual appearance (rather than the formal defining properties) of Penrose tilings has attracted attention. The similarity with certain decorative patterns used in North Africa and the Middle East has been noted;[54][55] the physicists Peter J. Lu and Paul Steinhardt have presented evidence that a Penrose tiling underlies examples of medieval Islamic geometric patterns, such as the girih (strapwork) tilings at the Darb-e Imam shrine in Isfahan.[56] Drop City artist Clark Richert used Penrose rhombs in artwork in 1970, derived by projecting the rhombic triacontahedron shadow onto a plane observing the embedded "fat" rhombi and "skinny" rhombi which tile together to produce the non-periodic tessellation. Art historian Martin Kemp has observed that Albrecht Dürer sketched similar motifs of a rhombus tiling.[57] San Francisco's new $2.2 billion Transbay Transit Center features perforations in its exterior's undulating white metal skin in the Penrose pattern.[58] The floor of the atrium of the Bayliss Building at The University of Western Australia is tiled with Penrose tiles.[59] In 1979 Miami University used a Penrose tiling executed in terrazzo to decorate the Bachelor Hall courtyard in their Department of Mathematics and Statistics.[60] The Andrew Wiles Building, the location of the Mathematics Department at the University of Oxford as of October 2013,[61] includes a section of Penrose tiling as the paving of its entrance.[62] The pedestrian part of the street Keskuskatu in central Helsinki is paved using a form of Penrose tiling. The work was finished in 2014.[63] Girih tiling List of aperiodic sets of tiles Pinwheel tiling Pentagonal tiling Quaquaversal tiling ^ Senechal 1996, pp. 241-244. ^ Radin 1996. 
^ a b General references for this article include Gardner 1997, pp. 1–30, Grünbaum & Shephard 1987, pp. 520–548 & 558–579, and Senechal 1996, pp. 170–206. ^ Gardner 1997, pp. 20, 23 ^ Grünbaum & Shephard 1987, p. 520 ^ Culik & Kari 1997 ^ Wang 1961 ^ Robert Berger at the Mathematics Genealogy Project ^ a b c d e f g Austin 2005a ^ Berger 1966 ^ Gardner 1997, p. 5 ^ Robinson 1971 ^ a b Senechal 1996, pp. 173-174 ^ Penrose 1974 ^ Grünbaum & Shephard 1987, section 2.5 ^ Luck 2000 ^ a b Senechal 1996, p. 171 ^ a b Gardner 1997, p. 6 ^ Gardner 1997, p. 19 ^ a b Gardner 1997, chapter 1 ^ de Bruijn 1981 ^ The P1–P3 notation is taken from Grünbaum & Shephard 1987, section 10.3 ^ Grünbaum & Shephard 1987, section 10.3 ^ a b Penrose 1978, p. 32 ^ "However, as will be explained momentarily, differently colored pentagons will be considered to be different types of tiles." Austin 2005a; Grünbaum & Shephard 1987, figure 10.3.1, shows the edge modifications needed to yield an aperiodic set of prototiles. ^ "The rhombus of course tiles periodically, but we are not allowed to join the pieces in this manner." Gardner 1997, pp. 6–7 ^ a b c d e Grünbaum & Shephard 1987, pp. 537– 547 ^ Gardner 1997, pp. 10–11 ^ Senechal 1996, p. 178 ^ "The Penrose Tiles". Murderous Maths. Retrieved 2020. ^ In Grünbaum & Shephard 1987, the term "inflation" is used where other authors would use "deflation" (followed by rescaling). The terms "composition" and "decomposition", which many authors also use, are less ambiguous. ^ Ramachandrarao, P (2000). "On the fractal nature of Penrose tiling" (PDF). Current Science. 79: 364. ^ Senechal 1996, pp. 157–158 ^ a b c d e Austin 2005b ^ "... any finite patch that we choose in a tiling will lie inside a single inflated tile if we continue moving far enough up in the inflation hierarchy. This means that anywhere that tile occurs at that level in the hierarchy, our original patch must also occur in the original tiling. 
Therefore, the patch will occur infinitely often in the original tiling and, in fact, in every other tiling as well." Austin 2005a ^ a b Lord & Ranganathan 2001 ^ Gummelt 1996 ^ Steinhardt & Jeong 1996; see also Steinhardt, Paul J. "A New Paradigm for the Structure of Quasicrystals". ^ Colbrook; Roman; Hansen (2019). "How to Compute Spectra with Error Control". Physical Review Letters. 122 (25): 250201. Bibcode:2019PhRvL.122y0201C. doi:10.1103/PhysRevLett.122.250201. PMID 31347861. ^ Luck, R (1990). "Penrose Sublattices". Journal of Non-Crystalline Solids. 117-8 (90): 832-5. Bibcode:1990JNCS..117..832L. doi:10.1016/0022-3093(90)90657-8. ^ Lançon & Billard 1988 ^ Godrèche & Lançon 1992; see also D. Frettlöh; F. Gähler & E. Harriss. "Binary". Tilings Encyclopedia. Department of Mathematics, University of Bielefeld. ^ Zaslavskiĭ et al. 1988; Makovicky 1992 ^ Prange, Sebastian R.; Peter J. Lu (1 September 2009). "The Tiles of Infinity". Saudi Aramco World. Aramco Services Company. pp. 24-31. Retrieved 2010. ^ Lu & Steinhardt 2007 ^ Kemp 2005 ^ Kuchar, Sally (11 July 2013), "Check Out the Proposed Skin for the Transbay Transit Center", Curbed ^ "Centenary: The University of Western Australia", www.treasures.uwa.edu.au ^ The Penrose Tiling at Miami University by David Kullman, Presented at the Mathematical Association of America Ohio Section Meeting Shawnee State University, 24 October 1997 ^ New Building Project, archived from the original on 22 November 2012, retrieved 2013 ^ Roger Penrose explains the mathematics of the Penrose Paving, University of Oxford Mathematical Institute ^ "Keskuskadun kävelykadusta voi tulla matemaattisen hämmästelyn kohde", Helsingin Sanomat, 6 August 2014 Berger, R. (1966), The undecidability of the domino problem, Memoirs of the American Mathematical Society, 66, ISBN 9780821812662 . de Bruijn, N. G.
(1981), "Algebraic theory of Penrose's non-periodic tilings of the plane, I, II" (PDF), Indagationes Mathematicae, 43 (1): 39-66, doi:10.1016/1385-7258(81)90017-2 . Gummelt, Petra (1996), "Penrose tilings as coverings of congruent decagons", Geometriae Dedicata, 62 (1), doi:10.1007/BF00239998, S2CID 120127686 . Penrose, Roger (1974), "The role of aesthetics in pure and applied mathematical research", Bulletin of the Institute of Mathematics and Its Applications, 10: 266ff . US 4133152, Penrose, Roger, "Set of tiles for covering a surface", issued 1979-01-09 . Robinson, R.M. (1971), "Undecidability and non-periodicity for tilings of the plane", Inventiones Mathematicae, 12 (3): 177-190, Bibcode:1971InMat..12..177R, doi:10.1007/BF01418780, S2CID 14259496 . Schechtman, D.; Blech, I.; Gratias, D.; Cahn, J.W. (1984), "Metallic Phase with long-range orientational order and no translational symmetry", Physical Review Letters, 53 (20): 1951-1953, Bibcode:1984PhRvL..53.1951S, doi:10.1103/PhysRevLett.53.1951 Wang, H. (1961), "Proving theorems by pattern recognition II", Bell System Technical Journal, 40: 1-42, doi:10.1002/j.1538-7305.1961.tb03975.x . Austin, David (2005a), "Penrose Tiles Talk Across Miles", Feature Column, Providence: American Mathematical Society . Austin, David (2005b), "Penrose Tilings Tied up in Ribbons", Feature Column, Providence: American Mathematical Society . Colbrook, Matthew; Roman, Bogdan; Hansen, Anders (2019), "How to Compute Spectra with Error Control", Physical Review Letters, 122 (25): 250201, Bibcode:2019PhRvL.122y0201C, doi:10.1103/PhysRevLett.122.250201, PMID 31347861 Culik, Karel; Kari, Jarkko (1997), "On aperiodic sets of Wang tiles", Foundations of Computer Science, Lecture Notes in Computer Science, 1337, pp. 153-162, doi:10.1007/BFb0052084, ISBN 978-3-540-63746-2 Gardner, Martin (1997), Penrose Tiles to Trapdoor Ciphers, Cambridge University Press, ISBN 978-0-88385-521-8 . (First published by W. H. 
Freeman, New York (1989), ISBN 978-0-7167-1986-1.) Chapter 1 (pp. 1–18) is a reprint of Gardner, Martin (January 1977), "Extraordinary non-periodic tiling that enriches the theory of tiles", Scientific American, 236 (1): 110-121, Bibcode:1977SciAm.236a.110G, doi:10.1038/scientificamerican0177-110 . Godrèche, C; Lançon, F. (1992), "A simple example of a non-Pisot tiling with five-fold symmetry" (PDF), Journal de Physique I, 2 (2): 207-220, Bibcode:1992JPhy1...2..207G, doi:10.1051/jp1:1992134 . Grünbaum, Branko; Shephard, G. C. (1987), Tilings and Patterns, New York: W. H. Freeman, ISBN 978-0-7167-1193-3 . Kemp, Martin (2005), "Science in culture: A trick of the tiles", Nature, 436 (7049): 332, Bibcode:2005Natur.436..332K, doi:10.1038/436332a . Lançon, Frédéric; Billard, Luc (1988), "Two-dimensional system with a quasi-crystalline ground state" (PDF), Journal de Physique, 49 (2): 249-256, CiteSeerX 10.1.1.700.3611, doi:10.1051/jphys:01988004902024900 . Lord, E.A.; Ranganathan, S. (2001), "The Gummelt decagon as a 'quasi unit cell'" (PDF), Acta Crystallographica, A57 (5): 531-539, CiteSeerX 10.1.1.614.3786, doi:10.1107/S0108767301007504, PMID 11526302 Lu, Peter J.; Steinhardt, Paul J. (2007), "Decagonal and Quasi-crystalline Tilings in Medieval Islamic Architecture" (PDF), Science, 315 (5815): 1106-1110, Bibcode:2007Sci...315.1106L, doi:10.1126/science.1135491, PMID 17322056 . Luck, R. (2000), "Dürer-Kepler-Penrose: the development of pentagonal tilings", Materials Science and Engineering, 294 (6): 263-267, doi:10.1016/S0921-5093(00)01302-2 . Makovicky, E. (1992), "800-year-old pentagonal tiling from Maragha, Iran, and the new varieties of aperiodic tiling it inspired", in I. Hargittai (ed.), Fivefold Symmetry, Singapore–London: World Scientific, pp. 67-86, ISBN 9789810206000 . Penrose, Roger (1978), "Pentaplexity", Eureka, 39: 16-22 . (Page numbers cited here are from the reproduction as Penrose, R. 
(1979-80), "Pentaplexity: A class of non-periodic tilings of the plane", The Mathematical Intelligencer, 2: 32-37, doi:10.1007/BF03024384, S2CID 120305260 .) Radin, Charles (April 1996), "Book Review: Quasicrystals and geometry" (PDF), Notices of the American Mathematical Society, 43 (4): 416-421 Senechal, Marjorie (1996), Quasicrystals and geometry, Cambridge University Press, ISBN 978-0-521-57541-6 . Steinhardt, Paul J.; Jeong, Hyeong-Chai (1996), "A simpler approach to Penrose tiling with implications for quasicrystal formation", Nature, 382 (1 August): 431-433, Bibcode:1996Natur.382..431S, doi:10.1038/382431a0, S2CID 4354819 . Zaslavskiĭ, G.M.; Sagdeev, Roal'd Z.; Usikov, D.A.; Chernikov, A.A. (1988), "Minimal chaos, stochastic web and structures of quasicrystal symmetry", Soviet Physics Uspekhi, 31 (10): 887-915, Bibcode:1988SvPhU..31..887Z, doi:10.1070/PU1988v031n10ABEH005632 . Weisstein, Eric W. "Penrose Tiles". MathWorld. John Savard, Penrose Tilings, quadibloc.com, retrieved 2009 Eric Hwang, Penrose Tiling, intendo.net, retrieved 2009 F. Gähler; E. Harriss & D. Frettlöh, "Penrose Rhomb", Tilings Encyclopedia, Department of Mathematics, University of Bielefeld, retrieved 2009 Kevin Brown, On de Bruijn Grids and Tilings, mathpages.com, retrieved 2009 David Eppstein, "Penrose Tiles", The Geometry Junkyard, ics.uci.edu/~eppstein, retrieved 2009 This has a list of additional resources.
William Chow, Penrose tile in architecture, retrieved 2009 Penrose's tiles viewer Generalized Penrose tilings Animated Penrose Tiling Animated Penrose Tiling (part 2) Helsinki Maths Mystery - Penrose tiles 5 and Penrose Tiling - Numberphile Animated Penrose Tiling - preliminary version Animated Penrose Tiling - preliminary version (part 2) Penrose tiling rotating in the five dimensional space Roger Penrose: Forbidden crystal symmetry - Event Q&A Roger Penrose - Forbidden crystal symmetry in mathematics and architecture
author: niplav, created: 2021-01-21, modified: 2021-03-31, language: english, status: in progress, importance: 2, confidence: likely In which I solve exercises from "Artificial Intelligence: A Modern Approach", written by Stuart Russell and Peter Norvig. I use the 3rd edition from 2010, because the exercises for the 4th edition were moved online. Solutions to "Artificial Intelligence: A Modern Approach" Define in your own words: (a) intelligence, (b) artificial intelligence, (c) agent, (d) rationality, (e) logical reasoning The word "intelligence" is mostly used to describe a property of systems. Roughly, it refers to the ability of a system to make decisions that result in consequences that are graded high according to some metric, as opposed to decisions that result in consequences that are graded low according to that metric. "Artificial intelligence" refers to systems designed and implemented by humans with the aim of these systems displaying intelligent behavior. An "agent" is a part of the universe that carries out goal-directed actions. The usage of the word "rationality" is difficult to untangle from the usage of the word "intelligence". For humans, "rationality" usually refers to the ability to detect and correct cognitive errors that hinder coming to correct conclusions about the state of the world (epistemic rationality), as well as the ability to act on those beliefs according to one's values (instrumental rationality). However, these seem very related to "intelligence", maybe only separated by potentiality: intelligence being the potential, and rationality being the ability to fulfill that potential. One could attempt to apply the same definition to artificial intelligences, but it seems unclear how a lawful process could have the potential to be more intelligent, yet fail to realize it. "Logical reasoning" refers to the act of deriving statements from other statements according to pre-defined rules.
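To make the last definition concrete, here is a minimal sketch (my own illustration, not from the book) of logical reasoning as rule application: new statements are derived from old ones by forward chaining.

```python
# Minimal forward chaining: derive statements from statements via fixed rules.
facts = {"rain"}
rules = [
    ({"rain"}, "wet_ground"),      # from "rain", derive "wet_ground"
    ({"wet_ground"}, "slippery"),  # from "wet_ground", derive "slippery"
]

derived = True
while derived:
    derived = False
    for premises, conclusion in rules:
        # fire a rule only if all premises are known and the conclusion is new
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            derived = True

print(facts)  # the original fact plus everything derivable from the rules
```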
A reflex action is not intelligent, as it is not the result of a deliberate reasoning process. According to my personal definition above (and also the definition given in the text), it is also not rational (since the action is not guided by a belief). Common usage of the term "rational" indicates that people would describe this reflex as a rational action. I believe this is fine, and words are just pointers to clusters in thing-space anyway. Suppose we extend Evans's ANALOGY program so that it can score 200 on a standard IQ test. Would we then have a program more intelligent than a human? Explain. No. (At least not for any useful definition of intelligence). IQ tests as they currently exist measure a proxy for the actual ability to perform complex tasks in the real world. For humans, geometry puzzles correlate (and predict) well with such tests (Sternberg et al. 2001). However, this proxy breaks down once we start optimising for it (as in the case on extending ANALOGY). We can now not predict real-world performance on arbitrary goals given the result of the IQ test performed on ANALOGY anymore. The neural structure of the sea slug Aplysia has been widely studied (first by Nobel Laureate Eric Kandel) because it has only about 20,000 neurons, most of them large and easily manipulated. Assuming that the cycle time for an Aplysia neuron is roughly the same as for a human neuron, how does the computational power, in terms of memory updates per second, compare with the high-end computer described in Figure 1.3? Given the cycle time of $10^{-3}$ seconds, we can expect $$\frac{2*10^{4} \hbox{ neurons}}{10^{-3}\frac{\hbox{s}}{\hbox{update}}}=2*10^{7} \frac{\hbox{neuron updates}}{s}$$ which is seven orders of magnitude lower than a supercomputer. Aplysia won't be proving any important theorems soon. Suppose that the performance measure is concerned with just the first T time steps of the environment and ignores everything thereafter. 
Show that a rational agent's action may depend not just on the state of the environment but also on the time step it has reached. Example: Let's say that we are in an environment with a button, and pressing the button causes a light to go on in the next timestep. The agent cares that the light is on (obtaining 1 util per timestep the light is on for the first T timesteps). However, pressing the button incurs a cost of ½ on the agent. Then, at timestep T, the agent will not press the button, since it does not care about the light being on at timestep T+1, and wants to avoid the cost ½. So at timesteps $<T$ the agent will press the button, while at timestep T, in the same environmental state, it will not press the button. For each of the following assertions, say whether it is true or false and support your answer with examples or counterexamples where appropriate. a. An agent that senses only partial information about the state cannot be perfectly rational. False. An agent that senses only partial information about the state could infer missing information by making deductions (logical or statistical) about the state of the environment, coming to full knowledge of the environment, and making perfectly rational choices using that information. For example, a chess-playing agent that can't see exactly one square could infer the piece standing on that square by observing which piece is missing from the rest of the board. b. There exist task environments in which no pure reflex agent can behave rationally. True. In an environment in which the next reward depends on the current state and the previous state, a simple reflex agent will get outperformed by agents with an internal world-model. An example for this is a stock-trading agent: The future prices of stocks don't just depend on the current prices, but on the history of prices. c. There exists a task environment in which every agent is rational. True.
It is the environment where the agent has no options to act. d. The input to an agent program is the same as the input to the agent function. Not sure. Both the agent function and the agent program receive percepts, but sometimes the agent program also needs information that is not a percept (e.g. priors for Bayesian agents). Is that counted as input, or simply as program-specific data? e. Every agent function is implementable by some program/machine combination. False. An agent function could be uncomputable (e.g. AIXI), and therefore not be implementable on a real-world machine. f. Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational. True, that would be the environment in which every action scores equally well on the performance measure. g. It is possible for a given agent to be perfectly rational in two distinct task environments. True. Given two agents $A_X$ and $A_Y$, and two task environments $X$ (giving percepts from the set $\{x_1, \dots, x_n\}$) and $Y$ (giving percepts from the set $\{y_1, \dots, y_n\}$), with $A_X$ being perfectly rational in $X$ and $A_Y$ being perfectly rational in $Y$, an agent that is perfectly rational in both task environments could be implemented using the code:

    p = percept()
    if p ∈ X
        A_X(p)
    while p = percept()
        if p ∈ Y
            A_Y(p)

h. Every agent is rational in an unobservable environment. False. Given an unobservable environment in which moving results in the performance measure going up (e.g. by knocking over ugly vases), agents that move a lot are more rational than agents that do not move. i. A perfectly rational poker-playing agent never loses. False. Given incomplete knowledge, a rational poker-playing agent can only win in expectation. For each of the following activities, give a PEAS description of the task environment and characterize it in terms of the properties listed in Section 2.3.2 Playing soccer.
Performance measure: $goals_{own}-goals_{enemy}$; environment: soccer field; actuators: legs & feet, arms & hands (for goalkeeper), torso, head; sensors: vision, hearing, tactile Multi-agent, continuous, partially observable, fully known (both rules of soccer and classical mechanics underlying the ball & other players, although fluid dynamics of air-player interaction is probably tricky), sequential, dynamic, stochastic (in theory deterministic, but practically stochastic, very small unobservable effects can have large consequences) Exploring the subsurface oceans of Titan. Performance measure: surface explored; environment: subsurface environments of Titan; actuators: motor with propeller, arms to grab things, perhaps wheels; sensors: radar, vision (if the agent has inbuilt light generation) Single-agent, continuous, partially observable, partially known (in case there's actually life there, we don't know how it behaves), sequential, dynamic (maybe not very dynamic, but there might be currents/geothermal vents/life), stochastic. Shopping for used AI books on the internet. Performance measure: $\frac{n_{books}}{\sum_{b \in books} p(b)}$ (price per book); environment: web browser; actuators: keyboard, mouse; sensors: vision of the screen, location of mouse, state of keys on keyboard pressed Multi-agent (if bidding against others), discrete, partially observable, fully known (unless bidding against others, since that would need model of human psychology), sequential (money in bank account is not reset), static (again, unless bidding against others), deterministic Playing a tennis match. Performance measure: $points_{own}-points_{enemy}$ (I think tennis uses rounds? 
Maybe $winrounds_{own}-winrounds_{enemy}$); environment: tennis court; actuators: arms, tennis racket, wheels/legs to move around; sensors: vision, hearing Multi-agent, continuous, fully observable, fully known (though caveats similar to soccer apply), episodic (after each round there's a reset, right?), dynamic, stochastic (similar caveats as in soccer example) Practicing tennis against a wall. Performance measure: number of balls hit; environment: place with wall; actuators: arms, tennis racket, wheels/legs to move around; sensors: vision, hearing Single-agent, continuous, fully observable, fully known (though caveats similar to soccer apply), episodic, dynamic, stochastic (similar caveats as in soccer example) Performing a high jump. Performance measure: height of the jump; environment: a place with a high/nonexistent ceiling; actuators: legs; sensors: tactile sensors in feet, height sensor Single-agent, continuous, fully observable (unless wind), fully known (although, again, caveats as in soccer), episodic (unless falling over and not being able to get up again), static, deterministic (unless wind) Knitting a sweater. Performance measure: beauty, robustness and comfortableness of the sweater; environment: a cozy sofa in the living room; actuators: needles for knitting; sensors: tactile sensors for the needles, visual sensors for observing the sweater Single-agent, continuous, fully observable, fully known (again using classical mechanics), sequential (unless unraveling completely & starting again is an option), static, deterministic Bidding on an item at an auction.
Performance measure: $\frac{n_{items}}{\sum_{i \in items} price(i)}$; environment: bidding platform/auction house; actuators: text entering for online/audio output for bidding; sensors: vision of the screen/auditory in the case of the auction house, visual to observe the items presented Multi-agent, discrete (money is usually discrete), fully observable, partially known (other bidders might be human and too complex to fully model), sequential (account balance persistent throughout auction), dynamic, deterministic Explain why problem formulation must follow goal formulation. The goal formulation applies first & foremost to the real world. The problem formulation, however, then translates this real-world goal into a format that computers can deal with. Formulating the problem before the goal has no "anchor" as to what to formalize; the goal gives information on what to concentrate on. Your goal is to navigate a robot out of a maze. The robot starts in the center of the maze facing north. You can turn the robot to face north, east, south, or west. You can direct the robot to move forward a certain distance, although it will stop before hitting a wall. a. Formulate this problem. How large is the state space? Assumption: The maze has size $n*m$. Size of the state space: $4*n*m$. b. In navigating a maze, the only place we need to turn is at the intersection of two or more corridors. Reformulate this problem using this observation. How large is the state space now? Let $i$ be the number of intersections. Then there are $2*((n*m)-i)+i*4$ different states (2 for each non-intersection state (walking forward or backward), and 4 for each intersection state, one for each direction the agent can go). However, this does not consider dead ends or intersections where there are only 3 valid directions.
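These first-pass counts are easy to sanity-check for concrete maze dimensions (the example sizes below are made up):

```python
# State-space sizes for the maze problem on an n*m grid with i intersections.

def states_with_orientation(n, m):
    # part (a): every cell combined with one of four facings
    return 4 * n * m

def states_turn_at_intersections(n, m, i):
    # part (b), first pass: 2 states per corridor cell, 4 per intersection
    return 2 * (n * m - i) + 4 * i

assert states_with_orientation(5, 5) == 100
assert states_turn_at_intersections(5, 5, 3) == 2 * 22 + 4 * 3  # 56
```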
If there are $i_d$ dead ends, $i_3$ intersections with 3 possible directions, and $i_4$ intersections with 4 possible directions, the number of possible states is instead $i_d+3*i_3+4*i_4+2*((n*m)-(i_d+i_3+i_4))$.
c. From each point in the maze, we can move in any of the four directions until we reach a turning point, and this is the only action we need to do. Reformulate the problem using these actions. Do we need to keep track of the robot's orientation now?
Since we don't have to turn before moving, we're equivalent to an unchanging directionless dot (only the position changes). We don't need to keep track of the orientation anymore, since we don't have to turn to a specific direction before moving.
d. In our initial description of the problem we already abstracted from the real world, restricting actions and removing details. List three such simplifications we made.
Only 4 different directions allowed, not being able to run into walls, the robot will move the given distance (and not experience battery failure/fall into a hole etc.).
How many solutions are there for the map-coloring problem in Figure 6.1? How many solutions if four colors are allowed? Two colors?
2 colors: 0 possible solutions
3 colors: $3*3*2=18$ possible solutions (T and SA can each take any of the 3 colors, and then the WA-NT-Q-NSW-V chain can only alternate between the 2 colors that differ from SA's)
4 colors: $4*4*(3*2*2*2*2)=768$ possible solutions (again, T and SA are free, and then WA-NT-Q-NSW-V have 3 colors left, with no same color for neighbors, which means 3 choices for the first one and two for each successor)
Solve the cryptarithmetic problem in Figure 6.2 by hand, using the strategy of backtracking with forward checking and the MRV and least-constraining-value heuristics.
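Before doing the search by hand, it helps to know what solutions exist at all. Here is a small brute-force sketch (plain enumeration with no forward checking or heuristics, purely as a check on the hand search that follows):

```python
from itertools import permutations

# Enumerate all solutions of TWO + TWO = FOUR where all six letters
# stand for distinct digits and the leading digits T and F are nonzero.
solutions = []
for t, w, o, f, u, r in permutations(range(10), 6):
    if t == 0 or f == 0:
        continue
    two = 100 * t + 10 * w + o
    four = 1000 * f + 100 * o + 10 * u + r
    if two + two == four:
        solutions.append((t, w, o, f, u, r))
```

This finds seven solutions, among them $734+734=1468$ and $867+867=1734$, so the hand search only needs to reach one of them.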
Variables: $X=\{F, T, U, W, R, O, C_1, C_2, C_3\}$
$$C=\{\langle O, R \rangle: O+O \mod 10=R, \\ \langle W, U, C_1 \rangle: W+W+C_1 \mod 10=U, \\ \langle T, O, C_2 \rangle: T+T+C_2 \mod 10=O, \\ \langle C_1, O \rangle: C_1=1 \hbox{ if } O+O>9 \hbox { else } 0, \\ \langle C_2, W, C_1 \rangle: C_2=1 \hbox{ if } W+W+C_1>9 \hbox { else } 0, \\ \langle C_3, T, C_2 \rangle: C_3=1 \hbox{ if } T+T+C_2>9 \hbox { else } 0, \\ \langle F, C_3 \rangle: F=C_3, \\ \langle F, T, U, W, R, O \rangle: Alldiff(F, T, U, W, R, O)\}$$
Domains: $\{0..9\}$ for $\{F, T, U, W, R, O\}$, and $\{0, 1\}$ for $\{C_1, C_2, C_3\}$.
Replacing the Alldiff constraint with binary constraints:
$$C := (C \backslash \{\langle F, T, U, W, R, O \rangle: Alldiff(F, T, U, W, R, O)\}) \cup \\ \{ \langle x_1, x_2 \rangle: x_1 \not = x_2 \mid x_1, x_2 \in \{ F, T, U, W, R, O \} \}$$
Replacing the other ternary constraints with binary ones: New variables $X_1, X_2 \in [10] \times \{0, 1\}$. We remove the constraints
$$\{\langle W, U, C_1 \rangle: W+W+C_1 \mod 10=U, \\ \langle T, O, C_2 \rangle: T+T+C_2 \mod 10=O, \\ \langle C_2, W, C_1 \rangle: C_2=1 \hbox{ if } W+W+C_1>9 \hbox { else } 0, \\ \langle C_3, T, C_2 \rangle: C_3=1 \hbox{ if } T+T+C_2>9 \hbox { else } 0 \}$$
and add some constraints to replace the ternary constraints with binary ones on $X_1$ and $X_2$.
The result looks like this:
$$ C := \{ \langle X_1, U \rangle: U=fst(X_1)+fst(X_1)+snd(X_1) \mod 10, \\ \langle X_2, O \rangle: O=fst(X_2)+fst(X_2)+snd(X_2) \mod 10, \\ \langle X_1, C_2 \rangle: C_2=1 \hbox{ if } fst(X_1)+fst(X_1)+snd(X_1)>9 \hbox { else } 0, \\ \langle X_2, C_3 \rangle: C_3=1 \hbox{ if } fst(X_2)+fst(X_2)+snd(X_2)>9 \hbox { else } 0, \\ \langle X_1, W \rangle: W=fst(X_1), \\ \langle X_1, C_1 \rangle: C_1=snd(X_1), \\ \langle X_2, T \rangle: T=fst(X_2), \\ \langle X_2, C_2 \rangle: C_2=snd(X_2), \\ \langle O, R \rangle: O+O \mod 10=R, \\ \langle C_1, O \rangle: C_1=1 \hbox{ if } O+O>9 \hbox { else } 0, \\ \langle F, C_3 \rangle: F=C_3 \} \\ \cup \{ \langle x_1, x_2 \rangle: x_1 \not = x_2 \mid x_1, x_2 \in \{ F, T, U, W, R, O \} \} $$
Variables sorted by domain size: $X_1: 20, X_2: 20, F: 10, T: 10, U: 10, W: 10, R: 10, O: 10, C_1: 2, C_2: 2, C_3: 2$
Variables sorted by degree: $O: 8, W: 6, T: 6, R: 6, U: 6, F: 6, X_1: 4, X_2: 4, C_1: 2, C_2: 2, C_3: 2$
Now, one can do the actual searching and inference:
Assign (tie between $C_1, C_2, C_3$ in remaining values, choosing $C_1$ randomly): $C_1=1$
Infer: $X_1 \in [10] \times \{1\}$
Infer: $O \in \{5,6,7,8,9\}$
Infer: $X_2 \in \{2,3,4,7,8,9\} \times \{0, 1\}$
Infer: $R \in \{0,2,4,6,8\}$
Infer: $T \in \{2,3,4,7,8,9\}$
Assign (tie between $C_2, C_3$ in remaining values, choosing $C_2$ next): $C_2=1$
Infer from $C_2$: $X_1 \in \{5,6,7,8,9\} \times \{1\}$
Infer from $C_2$: $X_2 \in \{2,3,4,7,8,9\} \times \{1\}$
Infer from $X_1$: $U \in \{1, 3, 5, 7, 9\}$
Infer from $X_1$: $W \in \{5,6,7,8,9\}$
Infer from $X_2$: $O \in \{5,7,9\}$
Infer from $X_2$: $T \in \{2,3,4,7,8,9\}$
Infer from $O$: $R \in \{0, 4, 8\}$
Assign: $C_3=1$
Infer from $C_3$: $X_2 \in \{7,8,9\} \times \{1\}$
Infer from $C_3$: $F=1$
Infer from $F$: $U \in \{3,5,7,9\}$
Infer from $U$: $X_1 \in \{6,7,8,9\} \times \{1\}$
Infer from $X_1$: $W \in \{6,7,8,9\}$
Assign: $R=4$ (assigning $R=0$ would also have been consistent: it forces $O=5$ and leads to the solution $765+765=1530$)
Infer from $R$: $O=7$
Infer from $R$: $T \in \{2,3,7,8,9\}$
Infer from $O$: $X_2=(8,1)$
Infer from $O$: $T \in \{2,3,8,9\}$
Infer from $O$: $W \in \{6,8,9\}$
Infer from $X_2$: $T=8$
Infer from $W$: $X_1 \in \{6,8,9\} \times \{1\}$
Infer from $T$: $W \in \{6,9\}$
Infer from $W$: $X_1 \in \{6,9\} \times \{1\}$
Infer from $X_1$: $U \in \{3,9\}$
Assign: $W=6$
Infer from $W$: $X_1=(6,1)$
Infer from $X_1$: $U=3$
The assignments are $C_1=1, C_2=1, C_3=1, F=1, T=8, U=3, W=6, R=4, O=7, X_1=(6,1), X_2=(8,1).$
Or, in the puzzle:
$$ \matrix { & 8 & 6 & 7 \cr + & 8 & 6 & 7 \cr \hline{} 1 & 7 & 3 & 4 \cr } $$
Decide whether each of the following sentences is valid, unsatisfiable, or neither. Verify your decisions using truth tables or the equivalence rules of Figure 7.11 (page 249).
a. $Smoke \Rightarrow Smoke$
$$Smoke \Rightarrow Smoke \equiv \\ \lnot Smoke \lor Smoke \equiv \\ True$$
The sentence is valid since True is valid.
b. $Smoke \Rightarrow Fire$
$Smoke \Rightarrow Fire \equiv \lnot Smoke \lor Fire$
Neither: If Smoke=True and Fire=False, then the sentence is false; if Smoke=False and Fire=False, the sentence is true.
c. $(Smoke \Rightarrow Fire) \Rightarrow (\lnot Smoke \Rightarrow \lnot Fire)$
$$(Smoke \Rightarrow Fire) \Rightarrow (\lnot Smoke \Rightarrow \lnot Fire) \equiv \\ \lnot (\lnot Smoke \lor Fire) \lor (Smoke \lor \lnot Fire) \equiv \\ (Smoke \land \lnot Fire) \lor Smoke \lor \lnot Fire$$
Neither: For Smoke=False and Fire=True, the sentence is false, but for Smoke=True, the sentence is true.
d. $Smoke \lor Fire \lor \lnot Fire$
$Smoke \lor Fire \lor \lnot Fire \equiv Smoke \lor True \equiv True$
This sentence is valid, since it is equivalent to True.
e.
$((Smoke \land Heat) \Rightarrow Fire) \Leftrightarrow ((Smoke \Rightarrow Fire) \lor (Heat \Rightarrow Fire))$
$$((Smoke \land Heat) \Rightarrow Fire) \Leftrightarrow ((Smoke \Rightarrow Fire) \lor (Heat \Rightarrow Fire)) \equiv \\ ((\lnot Smoke \lor \lnot Heat \lor Fire) \Leftrightarrow (\lnot Smoke \lor Fire \lor \lnot Heat)) \equiv \\ True$$
This sentence is valid since $a \Leftrightarrow a \equiv True$.
f. $(Smoke \Rightarrow Fire) \Rightarrow ((Smoke \land Heat) \Rightarrow Fire)$
$$(Smoke \Rightarrow Fire) \Rightarrow ((Smoke \land Heat) \Rightarrow Fire) \equiv \\ \lnot (\lnot Smoke \lor Fire) \lor (\lnot (Smoke \land Heat) \lor Fire) \equiv \\ (Smoke \land \lnot Fire) \lor \lnot Smoke \lor \lnot Heat \lor Fire$$
This sentence is valid: If Smoke=True, Heat=True and Fire=False, then $Smoke \land \lnot Fire$ is true, and makes the whole sentence true. Otherwise, one of the other disjuncts makes the sentence true.
g. $Big \lor Dumb \lor (Big \Rightarrow Dumb)$
$Big \lor Dumb \lor (Big \Rightarrow Dumb) \equiv Big \lor Dumb \lor \lnot Big \lor Dumb \equiv True$.
Therefore, this sentence is valid as heck.
According to some political pundits, a person who is radical (R) is electable (E) if he/she is conservative (C), but otherwise not electable.
a. Which of the following are correct representations of this assertion?
(i) $R \land E \Leftrightarrow C$
(ii) $R \Rightarrow (E \Leftrightarrow C)$
(iii) $R \Rightarrow ((C \Rightarrow E) \lor \lnot E)$
(i) would mean that someone is conservative exactly if they are both radical and electable, which need not follow from the assertion (it says nothing about non-radicals being conservative or not).
(ii) is a good representation: If someone is radical, they have to be either both conservative and electable or not conservative and not electable.
For (iii), if R=True, C=True and E=False, then the sentence is true, but this goes against the earlier formulation: There are no unelectable radical conservatives (in this hypothetical scenario).
b.
Which of the sentences in (a) can be expressed in Horn form?
$$(R \land E) \Leftrightarrow C \equiv \\ (C \Rightarrow (R \land E)) \land ((R \land E) \Rightarrow C) \equiv \\ (\lnot C \lor (R \land E)) \land (\lnot (R \land E) \lor C) \equiv \\ (\lnot C \lor R) \land (\lnot C \lor E) \land (\lnot R \lor \lnot E \lor C)$$
Each of these clauses contains at most one positive literal, so this sentence can be expressed in Horn form.
$$ R \Rightarrow (E \Leftrightarrow C) \equiv \\ \lnot R \lor ((E \Rightarrow C) \land (C \Rightarrow E)) \equiv \\ \lnot R \lor ((\lnot E \lor C) \land (\lnot C \lor E)) \equiv \\ (\lnot R \lor \lnot E \lor C) \land (\lnot R \lor \lnot C \lor E) $$
Again, both clauses contain at most one positive literal, so this sentence can also be expressed in Horn form.
$$ R \Rightarrow ((C \Rightarrow E) \lor \lnot E) \equiv \\ \lnot R \lor (\lnot C \lor E) \lor \lnot E \equiv \\ \lnot R \lor \lnot C \lor E \lor \lnot E \equiv \\ (R \land C \land E) \Rightarrow E \equiv \\ True$$
This sentence can be represented in Horn form, and is also a tautology.
Suppose you are given the following axioms:
$0 \le 3$.
$7 \le 9$.
$\forall x: x \le x$.
$\forall x: x \le x+0$.
$\forall x: x+0 \le x$.
$\forall x, y: x+y \le y+x$.
$\forall w, x, y, z: w \le y \land x \le z \Rightarrow w+x \le y+z$.
$\forall x, y, z: x \le y \land y \le z \Rightarrow x \le z$.
a. Give a backward-chaining proof of the sentence $7 \le 3 + 9$. (Be sure, of course, to use only the axioms given here, not anything else you may know about arithmetic.) Show only the steps that leads [sic] to success, not the irrelevant steps.
Goal: $7 \le 3+9$. Rule 8 with $\{7/x, 0+7/y, 3+9/z\}$ reduces it to the subgoals $7 \le 0+7$ and $0+7 \le 3+9$.
Subgoal $7 \le 0+7$: Rule 8 with $\{7/x, 7+0/y, 0+7/z\}$ reduces it to $7 \le 7+0$ and $7+0 \le 0+7$, which are instances of Rule 4 and Rule 6.
Subgoal $0+7 \le 3+9$: Rule 7 with $\{0/w, 7/x, 3/y, 9/z\}$ reduces it to $0 \le 3$ and $7 \le 9$, which are Rules 1 and 2.
b. Give a forward-chaining proof of the sentence $7 \le 3+9$.
Again, show only the steps that lead to success.
Known: $0 \le 3$ (Rule 1) and $7 \le 9$ (Rule 2)
Rule 7 with $\{0/w, 7/x, 3/y, 9/z\}$: $0+7 \le 3+9$
Rule 4 with $\{7/x\}$: $7 \le 7+0$
Rule 6 with $\{7/x, 0/y\}$: $7+0 \le 0+7$
Rule 8 with $\{7/x, 7+0/y, 0+7/z\}$: $7 \le 0+7$
Rule 8 with $\{7/x, 0+7/y, 3+9/z\}$: $7 \le 3+9$
Show from first principles that $P(a|b \land a) = 1$.
I'm not sure whether this counts as "from first principles", but $P(a|b \land a)=\frac{P(a \land a \land b)}{P(a \land b)}=\frac{P(a \land b)}{P(a \land b)}=1$ is my solution.
Using the axioms of probability, prove that any probability distribution on a discrete random variable must sum to 1.
We know that $\sum_{\omega \in \Omega} P(\omega)=1$. Given a discrete random variable X (X is discrete (and therefore also countable?)), and a probability distribution $P: X \rightarrow [0;1]$. Then, setting $\Omega=X$, one can see that $\sum_{x \in X} P(x)=1$.
For each of the following statements, either prove it is true or give a counterexample.
a. If $P(a|b,c)=P(b|a,c)$, then $P(a|c)=P(b|c)$
Assuming $P(a,b,c)>0$ (so that it may be cancelled):
$$P(a|b,c)=P(b|a,c) \Leftrightarrow \\ \frac{P(a,b,c)}{P(b,c)}=\frac{P(a,b,c)}{P(a,c)} \Leftrightarrow \\ P(a,c)=P(b,c) \Leftrightarrow \\ \frac{P(a,c)}{P(c)}=\frac{P(b,c)}{P(c)} \Leftrightarrow \\ P(a|c)=P(b|c)$$
b. If $P(a|b,c)=P(a)$, then $P(b|c)=P(b)$
False: If $P(a)=P(a|b,c)=P(a|\lnot b,c)=P(a|b, \lnot c)=P(a|\lnot b,\lnot c)=0.1$ ($P(\lnot a)$ elided for brevity), then b can still depend on c, for example $P(b|c)=0.2$, $P(\lnot b|c)=0.8$, $P(b|\lnot c)=0.3$, $P(\lnot b|\lnot c)=0.7$, and $P(c)=P(\lnot c)=0.5$ (which would make $P(b)=\sum_{c \in C} P(b|c)*P(c)=0.5*0.2+0.5*0.3=0.25$ and $P(\lnot b)=\sum_{c \in C} P(\lnot b|c)*P(c)=0.5*0.8+0.5*0.7=0.75$).
c. If $P(a|b)=P(a)$, then $P(a|b,c)=P(a|c)$
$a$ and $b$ are independent. However, this does not imply conditional independence given $c$. E.g.: $P(a)=0.5, P(b)=0.5, P(c|a, b)=1, P(c|\lnot a, \lnot b)=0, P(c|\lnot a, b)=1, P(c|a, \lnot b)=1$
So this is false.
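The counterexample for (c) can be checked numerically by building the full joint distribution from the numbers above (a quick sketch; `prob` is just a helper name made up here):

```python
from itertools import product

# a and b are independent with P(a)=P(b)=0.5; c depends on both:
# P(c|a,b)=1, P(c|not a,b)=1, P(c|a,not b)=1, P(c|not a,not b)=0.
p_c_given = {(True, True): 1.0, (False, True): 1.0,
             (True, False): 1.0, (False, False): 0.0}

joint = {}
for a, b, c in product([True, False], repeat=3):
    pc = p_c_given[(a, b)]
    joint[(a, b, c)] = 0.5 * 0.5 * (pc if c else 1.0 - pc)

def prob(pred):
    """Probability of the event described by pred over worlds (a, b, c)."""
    return sum(p for w, p in joint.items() if pred(*w))

p_a = prob(lambda a, b, c: a)
p_a_given_b = prob(lambda a, b, c: a and b) / prob(lambda a, b, c: b)
p_a_given_c = prob(lambda a, b, c: a and c) / prob(lambda a, b, c: c)
p_a_given_bc = (prob(lambda a, b, c: a and b and c)
                / prob(lambda a, b, c: b and c))
```

Here $P(a|b)=P(a)=0.5$, but $P(a|b,c)=0.5$ while $P(a|c)=2/3$, so the independence does not survive conditioning on $c$.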
This question deals with the properties of possible worlds, defined on page 488 as assignments to all random variables. We will work with propositions that correspond to exactly one possible world because they pin down the assignments of all the variables. In probability theory, such propositions are called atomic events. For example, with Boolean variables $X_1, X_2, X_3$, the proposition $x_1 \land \lnot x_2 \land \lnot x_3$ fixes the assignment of the variables; in the language of propositional logic, we would say it has exactly one model.
a. Prove, for the case of $n$ Boolean variables, that any two distinct atomic events are mutually exclusive; that is, their conjunction is equivalent to false.
Let $s_1, s_2$ be two distinct atomic events. That means there exists at least one $x_i$ so that $x_i$ is part of the conjunction in $s_1$ and $\lnot x_i$ is part of the conjunction in $s_2$. Then:
$$s_1 \land s_2 = \\ s_1(1) \land \dots \land s_1(i-1) \land x_i \land s_1(i+1) \land \dots \land s_1(n) \land s_2(1) \land \dots \land s_2(i-1) \land \lnot x_i \land s_2(i+1) \land \dots \land s_2(n)=\\ s_1(1) \land \dots \land s_1(i-1) \land s_1(i+1) \land \dots \land s_1(n) \land s_2(1) \land \dots \land s_2(i-1) \land s_2(i+1) \land \dots \land s_2(n) \land x_i \land \lnot x_i=\\ s_1(1) \land \dots \land s_1(i-1) \land s_1(i+1) \land \dots \land s_1(n) \land s_2(1) \land \dots \land s_2(i-1) \land s_2(i+1) \land \dots \land s_2(n) \land false=\\ false$$
b. Prove that the disjunction of all possible atomic events is logically equivalent to true.
Every assignment to the $n$ variables makes exactly one atomic event true, namely the one whose literals agree with the assignment. So the disjunction of all possible atomic events is true under every assignment, and is therefore logically equivalent to true.
c. Prove that any proposition is logically equivalent to the disjunction of the atomic events that entail its truth.
Let $\mathcal{A}$ be the set of $n$ assignments that make the proposition true.
Then each assignment $A_i \in \mathcal{A}$ corresponds to exactly one atomic event $a_i$ (e.g. assigning true to $x_1$, false to $x_2$ and false to $x_3$ corresponds to $x_1 \land \lnot x_2 \land \lnot x_3$). Each of these atomic events entails the proposition. One can then simply create the disjunction $\bigvee_{i=1}^{n} a_i$, which is true exactly under the assignments that make the proposition true, and is therefore logically equivalent to it.
Prove Equation (13.4) from Equations (13.1) and (13.2). More explicitly: Prove $P(a \lor b)= P(a)+P(b)-P(a \land b)$ from $0 \le P(ω) \le 1, \sum_{ω \in Ω} P(ω)=1$.
Since $a \lor b \Leftrightarrow ω \in a \cup b$ and $\sum_{ω \in a \backslash b} P(ω) + \sum_{ω \in a \cap b} P(ω)=\sum_{ω \in a} P(ω)$:
$$P(a \lor b)=\\ \sum_{ω \in a \cup b} P(ω)=\\ \sum_{ω \in a \backslash b} P(ω) + \sum_{ω \in b \backslash a} P(ω) + \sum_{ω \in a \cap b} P(ω)=\\ \sum_{ω \in a \backslash b} P(ω) + \sum_{ω \in b \backslash a} P(ω) + \sum_{ω \in a \cap b} P(ω) + \sum_{ω \in a \cap b} P(ω) - \sum_{ω \in a \cap b} P(ω)=\\ \sum_{ω \in a} P(ω) + \sum_{ω \in b} P(ω) - \sum_{ω \in a \cap b} P(ω)=\\ P(a)+P(b)-P(a \land b)$$
We have a bag of three biased coins a, b, and c with probabilities of coming up heads of 20%, 60%, and 80%, respectively. One coin is drawn randomly from the bag (with equal likelihood of drawing each of the three coins), and then the coin is flipped three times to generate the outcomes $X_1$, $X_2$, and $X_3$.
a. Draw the Bayesian network corresponding to this setup and define the necessary CPTs.

| $Coin$ | $P(Coin)$ |
|--------|-----------|
| a      | 1/3       |
| b      | 1/3       |
| c      | 1/3       |

The three conditional tables for $X_1, X_2, X_3$ are all the same:

| $Coin$ | $P(X_i=Head)$ |
|--------|---------------|
| a      | 0.2           |
| b      | 0.6           |
| c      | 0.8           |

Furthermore, $X_1, X_2, X_3$ are mutually conditionally independent given $Coin$.
b. Calculate which coin was most likely to have been drawn from the bag if the observed flips come out heads twice and tails once.
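Before grinding through the algebra, here is a numeric sketch of what the posterior should come out to. The key point is that the normalizer is $\sum_{v} P(H_1|v)*P(H_2|v)*P(T_3|v)*P(v)$, a sum over the coins; the flips are only conditionally independent given the coin, so the denominator is not a product of marginals:

```python
# Posterior over coins after observing heads, heads, tails:
# P(coin | H, H, T) is proportional to P(H|coin)^2 * P(T|coin) * P(coin).
p_heads = {'a': 0.2, 'b': 0.6, 'c': 0.8}
prior = {coin: 1 / 3 for coin in p_heads}

unnorm = {coin: p_heads[coin] ** 2 * (1 - p_heads[coin]) * prior[coin]
          for coin in p_heads}
z = sum(unnorm.values())
posterior = {coin: p / z for coin, p in unnorm.items()}
```

This gives posteriors of roughly 0.105, 0.474 and 0.421 for a, b and c, which sum to 1.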
$C=\underset{coin \in \{a,b,c\}}{\hbox{argmax}} P(coin|H_1, H_2, T_3)$
$$P(coin|H_1, H_2, T_3)=\\ \frac{P(H_1, H_2, T_3|coin)*P(coin)}{P(H_1, H_2, T_3)}=\\ \frac{P(H_1|coin)*P(H_2|coin)*P(T_3|coin)*P(coin)}{\sum_{v \in \{a,b,c\}} P(H_1|v)*P(H_2|v)*P(T_3|v)*P(v)}$$
Note that the denominator must be this sum over the coins: the flips are only conditionally independent given the coin, so $P(H_1, H_2, T_3)$ is not the product of the marginals $P(H_1)*P(H_2)*P(T_3)$.
$$\sum_{v \in \{a,b,c\}} P(H_1|v)*P(H_2|v)*P(T_3|v)*P(v)=\frac{1}{3}*(0.2*0.2*0.8+0.6*0.6*0.4+0.8*0.8*0.2)=\frac{0.304}{3}\approx 0.101333$$
Now we plug in the values for $coin$:
$$P(a|H_1, H_2, T_3)=\frac{0.2*0.2*0.8*1/3}{0.101333}=\frac{0.032}{0.304}\approx 0.1053\\ P(b|H_1, H_2, T_3)=\frac{0.6*0.6*0.4*1/3}{0.101333}=\frac{0.144}{0.304}\approx 0.4737\\ P(c|H_1, H_2, T_3)=\frac{0.8*0.8*0.2*1/3}{0.101333}=\frac{0.128}{0.304}\approx 0.4211$$
Thus, I conclude that it is most likely that coin b was pulled out of the bag. (Sanity check: the three posteriors sum to 1.)
A professor wants to know if students are getting enough sleep. Each day, the professor observes whether the students sleep in class, and whether they have red eyes. The professor has the following domain theory:
The prior probability of getting enough sleep, with no observations, is 0.7.
The probability of getting enough sleep on night t is 0.8 given that the student got enough sleep the previous night, and 0.3 if not.
The probability of having red eyes is 0.2 if the student got enough sleep, and 0.7 if not.
The probability of sleeping in class is 0.1 if the student got enough sleep, and 0.3 if not.
Formulate this information as a dynamic Bayesian network that the professor could use to filter or predict from a sequence of observations. Then reformulate it as a hidden Markov model that has only a single observation variable. Give the complete probability tables for the model.
There are three variables: $E_t$ for getting enough sleep in night t, $S_t$ for sleeping in class on day t, and $R_t$ for having red eyes on day t.
The conditional probability tables for the dynamic Bayesian network are:

$P(E_{t+1}|E_t)$:

| $E_t$ | $e_{t+1}$ | $\lnot e_{t+1}$ |
|-------|-----------|-----------------|
| 1     | 0.8       | 0.2             |
| 0     | 0.3       | 0.7             |

$P(S_t|E_t)$:

| $E_t$ | $s_t$ | $\lnot s_t$ |
|-------|-------|-------------|
| 1     | 0.1   | 0.9         |
| 0     | 0.3   | 0.7         |

$P(R_t|E_t)$:

| $E_t$ | $r_t$ | $\lnot r_t$ |
|-------|-------|-------------|
| 1     | 0.2   | 0.8         |
| 0     | 0.7   | 0.3         |

For the hidden Markov model, the table for $P(E_{t+1}|E_t)$ stays the same. For $P(R_t, S_t | E_t)$ we use the fact that $R_t$ and $S_t$ are conditionally independent given $E_t$:

| $E_t$ | $r_t, s_t$ | $r_t, \lnot s_t$ | $\lnot r_t, s_t$ | $\lnot r_t, \lnot s_t$ |
|-------|------------|------------------|------------------|------------------------|
| 1     | 0.02       | 0.18             | 0.08             | 0.72                   |
| 0     | 0.21       | 0.49             | 0.09             | 0.21                   |

For the DBN specified in Exercise 15.13 and for the evidence values
e1 = not red eyes, not sleeping in class
e2 = red eyes, not sleeping in class
e3 = red eyes, sleeping in class
perform the following computations:
a. State estimation: Compute $P(EnoughSleep_t|e_{1:t})$ for each of t = 1, 2, 3.
Note: In the previous exercise, I used e as a symbol for getting enough sleep. This collides with the abstract symbol for evidence variables, but I'm too lazy to change it back (I will use $ev$ for the evidence variables instead). I will not mix abstract variables and concrete variables (here R, S and E) to keep the confusion minimal.
For t=1:
$$P(E_1|ev_{1:1})=\\ P(E_1|\lnot r_1, \lnot s_1)=\\ \alpha P(\lnot r_1, \lnot s_1| E_1)*(P(E_1|e_0)*P(e_0)+P(E_1|\lnot e_0)*P(\lnot e_0))=\\ \alpha \langle 0.72, 0.21 \rangle * (\langle 0.8, 0.2 \rangle * 0.7 + \langle 0.3, 0.7 \rangle * 0.3)=\\ \alpha \langle 0.72, 0.21 \rangle * \langle 0.65, 0.35 \rangle=\\ \alpha \langle 0.468, 0.0735 \rangle \approx \\ \langle 0.8643, 0.1357 \rangle $$
$$P(E_2|ev_{1:2})=\\ \alpha P(r_2, \lnot s_2| E_2)*(P(E_2|e_1)*P(e_1|ev_{1:1})+P(E_2|\lnot e_1)*P(\lnot e_1|ev_{1:1}))=\\ \alpha \langle 0.18, 0.49 \rangle * (\langle 0.8, 0.2 \rangle * 0.8643 + \langle 0.3, 0.7 \rangle * 0.1357)=\\ \alpha \langle 0.18, 0.49 \rangle * \langle 0.7321, 0.2679 \rangle=\\ \alpha \langle 0.1318, 0.1313 \rangle \approx \\ \langle 0.5010, 0.4990 \rangle $$
$$P(E_3|ev_{1:3})=\\ \alpha P(r_3, s_3| E_3)*(P(E_3|e_2)*P(e_2|ev_{1:2})+P(E_3|\lnot e_2)*P(\lnot e_2|ev_{1:2}))=\\ \alpha \langle 0.02, 0.21 \rangle * (\langle 0.8, 0.2 \rangle * 0.5010 + \langle 0.3, 0.7 \rangle * 0.4990)=\\ \alpha \langle 0.02, 0.21 \rangle * \langle 0.5505, 0.4495 \rangle=\\ \alpha \langle 0.0110, 0.0944 \rangle \approx \\ \langle 0.1045, 0.8955 \rangle $$
b. Smoothing: Compute $P(EnoughSleep_t|e_{1:3})$ for each of t = 1, 2, 3.
I'll use k instead of t for the point of smoothing here, because, let's be real, I don't need more double-usage of symbols.
The backward messages first. Since there is no evidence after t=3, $b_{4:3}(e_3)=P(ev_{4:3}|e_3)$ is the probability of an empty evidence sequence, which is 1 (not 0):
$$b_{3:3}=P(ev_{3:3}|E_2)=\\ P(r_3, s_3|e_3)*b_{4:3}(e_3)*P(e_3|E_2)+P(r_3, s_3|\lnot e_3)*b_{4:3}(\lnot e_3)*P(\lnot e_3|E_2)=\\ 0.02*1*\langle 0.8, 0.3 \rangle + 0.21*1*\langle 0.2, 0.7 \rangle=\\ \langle 0.058, 0.153 \rangle$$
$$b_{2:3}=P(ev_{2:3}|E_1)=\\ P(r_2, \lnot s_2|e_2)*b_{3:3}(e_2)*P(e_2|E_1)+P(r_2, \lnot s_2|\lnot e_2)*b_{3:3}(\lnot e_2)*P(\lnot e_2|E_1)=\\ 0.18*0.058*\langle 0.8, 0.3 \rangle + 0.49*0.153*\langle 0.2, 0.7 \rangle=\\ \langle 0.0233, 0.0556 \rangle$$
For k=1:
$$P(E_1|ev_{1:3})=\alpha P(E_1|ev_{1:1})\times P(ev_{2:3}|E_1)=\alpha f_{1:1} \times b_{2:3}=\\ \alpha \langle 0.8643, 0.1357 \rangle \times \langle 0.0233, 0.0556 \rangle \approx \\ \langle 0.7278, 0.2722 \rangle$$
For k=2:
$$P(E_2|ev_{1:3})=\alpha f_{1:2} \times b_{3:3}=\\ \alpha \langle 0.5010, 0.4990 \rangle \times \langle 0.058, 0.153 \rangle \approx \\ \langle 0.2757, 0.7243 \rangle$$
For k=3, there is no evidence after t=3, so smoothing gives the same result as filtering: $P(E_3|ev_{1:3})=\langle 0.1045, 0.8955 \rangle$.
c. Compare the filtered and smoothed probabilities for t = 1 and t = 2.
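The forward-backward computation is mechanical enough that a short script is a useful cross-check (a sketch; state `True` means enough sleep, and the numbers are the transition and sensor models from above):

```python
# Forward-backward on the sleep model.
trans = {True: 0.8, False: 0.3}   # P(E_{t+1}=True | E_t)
prior = {True: 0.7, False: 0.3}   # P(E_0)
# Evidence likelihoods P(r_t, s_t | E_t) for t = 1..3:
# (not red, not sleeping), (red, not sleeping), (red, sleeping).
liks = [
    {True: 0.8 * 0.9, False: 0.3 * 0.7},
    {True: 0.2 * 0.9, False: 0.7 * 0.7},
    {True: 0.2 * 0.1, False: 0.7 * 0.3},
]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Forward pass: filtered[t] = P(E_{t+1} | ev_{1:t+1}).
f, filtered = prior, []
for lik in liks:
    pred = {e: sum(f[e0] * (trans[e0] if e else 1 - trans[e0])
                   for e0 in (True, False))
            for e in (True, False)}
    f = normalize({e: lik[e] * pred[e] for e in (True, False)})
    filtered.append(f)

# Backward pass: smoothed[t] = P(E_{t+1} | ev_{1:3}).
b = {True: 1.0, False: 1.0}   # empty evidence after t=3
smoothed = [None, None, None]
for t in (2, 1, 0):
    smoothed[t] = normalize({e: filtered[t][e] * b[e]
                             for e in (True, False)})
    b = {e: sum(liks[t][e1] * b[e1] * (trans[e] if e1 else 1 - trans[e])
                for e1 in (True, False))
         for e in (True, False)}
```

Filtering gives roughly 0.8643, 0.5010 and 0.1045 for t = 1, 2, 3, and smoothing gives roughly 0.7278 and 0.2757 for t = 1 and 2.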
As a reminder, $P(E_1|ev_{1:1})=\langle 0.8643, 0.1357 \rangle$, $P(E_2|ev_{1:2})=\langle 0.5010, 0.4990 \rangle$, and $P(E_1|ev_{1:3})=\langle 0.7278, 0.2722 \rangle$, $P(E_2|ev_{1:3})=\langle 0.2757, 0.7243 \rangle$.
Interestingly, the filtered $P(E_1|ev_{1:1})$ is more confident than the smoothed $P(E_1|ev_{1:3})$, but it's the other way around for $E_2$: incorporating the later evidence (red eyes, sleeping in class) pulls both smoothed estimates towards not having had enough sleep.
(Adapted from David Heckerman.) This exercise concerns the Almanac Game, which is used by decision analysts to calibrate numeric estimation. For each of the questions that follow, give your best guess of the answer, that is, a number that you think is as likely to be too high as it is to be too low. Also give your guess at a 25th percentile estimate, that is, a number that you think has a 25% chance of being too high, and a 75% chance of being too low. Do the same for the 75th percentile. (Thus, you should give three estimates in all—low, median, and high—for each question.)
Using Klong for dealing with the arrays of values when doing calculations.
a. Number of passengers who flew between New York and Los Angeles in 1989.
80k, 500k, 6m.
b. Population of Warsaw in 1992.
Population of Warsaw today (2021): Around 2m, I think? Was probably lower back then. Assume growth of 1.5% a year.
[600000 2000000 3000000]%(1.015^29)
[389615.319813051588 1298717.73271017196 1948076.59906525794]
c. Year in which Coronado discovered the Mississippi River.
Hm, no idea. The Mississippi is near the east coast, so probably discovered relatively early. I know that Yucatán was discovered very early.
d. Number of votes received by Jimmy Carter in the 1976 presidential election.
Population of the US at that time around 250m? Electorate is probably ~70% of population (maybe less because population was younger then, say 65%), combined with 60% participation in presidential elections, and presidents receiving on average ~50% of the vote.
[180000000 250000000 300000000]*0.65*0.6*0.5
[35100000.0 48750000.0 58500000.0]
e. Age of the oldest living tree, as of 2002.
1.5k, 4k, 10k.
f. Height of the Hoover Dam in feet.
~3 feet in a meter.
[50 85 180]*3
[150 255 540]
g. Number of eggs produced in Oregon in 1985.
Let's say every Oregonian eats an egg a day, and Oregon produces all its own eggs.
[100000 300000 1500000]*365
[36500000 109500000 547500000]
Maybe even less for the smallest value, because Oregon might not produce all its eggs on its own. 10m, 109.5m, 547.5m
h. Number of Buddhists in the world in 1992.
World population in 1992: Around 6b, I think? I vaguely remember Buddhists making up 2% of the world population.
6000000000*[0.003 0.02 0.1]
[18000000.0 120000000.0 600000000.0]
Other method: Most Buddhists probably live in China/India/Japan. China had 1b, India had ~700m (?), Japan had ~100m. Let's say 20% in each of those countries (major religion, being generous because other countries are not included). Median comes out to
0.2*700000000+1000000000+100000000
That's not that far off of the other number. Let's say 100m, 240m (the mean of the two estimates for the median), 600m.
i. Number of deaths due to AIDS in the United States in 1981.
0 (when did AIDS start exactly?), 50k, 250k.
j. Number of U.S. patents granted in 1901.
Let's say 1 patent per year for every 1000/2000/10000 people, for 100m/150m/200m people. That results in 10k, 75k, 200k. But that seems a bit much. How big could the patent office be? 10k patents would mean processing ~25 patents a day. Let's knock these numbers down a little. 5k, 50k, 150k.
Ranking My Answers
The correct answers appear after the last exercise of this chapter. From the point of view of decision analysis, the interesting thing is not how close your median guesses came to the real answers, but rather how often the real answer came within your 25% and 75% bounds. If it was about half the time, then your bounds are accurate.
But if you're like most people, you will be more sure of yourself than you should be, and fewer than half the answers will fall within the bounds. With practice, you can calibrate yourself to give realistic bounds, and thus be more useful in supplying information for decision making.
a. Lies in my given range, but I was a bit pessimistic.
b. Again, I was pessimistic, but I wasn't so bad, only 300k off.
c. Yeah, I didn't perform well on this one. I guess I should have been more aggressive in my estimation of how early much of the US was explored. Still, 1541 is surprising (the American continent was discovered in 1492, and only 50 years later they find the Mississippi?).
d. I'm proud of this one: only 7m too optimistic, for a question I know next to nothing about (I couldn't name a single thing Jimmy Carter did during his presidency).
e. I roughly knew the order of magnitude for this one for today, with the major hurdle being to estimate what the state of knowledge about tree age was in 2002.
f. Pretty accurate on this one, too. I corrected the number down a couple of times before checking, reflecting on dams probably not being that high.
g. I was way too pessimistic about this one. I didn't know whether Oregon was a major agricultural state (is it?) and I didn't include the possibility that Oregon overproduces eggs. Too bad.
h. Also proud of this one. 50m off of the real number (and too low! I was fearing I was being too optimistic, being exposed to Buddhism much more than other religions). Glad I did the dialectical bootstrapping here.
i. I presume 1981 was just at the start of the AIDS pandemic. I was careful enough to go very low, but I suspected that AIDS started in the 70s, and shit really hit the fan in the 90s, but wasn't sure how bad exactly the 80s were. Still, 250k as an upper range was way too careful (COVID-19 killed ~200k in the US in 2020, and that was the biggest pandemic since the Spanish Flu).
j. Very proud of this one.
Bit too optimistic about the capabilities of the US patent office, but still in the right range.
Summing up: 1 below my 25th percentile estimate, 4 between the 25th percentile and the median, 4 between the median and the 75th percentile, and 1 above the 75th percentile. While I am not biased (at least not in this set of answers), I am too careful (unlike most people; probably the result of doing a bunch of forecasting and being punished for overconfidence once too often). I should have set my ranges to be narrower.
Try this second set of questions and see if there is any improvement:
a. Year of birth of Zsa Zsa Gabor.
I'm not sure who this is.
b. Maximum distance from Mars to the sun in miles.
The average distance of the Earth from the sun is 150m km (~90m miles).
[1.5 2 5]*150%1.6
[140.625 187.5 468.75]
c. Value in dollars of exports of wheat from the United States in 1992.
The US GDP today is ~1t, right? Then it was probably around half of that back then. Maybe exports is 10% of that, and wheat is ~0.1%/1%/4% of exports.
500000000000*0.1*[0.001 0.01 0.04]
[50000000.0 500000000.0 2000000000.0]
50m, 500m, 2b.
d. Tons handled by the port of Honolulu in 1991.
Let's say 1/4/10 ships a day, with 20/100/1000 tons cargo?
[1 4 10]*[20 100 1000]*365
[7300 146000 3650000]
e. Annual salary in dollars of the governor of California in 1993.
Sometimes politicians get only symbolic salaries, right? Though, that seems unlikely here. Also, consider inflation.
[80 130 350]%1.02^30
[44.1656711183929545 71.7692155673885511 193.224811142969175]
f. Population of San Diego in 1990.
300k, 1m, 2.5m.
g. Year in which Roger Williams founded Providence, Rhode Island.
Providence is quite old, right? East coast; Lovecraft already writes about it as a very old city.
h. Height of Mt. Kilimanjaro in feet.
The Kilimanjaro is somewhere between 5500 m and 6000 m. A meter is ~3 feet.
[5500 5850 6000]*3
[16500 17550 18000]
i. Length of the Brooklyn Bridge in feet.
I remember taking ~10 minutes to walk over the Brooklyn Bridge (although we were walking slowly, a speed of ~4km/h).
3*4000*[8 12 15]%60
[1599.99999999999999 2400.0 3000.0]
j. Number of deaths due to automobile accidents in the United States in 1992.
Car safety was probably worse back then. The US population was probably smaller (today it's ~310m). I think I remember something of ~20k car deaths in the US some years ago?
1.05*[5000 20000 50000]*290%310
[4911.29032258064515 19645.1612903225806 49112.9032258064515]
a. I was a bit too careful on the lower rankings (maybe I should have taken into account that being a popstar was really hard before 1880, just because the media didn't exist)
b. I was quite close to the lower bound, which surprises me. I maybe estimated the orbit of Mars to be more elliptical than circular.
c. My estimate is way too low. I was probably decomposing too hard here.
d. Again, my estimate was too low. Probably underestimated the amount of cargo in one ship? Also, duh, Honolulu is in the middle of the Pacific, of course there's going to be a lot of cargo.
e. I'm quite happy with this one.
f. Ditto.
g. This one was close. I shouldn't underestimate how long the history of the US is, and how early the East Coast got explored.
h. My value for the number of feet per meter was probably too low.
i. Just below my lower estimate. We were walking pretty slowly, I should have taken that more into account.
j. Again, my estimates were a bit low. I've overestimated the number of deaths in car crashes, was corrected for that, and probably overcorrected here.
Summing up: 1 below my 25th percentile estimate, 2 between the 25th percentile and the median, 4 between the median and the 75th percentile, and 3 above the 75th percentile. Here, I show some bias towards underestimating the values. Maybe because I decomposed more?
In 1713, Nicolas Bernoulli stated a puzzle, now called the St. Petersburg paradox, which works as follows.
You have the opportunity to play a game in which a fair coin is tossed repeatedly until it comes up heads. If the first heads appears on the nth toss, you win $2^n$ dollars.

a. Show that the expected monetary value of this game is infinite.

$$EU=\underset{n \rightarrow \infty}{\lim} \sum_{i=1}^{n} \frac{1}{2^i}*2^i=\\ \underset{n \rightarrow \infty}{\lim} \sum_{i=1}^{n} 1=\\ \underset{n \rightarrow \infty}{\lim} n$$

b. How much would you, personally, pay to play the game? I'm not sure. Maybe ~\$20? I guess I value money linearly up to that range.

c. Nicolas's cousin Daniel Bernoulli resolved the apparent paradox in 1738 by suggesting that the utility of money is measured on a logarithmic scale (i.e., $U(S_n) = a \log_2 n+b$, where $S_n$ is the state of having \$n). What is the expected utility of the game under this assumption?

$$EU=\underset{n \rightarrow \infty}{\lim} \sum_{i=1}^{n} \frac{a*\log_2(2^i)+b}{2^i}=\\ a*\underset{n \rightarrow \infty}{\lim} \sum_{i=1}^{n} \frac{i}{2^i}+b*\underset{n \rightarrow \infty}{\lim} \sum_{i=1}^{n} \frac{1}{2^i}=\\ 2a+b$$

using $\sum_{i=1}^{\infty} \frac{i}{2^i}=2$ and $\sum_{i=1}^{\infty} \frac{1}{2^i}=1$.

d. What is the maximum amount that it would be rational to pay to play the game, assuming that one's initial wealth is \$k?

The maximum rational price $c$ is the largest amount satisfying $\sum_{i=1}^{\infty} \frac{a*\log_2(k-c+2^i)+b}{2^i} \geq a*\log_2(k)+b$, i.e., playing at price $c$ must not lower expected utility relative to keeping \$k. Because the left-hand side is finite for any finite $k$, this maximum price is finite as well (for a given $k$, it has to be found numerically).

Consider a student who has the choice to buy or not buy a textbook for a course. We'll model this as a decision problem with one Boolean decision node, B, indicating whether the agent chooses to buy the book, and two Boolean chance nodes, M, indicating whether the student has mastered the material in the book, and P, indicating whether the student passes the course. Of course, there is also a utility node, U. A certain student, Sam, has an additive utility function: 0 for not buying the book and -\$100 for buying it; and \$2000 for passing the course and 0 for not passing.
Sam's conditional probability estimates are as follows:

$P(p|b, m) = 0.9$ $P(m|b) = 0.9$ $P(p|b, \lnot m) = 0.5$ $P(m|\lnot b) = 0.7$ $P(p|\lnot b, m) = 0.8$ $P(p|\lnot b, \lnot m) = 0.3$

You might think that P would be independent of B given M, But [sic] this course has an open-book final—so having the book helps.

a. Draw the decision network for this problem.

b. Compute the expected utility of buying the book and of not buying it.

$$EU(b)=\\ P(p|b)*U(p|b)+P(\lnot p|b)*U(\lnot p|b)=\\ (P(p|b,m)*P(m|b)+P(p|b,\lnot m)*P(\lnot m|b))*(\$2000-\$100)+(P(\lnot p|b,m)*P(m|b)+P(\lnot p|b,\lnot m)*P(\lnot m|b))*(-\$100)=\\ (0.9*0.9+0.5*0.1)*(\$2000-\$100)+(0.1*0.9+0.5*0.1)*(-\$100)=\\ \$1620$$

$$EU(\lnot b)=\\ P(p|\lnot b)*U(p|\lnot b)+P(\lnot p|\lnot b)*U(\lnot p|\lnot b)=\\ (P(p|\lnot b,m)*P(m|\lnot b)+P(p|\lnot b,\lnot m)*P(\lnot m|\lnot b))*(\$2000)=\\ (0.8*0.7+0.3*0.3)*(\$2000)=\\ \$1300$$

Since $U(\lnot p|\lnot b)=0$, it can be left out of the calculation.

c. What should Sam do? Sam should buy the book, since that yields the highest expected utility.

(Adapted from Pearl (1988).) A used-car buyer can decide to carry out various tests with various costs (e.g., kick the tires, take the car to a qualified mechanic) and then, depending on the outcome of the tests, decide which car to buy. We will assume that the buyer is deciding whether to buy car $c_1$, that there is time to carry out at most one test, and that $t_1$ is the test of $c_1$ and costs \$50. A car can be in good shape (quality $q^+$) or bad shape (quality $q^-$), and the tests might help indicate what shape the car is in. Car $c_1$ costs \$1,500, and its market value is \$2,000 if it is in good shape; if not, \$700 in repairs will be needed to make it in good shape. The buyer's estimate is that $c_1$ has a 70% chance of being in good shape.

a. Draw the decision network that represents this problem.

b. Calculate the expected net gain from buying $c_1$, given no test.
$E(U|b, \lnot t)=0.7*(\$2000-\$1500)+0.3*(\$2000-(\$700+\$1500))=\$290$

c. Tests can be described by the probability that the car will pass or fail the test given that the car is in good or bad shape. We have the following information:

$P(pass(c_1, t_1)|q^+(c_1))=0.8$ $P(pass(c_1, t_1)|q^-(c_1))=0.35$

Use Bayes' theorem to calculate the probability that the car will pass (or fail) its test and hence the probability that it is in good (or bad) shape given each possible test outcome.

$$P(q^+(c_1)|pass(c_1, t_1))=\\ \frac{P(pass(c_1, t_1)|q^+(c_1))*P(q^+(c_1))}{P(pass(c_1, t_1))}=\\ \frac{0.8*0.7}{\sum_{i \in \{q^+(c_1), q^-(c_1)\}} P(pass(c_1, t_1)|i)*P(i)}=\\ \frac{0.8*0.7}{0.8*0.7+0.35*0.3} \approx 0.8421$$

With that, $P(q^-(c_1)|pass(c_1, t_1)) \approx 0.1579$.

$$P(q^+(c_1)|\lnot pass(c_1, t_1))=\\ \frac{P(\lnot pass(c_1, t_1)|q^+(c_1))*P(q^+(c_1))}{P(\lnot pass(c_1, t_1))}=\\ \frac{0.2*0.7}{\sum_{i \in \{q^+(c_1), q^-(c_1)\}} P(\lnot pass(c_1, t_1)|i)*P(i)}=\\ \frac{0.2*0.7}{0.2*0.7+0.65*0.3} \approx 0.4179$$

With that, $P(q^-(c_1)|\lnot pass(c_1, t_1)) \approx 0.5821$.

d. Calculate the optimal decisions given either a pass or a fail, and their expected utilities.

$$E(U|b, t, pass(c_1, t_1))=0.8421*(\$2000-\$1500)+0.1579*(\$2000-(\$1500+\$700))=\$389.47 \\ E(U|\lnot b, t, pass(c_1, t_1))=\$0 \\ E(U|b, t, \lnot pass(c_1, t_1))=0.4179*(\$2000-\$1500)+0.5821*(\$2000-(\$1500+\$700))=\$92.53 \\ E(U|\lnot b, t, \lnot pass(c_1, t_1))=\$0$$

(The \$50 test cost is sunk at this point, so it is left out here and accounted for when deciding whether to test at all.) In both cases the optimal decision is to buy.

e. Calculate the value of information of the test, and derive an optimal conditional plan for the buyer.

$$VOI(pass(c_1, t_1))=(P(pass(c_1, t_1))*EU(b|t, pass(c_1, t_1))+P(\lnot pass(c_1, t_1))*EU(b|t, \lnot pass(c_1, t_1)))-EU(b|\lnot t)=\\ ((0.8*0.7+0.35*0.3)*\$389.47+(0.2*0.7+0.65*0.3)*\$92.53)-\$290 \approx \\ \$0$$

This makes sense, since in all cases (even if the test says that the car is a lemon!), the optimal decision is to buy the car. The information can therefore never change the decision and is worth \$0; since the test costs \$50, the optimal conditional plan is simply to buy $c_1$ without testing.
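As a sanity check on the arithmetic in parts c–e, the posteriors and the value of information can be recomputed in a few lines of Python (my own sketch; the variable names are mine):

```python
# Figures from the problem statement (parts b-e above)
p_good = 0.7                        # prior P(q+)
p_pass_good, p_pass_bad = 0.8, 0.35 # test characteristics
gain_good = 2000 - 1500             # net gain if the car is in good shape
gain_bad = 2000 - (1500 + 700)      # net gain if repairs are needed

# Bayes' theorem for both test outcomes
p_pass = p_pass_good * p_good + p_pass_bad * (1 - p_good)    # 0.665
post_pass = p_pass_good * p_good / p_pass                    # ~0.8421
post_fail = (1 - p_pass_good) * p_good / (1 - p_pass)        # ~0.4179

# Expected net gains of buying, without and with the test result
eu_no_test = p_good * gain_good + (1 - p_good) * gain_bad            # 290
eu_pass = post_pass * gain_good + (1 - post_pass) * gain_bad         # ~389.47
eu_fail = post_fail * gain_good + (1 - post_fail) * gain_bad         # ~92.54

# VOI: expected value of acting optimally after the test, minus acting without it
voi = p_pass * max(eu_pass, 0) + (1 - p_pass) * max(eu_fail, 0) - eu_no_test
# voi is ~0: the test outcome never flips the "buy" decision
```

Since the value of information comes out at \$0 and the test costs \$50, testing can never pay for itself here.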
Suppose that we define the utility of a state sequence to be the maximum reward obtained in any state in the sequence. Show that this utility function does not result in stationary preferences between state sequences. Is it still possible to define a utility function on states such that MEU decision making gives optimal behavior?

Preferences between state sequences are stationary iff $[s_0, s_1, \dots] \bullet [s_0', s_1', s_2', \dots] \Rightarrow [s_1, s_2, \dots] \bullet [s_1', s_2', \dots]$ for a fixed $\bullet \in \{\succ, \sim, \prec\}$ and $s_0=s_0'$.

Assume that $s_0=s_0'$ carries the maximum reward of the two state sequences $S_1, S_2$. Then $S_1 \sim S_2$. Assume further that $\max(S_1(1..))>\max(S_2(1..))$. Then $S_1(1..) \succ S_2(1..)$, even though the two sequences start with the same value. Stationarity is violated.

However, not all hope is lost. Given a sequence of rewards on states, one can define the utility to be the maximum of the average of all rewards in the sequence. This utility should be stationary.

Disclaimer: I'm not sure what exactly is being asked in the second part of the question. We have sequences of rewards, we have the utility function that ranks sequences based on the maximum, and we then have an agent that acts based on these utilities. Is it my job to modify the reward sequence so that using the maximum utility function is still optimal? Or do I have to modify the utility function itself? In the second case, I set the utility to be the sum of rewards (discounted, if you will).
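To make the counterexample concrete, here is a tiny Python check (my own construction, with arbitrary reward values):

```python
def u_max(rewards):
    # utility of a state sequence = maximum reward obtained anywhere in it
    return max(rewards)

# Two sequences with the same first state, whose reward (10) is the overall max
s1 = [10, 3, 2]
s2 = [10, 1, 1]

assert u_max(s1) == u_max(s2)          # full sequences: indifferent
assert u_max(s1[1:]) > u_max(s2[1:])   # tails: strictly ordered
# Stationarity would require the tails to be indifferent too, so it fails.
```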
Back-of-the-envelope calculation A back-of-the-envelope calculation is a rough calculation, typically jotted down on any available scrap of paper such as an envelope. It is more than a guess but less than an accurate calculation or mathematical proof. The defining characteristic of back-of-the-envelope calculations is the use of simplified assumptions. A similar phrase in the U.S. is "back of a napkin", also used in the business world to describe sketching out a quick, rough idea of a business or product.[1] In British English, a similar idiom is "back of a fag packet". History In the natural sciences, back-of-the-envelope calculation is often associated with physicist Enrico Fermi,[2] who was well known for emphasizing ways that complex scientific equations could be approximated within an order of magnitude using simple calculations. He went on to develop a series of sample calculations, which are called "Fermi Questions" or "Back-of-the-Envelope Calculations" and used to solve Fermi problems.[3][4] Fermi was known for getting quick and accurate answers to problems that would stump other people. The most famous instance came during the first atomic bomb test in New Mexico on 16 July 1945. As the blast wave reached him, Fermi dropped bits of paper. By measuring the distance they were blown, he could compare to a previously computed table and thus estimate the bomb energy yield. He estimated 10 kilotons of TNT; the measured result was 18.6.[5][6] Perhaps the most influential example of such a calculation was carried out over a period of a few hours by Arnold Wilkins after being asked to consider a problem by Robert Watson Watt. Watt had learned that the Germans claimed to have invented a radio-based death ray, but Wilkins' one-page calculations demonstrated that such a thing was almost certainly impossible. 
When Watt asked what role radio might play, Wilkins replied that it might be useful for detection at long range, a suggestion that led to the rapid development of radar and the Chain Home system.[7] Another example is Victor Weisskopf's pamphlet Modern Physics from an Elementary Point of View.[8] In these notes Weisskopf used back-of-the-envelope calculations to calculate the size of a hydrogen atom, a star, and a mountain, all using elementary physics. Examples Nobel laureate Charles Townes describes in a video interview for the University of California, Berkeley on the 50th anniversary of the laser, how he pulled an envelope from his pocket while sitting in a park and wrote down calculations during his initial insight into lasers.[9] During lunch with NFL commissioner Pete Rozelle in 1966, Tiffany & Co. vice president Oscar Riedner made a sketch on a cocktail napkin of what would become the Vince Lombardi Trophy, awarded annually to the winner of the Super Bowl.[10] An important Internet protocol, the Border Gateway Protocol, was sketched out in 1989 by engineers on the back of "three ketchup-stained napkins", and is still known as the three-napkin protocol.[11] UTF-8, the dominant character encoding for the World Wide Web,[12] was designed by Ken Thompson and Rob Pike on a placemat.[13] The Bailey bridge is a type of portable, pre-fabricated, truss bridge and was extensively used by British, Canadian and US military engineering units. Donald Bailey drew the original design for the bridge on the back of an envelope.[14] The Laffer Curve, which claims to show the relationship between tax cuts and government income, was drawn by Arthur Laffer in 1974 on a bar napkin to show an aide to President Gerald R. 
Ford why the federal government should cut taxes.[15] Upon hearing that the S-IV 2nd Stage of the Saturn I would need transport from California to Florida for launch as part of the Apollo program, Jack Conroy sketched the cavernous cargo airplane, the Pregnant Guppy.[16] The Video Toaster was designed on placemats in a Topeka pizza restaurant.[17] See also • Buckingham pi theorem, a technique often used in fluid mechanics to obtain order-of-magnitude estimates • Guesstimate • Scientific Wild-Ass Guess • Heuristic • Order-of-magnitude analysis • Rule of thumb • Sanity testing Notes and references 1. Brown, Bob (2011-07-19). "Napkins: Where Ethernet, Compaq and Facebook's cool data center got their starts". Network World. Archived from the original on 15 August 2019. Retrieved 2020-10-06. Robert Metcalfe's early Ethernet diagrams from his days at Xerox PARC back in the early 1970s might be the most famous napkin sketches in the technology industry. 2. Where Fermi stood. - Bulletin of the Atomic Scientists | Encyclopedia.com (Archived) 3. Back of the Envelope Calculations 4. High School Mathematics at Work: Essays and Examples for the Education of All Students 5. Rhodes, Richard (1986). The Making of the Atomic Bomb. New York: Simon & Schuster. p. 674. ISBN 978-0-671-44133-3. OCLC 13793436. 6. Nuclear Weapons Journal, Los Alamos National Laboratory, Issue 2 2005. 7. Austin, B.A. (1999). "Precursors To Radar — The Watson-Watt Memorandum And The Daventry Experiment" (PDF). International Journal of Electrical Engineering & Education. 36 (4): 365–372. doi:10.7227/IJEEE.36.4.10. S2CID 111153288. Archived from the original (PDF) on 2015-05-25. Retrieved 2016-07-08. 8. Lectures given in the 1969 Summer Lecture Programme, CERN (European Organization for Nuclear Research), CERN 70-8, 17 March 1970. 9. Video of interview with Charles Townes; envelope mention comes about halfway in 10. "Vince Lombardi Trophy". ProFootballHOF.com. NFL Enterprises, LLC. 
Retrieved November 21, 2019. 11. Timberg, Craig (31 May 2015). "Net of Insecurity; Quick fix for an early Internet problem lives on a quarter-century later". The Washington Post. Archived from the original on 1 June 2015. Retrieved 4 January 2021. As the prospect of system meltdown loomed, the men began scribbling ideas for a solution onto the back of a ketchup-stained napkin. Then a second. Then a third. The "three-napkins protocol," as its inventors jokingly dubbed it, would soon revolutionize the Internet. And though there were lingering issues, the engineers saw their creation as a "hack" or "kludge," slang for a short-term fix to be replaced as soon as a better alternative arrived. 12. "Usage Survey of Character Encodings broken down by Ranking". w3techs.com. Retrieved 2018-11-01. 13. Email Subject: UTF-8 history, From: "Rob 'Commander' Pike", Date: Wed, 30 Apr 2003..., ...UTF-8 was designed, in front of my eyes, on a placemat in a New Jersey diner one night in September or so 1992...So that night Ken wrote packing and unpacking code and I started tearing into the C and graphics libraries. The next day all the code was done... 14. Services, From Times Wire (1985-05-07). "Sir Donald Bailey, WW II Engineer, Dies". Los Angeles Times. ISSN 0458-3035. Retrieved 2018-11-01. He sketched the original design for the Bailey Bridge on the back of an envelope as he was being driven to a meeting of Royal Engineers to debate the failure of existing portable bridges. 15. "This Is Not Arthur Laffer's Famous Napkin". The New York Times, 13 Oct. 2017. 16. Bloom, Margy (15 September 2011). "PilotMag Aviation Magazine | The Pregnant Guppy | The Problem: Logistics". Pilot Magazine. Archived from the original on 15 July 2011. Retrieved 25 April 2012. [Conroy] listened to the conversations around him, then picked up a cocktail napkin and a ballpoint pen.
And with the precision he'd learned during the brief months he'd attended engineering school many years before, he drew an airplane that had never been built, to carry a rocket that had never been launched, to take man to a place nobody had ever been before. Jack Conroy had just sketched the airplane that would become the Pregnant Guppy. 17. Reimer, Jeremy (18 March 2016). "A history of the Amiga, part 9: The Video Toaster". Ars Technica. Archived from the original on 18 March 2016. Retrieved 4 January 2021. Montgomery suggested that Jenison meet his friend Brad Carvey, who had been working on projects involving robotic vision. The three of them got together in a pizza restaurant in Topeka and started drawing block diagrams on the placemats. External links • Syllabus at UCSD
Performance of genomic prediction within and across generations in maritime pine Jérôme Bartholomé (ORCID: orcid.org/0000-0002-0855-3828), Joost Van Heerwaarden, Fikret Isik, Christophe Boury, Marjorie Vidal, Christophe Plomion & Laurent Bouffier Genomic selection (GS) is a promising approach for decreasing breeding cycle length in forest trees. Assessment of progeny performance and of the prediction accuracy of GS models over generations is therefore a key issue. A reference population of maritime pine (Pinus pinaster) with an estimated effective inbreeding population size (status number) of 25 was first selected with simulated data. This reference population (n = 818) covered three generations (G0, G1 and G2) and was genotyped with 4436 single-nucleotide polymorphism (SNP) markers. We evaluated the effects on prediction accuracy of both the relatedness between the calibration and validation sets and validation on the basis of progeny performance. Pedigree-based (best linear unbiased prediction, ABLUP) and marker-based (genomic BLUP and Bayesian LASSO) models were used to predict breeding values for three different traits: circumference, height and stem straightness. On average, the ABLUP model outperformed genomic prediction models, with a maximum difference in prediction accuracies of 0.12, depending on the trait and the validation method. A mean difference in prediction accuracy of 0.17 was found between validation methods differing in terms of relatedness. Including the progenitors in the calibration set reduced this difference in prediction accuracy to 0.03. When only genotypes from the G0 and G1 generations were used in the calibration set and genotypes from G2 were used in the validation set (progeny validation), prediction accuracies ranged from 0.70 to 0.85.
This study suggests that the training of prediction models on parental populations can predict the genetic merit of the progeny with high accuracy: an encouraging result for the implementation of GS in the maritime pine breeding program. The use of genome-wide DNA markers to predict genomic estimated breeding values (GEBV), first proposed by Meuwissen et al. [1], has radically changed perspectives in molecular breeding. Breeders now have access to large numbers of single-nucleotide polymorphisms (SNPs). They have therefore focused their efforts on genomic selection (GS), which is based on a large set of markers expected to be in linkage disequilibrium (LD) with every QTL controlling the phenotype of interest. In comparison to classical marker-assisted selection, which uses a small set of well-characterized markers tracing a small number of quantitative trait loci (QTLs), each with a medium-to-large effect, GS offers the possibility of a higher genetic gain per unit of time [2–4]. Thus, with the availability of cost-effective genotyping platforms [5], the use of this approach has become widespread in the breeding of animals [4, 6] and plants [7, 8], including forest trees [9, 10]. GS requires the development of a predictive model with a calibration population for which both genotype and phenotype have been characterized. This model is then used to predict GEBV, from marker genotypes alone, in the targeted breeding populations. As in traditional selection based on estimated breeding values (EBV), prediction accuracy is a key issue in evaluations of the efficiency of GS strategies. The prediction accuracy of GS models is evaluated by assessing the correlation between the GEBV obtained with GS models and the EBV obtained by classical genetic evaluation based on progeny testing. 
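Concretely, this evaluation amounts to correlating two vectors of breeding values. A minimal numpy sketch with simulated numbers (not data from any real breeding program):

```python
import numpy as np

# Toy example: "true" EBV from progeny testing vs. GEBV from a GS model.
# The data are simulated; only the evaluation step itself is of interest.
rng = np.random.default_rng(42)
ebv = rng.normal(size=200)                      # EBV from classical evaluation
gebv = 0.8 * ebv + 0.6 * rng.normal(size=200)   # imperfect model predictions

accuracy = np.corrcoef(gebv, ebv)[0, 1]         # prediction accuracy (~0.8)
```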
Simulation studies, either general [1, 11–15], or species-based (maize [3], oil palm [16], barley [17], Japanese cedar [18]), have attempted to identify key factors affecting the accuracy of GEBV. In a review on dairy cattle, Hayes et al. [4] highlighted four major factors: i) the heritability of the target trait, ii) the genetic architecture of the trait (number and effect of underlying QTLs), iii) the level of LD between markers and QTLs in the reference and target populations, and iv) the size of the reference population, and the degree to which the reference and target populations are related. The statistical methods used to predict GEBV may also affect the accuracy of this prediction [19], but to a lesser extent. In forest tree breeding, the duration of a single cycle of selection-recombination is driven by the time at which flowering first occurs (e.g. 7–8 years in maritime pine) and the age at which early indirect selection for mature properties can be carried out (e.g. 10–12 years for total height and stem straightness in maritime pine). A full cycle therefore generally lasts more than two decades. In addition, the low–to-medium heritabilities of most complex traits, such as growth, stem form, and branching characteristics, limit the response to selection, and, thus, the expected genetic gain. GS may overcome these limitations, by decreasing breeding cycle duration and improving selection efficiency/intensity for traits with a low heritability, thereby increasing the efficiency of breeding strategies. Preliminary studies on major plantation forest trees (eucalyptus, spruces and pines) have given encouraging results [9, 10], with accuracies of up to 0.8 (Table 1), despite the low level of LD in these outcrossing species, which have large population sizes [20, 21], and low marker coverage (i.e. a few thousand loci). 
These studies showed that GS with DNA markers provided accuracies similar to those obtained for classical genetic evaluation with progeny testing (Table 1). Rather than capturing historical LD associations between markers and QTLs, this approach derives its prediction accuracy from better estimations of realized genomic relationships [22, 23]. The relatively small effective population sizes of the reference populations and validation within the same population clearly contributed to higher accuracies. Indeed, lower accuracies (around 0.5) were obtained for larger reference populations [24, 25] or when GS models were applied to target populations different from the reference population [26]. It is important to assess the prediction accuracy of GS models across generations, because recombination may modify marker-allele phases in subsequent generations, and because selection may change allele frequencies [10]. These effects may decrease GS accuracy over generations [11, 27]. The validation of GS models across generations, with assessment of the predictive ability of markers, is essential before the implementation of GS strategies in tree breeding. The marker-trait associations established in "parental" populations (the parents or preceding generations) should be validated in progeny populations (i.e., progeny validation) [28, 29]. To our knowledge, no study on forest tree species has yet used empirical data to address this issue. Indeed, in all the studies listed in Table 1, individuals of the same generation were split into calibration and validation sets for the evaluation of GS models. Table 1 List of genomic selection studies based on real data sets conducted on forest tree species. Studies are listed in chronological order of publication. This study is the last one listed Maritime pine (Pinus pinaster) is a major forest tree species in south-western Europe. A breeding program based on a recurrent selection strategy was initiated in France in the 1960s [30]. 
A base population of 635 founders (the G0 trees) was selected from the "Landes" ecotype (an ecotype found in South-West France) for growth (height and circumference) and stem straightness. This population was subjected to two cycles of breeding, testing and selection (i.e. the G1 and G2 generations). The potential of GS for use in maritime pine breeding is currently being evaluated alongside the implementation of a forward selection strategy with pedigree reconstruction [31]. A preliminary investigation based on a population of 661 individuals from the first two generations, with low marker coverage (2500 SNPs, i.e. ~1.39 markers/cM), showed the prediction accuracy of GS models to be about 0.50 for growth and stem straightness [25]. In this study, we first selected a reference population on the basis of the following criteria: i) high performance for the main traits of the breeding program, ii) limited effective population size, and iii) combining the three generations of the maritime pine breeding population. Simulations were carried out to optimize the set of individuals to be genotyped for genomic prediction. Finally, using the reference population with real phenotypic and genotypic data, we aimed: i) to compare the predictive power of SNP markers with that of the pedigree-based method, ii) to investigate the effect on prediction accuracy of pedigree depth and relatedness between the calibration and validation sets, and iii) to investigate the impact of the use of third-generation individuals as a validation set (progeny validation) on the prediction accuracy of GS models. Design of the reference population The reference population was designed in two steps, as summarized in Fig. 1. A pre-selection step based on pedigree and phenotype information was first applied to G2 individuals and their progenitors. 
Simulations were then used to select a subset of about 800 individuals (a priori on the basis of genotyping constraints) to maximize the expected genomic prediction accuracy. Strategy for selecting the reference population and validation methods for model evaluation. The reference population was designed in two steps. The first was based on breeding value and pedigree information and the second was based on the use of simulated data to optimize the population to be genotyped. The reference population was then used to evaluate the performance of prediction models with different validation methods Pre-selection of G2 individuals G2 individuals were pre-selected in series of polycross trials involving 414 half-sib families (identified mothers were crossed with a pollen mixture) and 27,265 G2 trees. Breeding values based exclusively on maternal pedigree (as the paternal pedigree was unknown) were estimated for height, stem circumference at breast height and stem straightness, in a mixed model framework. Two criteria were used to select a subset of G2 trees: i) an index combining the best linear unbiased predictions (BLUP) for volume and stem straightness (equal weighting) to select the best half-sib family, ii) a maximum of 40 half-sib families with a maximum contribution of a single founder (G0) of 0.15, to prevent the over-representation of a few founders and to give a limited status number (NS, an estimate of effective population size). This procedure resulted in the selection of 2038 G2 trees. Pedigree recovery with 63 SNP markers was carried out on these trees to identify the paternal parent and to check the maternal genotype (see Vidal et al. [31] for a description of the methodology). Maternal identity was confirmed and paternal parents (pollen donors) were identified for 1308 G2 individuals. At least one of the grandparents (G0 individuals) was unknown for 208 of the 1308 G2 individuals. 
We decided to select only G2 trees for which full pedigree information was available. Thus, 1100 G2 trees and their progenitors (78 G1 and 50 G0) were available for the design of the reference population on the basis of simulation data. Simulation to optimize the final selection of the reference population We used 4000 markers evenly distributed over a 1665 cM composite genetic map of maritime pine [25], including 2965 mapped positions. A gene-dropping algorithm developed in R [32] was used to generate the genotypes of the G1 and G2 offspring. Starting with a set of identified founder haplotypes in generation G0, this algorithm modeled the process of segregation and gamete association over the three generations resulting in known founder alleles at each marker position for each individual in the G2 population. The probability of recombination between adjacent markers was set according to the genetic distance between them. Marker states were assigned randomly to each founder allele, assuming an allele frequency of 0.5 for all markers. The trait of interest was modeled by assigning a non-zero QTL effect, assuming a normal distribution, to 100 random marker positions and setting the environmental error term to give a narrow-sense heritability of 0.3, corresponding to the observed heritability of the target traits [31]. Four methods were applied to the 1100 G2 plants, to establish a reference population of about 800 trees (G0, G1 and G2). In the first method, G2 trees were selected at random (the random method). The second method was based on sampling within the largest maternal half-sib families, with equal numbers of individuals selected from each half-sib family (the HS method). In the third method, G2 trees were sampled from the largest full-sib families, with a maximum of two individuals selected per family (the FS method). For the fourth method, we maximized the mean generalized coefficient of determination (CD method) [33, 34]. 
The CD method provides a measurement of the expected reliability of predictions based on the pedigree. Briefly, a specified number (eight in our case) of individuals with the highest CD values are removed one-by-one, with the individuals causing the largest decrease in mean CD being retained. This process is repeated until the desired number of individuals remain. We evaluated these four methods by simulating 100 replicates corresponding to 10 different datasets (simulated genotypes and phenotypes), each with 10 different samplings of the G2 generation. Status number (NS, [35]) was estimated as \( {N}_S=\frac{1}{2F} \), where \( F \) is the mean inbreeding value calculated from the realized kinship matrix; see the methods below. Phenotypic and genotypic data for the reference population Traits analyzed The estimated breeding values (EBV) for three different traits — circumference and height at 12 years of age and stem straightness at 8 years of age — were obtained from a meta-analysis based on the TREEPLAN framework [36]. The correlations between circumference and height (Spearman's correlation coefficient ρ = 0.61, p < 0.01) and between circumference and stem straightness (ρ = 0.45, p < 0.01) were moderate. A weaker correlation was observed between height and stem straightness (ρ = 0.36, p < 0.01, Additional file 1: Figure S1). EBV reliability was generally high (0.97 ± 0.02) for G0 and G1 individuals, and mean EBV reliability for the G2 population was 0.75. Parental effects on the EBV of individuals can be large and may introduce bias into genomic estimated breeding values. The BLUP method shrinks the breeding values towards the mean and reduces the variation. We addressed the issues of bias and reduced heterogeneity by deregressing the EBV of individuals, as suggested by Garrick et al. [37]. We used the heritabilities estimated from TREEPLAN evaluation for deregression: 0.17, 0.32 and 0.26 for circumference, height and stem straightness, respectively.
The resulting deregressed breeding values were used as pseudo-phenotypes for the genomic prediction analysis. Genotyping and linkage disequilibrium analysis The DNA extraction method and the Illumina Infinium array used to genotype the reference population have been described elsewhere [38]. SNP clustering was performed with GenomeStudio (Genotyping module V1.9, Illumina, San Diego, USA), with the manual checking of each SNP. One G2 individual, with a call rate below 0.98 and a 10 % GenCall score below 0.24, was removed. We analyzed 8411 SNP loci: genotyping failed for 2429 (low fluorescence intensity, GenTrain score below 0.35), 1539 were monomorphic and 4443 were polymorphic (52.8 %). The pattern of SNP inheritance was checked with MERLIN [39]. SNPs presenting an aberrant inheritance pattern or for which more than 2 % of values were missing were removed from subsequent analyses. For the remaining 4436 polymorphic SNPs, the mean GenTrain score was 77.7 %, the mean percentage of missing data was 0.05 % and the repeatability, based on eight duplicated genotypes, was greater than 99.9 %. For genomic prediction models, 4332 SNPs were retained on the basis of their minor allele frequency (MAF > 0.01). Genetic location on the P. pinaster composite map [40] was determined for 3962 SNPs (91.5 %, Additional file 1: Figure S2), corresponding to a total of 2548 contigs of the P. pinaster unigene [41]. The number of markers per linkage group ranged from 279 to 376, with a mean of 330, corresponding to 2.4 SNPs per cM. The intra-chromosomal LD between markers was calculated as \( r^2 \) with R software and expressed as a function of the genetic distance between markers. The effect of selection (differentiation between generations), resulting in changes in allele frequencies between generations, was assessed by calculating a fixation index (FST) [42] with the R package pegas [43].
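For readers unfamiliar with the r² statistic, here is a minimal numpy sketch of the pairwise computation. The genotypes are simulated, and the squared correlation of unphased genotype dosages is the usual composite approximation to haplotype-based r², not the exact estimator used by any particular package:

```python
import numpy as np

# Simulated 0/1/2 genotype dosages at two loci (not the real SNP data)
rng = np.random.default_rng(1)
n = 500
snp_a = rng.binomial(2, 0.4, size=n)
independent = rng.binomial(2, 0.4, size=n)
# snp_b copies snp_a ~70 % of the time, so the two loci are partially linked
snp_b = np.where(rng.random(n) < 0.7, snp_a, independent)

# r^2 as the squared Pearson correlation of genotype dosages
r2 = np.corrcoef(snp_a, snp_b)[0, 1] ** 2
```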
Methods for genomic prediction Data for genomic prediction models were handled in the R 3.2.2 environment [32] with the R packages synbreed [44] and BGLR [45]. The results were visualized with the ggplot2 package [46]. Genetic relationship matrices Kinship coefficients between individuals of the three-generation pedigree were estimated from pedigree and genomic data. Two expected additive genetic relationship matrices (matrix A) based on pedigree were derived. The first, AP, used only data for the maternal parents and corresponds to polymix breeding, in which only the maternal parents are known. For the second (AF), the full pedigree was used. G0 plants were considered to be unrelated and no population structure was identified [20]. In parallel to the pedigree-based matrices, a realized genomic relationship matrix (matrix G) was also calculated, as described by VanRaden [47]. $$ \mathbf{G}=\frac{\left(\mathbf{M}-\mathbf{P}\right)\left(\mathbf{M}-\mathbf{P}\right)'}{2\sum_i {p}_i\left(1-{p}_i\right)} $$ where \( \mathbf{M} \) and \( \mathbf{P} \) are two matrices of dimension n (number of individuals) × p (number of markers). \( \mathbf{M} \) is the matrix of gene content, with values of −1, 0, and 1 for one homozygote, the heterozygote, and the other homozygote, respectively. \( \mathbf{P} \) is the matrix of allele frequencies, with columns of the form \( 2\left({p}_i-0.5\right) \), where \( {p}_i \) is the observed allele frequency at marker i over all individuals. This scaling places \( \mathbf{G} \) on the same scale as the expected additive genetic relationship matrix derived from the pedigree. Statistical models for genomic prediction We used genomic BLUP (GBLUP) and the Bayesian LASSO [48] to predict genomic estimated breeding values.
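As a minimal numpy sketch of the VanRaden formula above (the 0/1/2 input coding and the function name are our assumptions, not the paper's implementation):

```python
import numpy as np

def vanraden_g(genotypes: np.ndarray) -> np.ndarray:
    """Realized genomic relationship matrix (VanRaden, method 1).

    `genotypes` is an n x p array of allele dosages coded 0/1/2.
    Subtracting 1 gives the paper's -1/0/1 matrix M; the row vector
    2 * (p_i - 0.5) plays the role of the matrix P of centered
    allele frequencies.
    """
    p = genotypes.mean(axis=0) / 2.0     # observed allele frequencies
    m = genotypes - 1.0                  # -1/0/1 coding
    z = m - 2.0 * (p - 0.5)              # M - P
    denom = 2.0 * np.sum(p * (1.0 - p))  # puts G on the pedigree scale
    return (z @ z.T) / denom

rng = np.random.default_rng(1)
geno = rng.integers(0, 3, size=(6, 200)).astype(float)
G = vanraden_g(geno)
print(G.shape, np.allclose(G, G.T))  # -> (6, 6) True
```

Because the centering uses observed allele frequencies, off-diagonal entries can be negative for pairs sharing fewer alleles than expected, exactly as reported for the reference population below.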
The classical genetic evaluation (BLUP) was used to predict genomic estimated breeding values (GEBV) with a mixed model approach in which the pedigree-based relationship matrix A was replaced with the realized genomic relationship matrix G. The methods have been described in detail elsewhere [25]; we summarize them briefly here. $$ \mathbf{y}=\mathbf{1}\mu +\mathbf{Z}\mathbf{u}+\mathbf{e} $$ where \( \mathbf{y} \) is the vector of pseudo-phenotypes (deregressed EBV) of dimension \( n\times 1 \), \( \mu \) is the overall mean fitted through a vector of ones, \( \mathbf{Z} \) is the \( n\times n \) design matrix for the random effects, \( \mathbf{u} \) is the \( n\times 1 \) vector of random tree effects with \( \mathbf{u}\sim N\left(0,\;\mathbf{G}{\sigma}_{u}^2\right) \), and \( \mathbf{e} \) is the \( n\times 1 \) vector of residuals with \( \mathbf{e}\sim N\left(0,\;{\mathbf{I}}_n{\sigma}_{e}^2\right) \). The diagonal elements of the residual variance-covariance matrix R were weighted by the reliabilities of the deregressed EBV.
For the prediction of GEBV, the G matrix derived from DNA markers is used to solve the mixed model equations: $$ \left[\begin{array}{cc}\mathbf{1}'\mathbf{1} & \mathbf{1}'\mathbf{Z}\\ \mathbf{Z}'\mathbf{1} & \mathbf{Z}'\mathbf{Z}+{\mathbf{G}}^{-1}\alpha \end{array}\right]\left[\begin{array}{c}\widehat{\mu}\\ \widehat{\mathbf{u}}\end{array}\right]=\left[\begin{array}{c}\mathbf{1}'\mathbf{y}\\ \mathbf{Z}'\mathbf{y}\end{array}\right] $$ where \( {\mathbf{G}}^{-1} \) is the inverse of the realized genomic relationship matrix and \( \alpha \) is the residual variance (\( {\sigma}_e^2 \)) divided by the variance associated with the random tree effect (\( {\sigma}_u^2 \)). This ratio is equal to the sum across loci, \( 2\sum {p}_i\left(1-{p}_i\right) \), times the ratio \( {\sigma}_e^2/{\sigma}_a^2 \), where \( {\sigma}_a^2 \) is the total additive genetic variance and \( {p}_i \) is the minor allele frequency at the i-th locus. The \( {\mathbf{G}}^{-1} \) matrix was replaced with the \( {\mathbf{A}}^{-1} \) matrix for predictions of breeding values from expected genetic relationships. GBLUP assumes that all marker effects are drawn from the same distribution, so that each marker has a small effect on the phenotype. We tested a marker-specific shrinkage model, the Bayesian LASSO, and compared it with GBLUP in terms of GEBV reliability.
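Under the simplifying assumption of one record per tree (so Z reduces to the identity), the mixed model equations above can be solved directly. The numpy sketch below is ours, not the paper's implementation (which used the synbreed package in R):

```python
import numpy as np

def gblup_solve(y: np.ndarray, G: np.ndarray, alpha: float):
    """Solve the mixed model equations for the overall mean and the
    GEBV, assuming one record per tree so that Z = I (our
    simplification). `alpha` is the ratio sigma_e^2 / sigma_u^2."""
    n = len(y)
    ones = np.ones((n, 1))
    Ginv = np.linalg.inv(G)
    lhs = np.block([[ones.T @ ones, ones.T],
                    [ones, np.eye(n) + Ginv * alpha]])
    rhs = np.concatenate([[y.sum()], y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[0], sol[1:]  # mu_hat, vector of GEBV

# With G = I and alpha = 1 the solution is mu = mean(y), u = (y - mean(y)) / 2,
# illustrating the shrinkage of tree effects towards the mean.
y = np.array([1.0, 2.0, 3.0, 4.0])
mu, u = gblup_solve(y, np.eye(4), alpha=1.0)
print(round(mu, 6), np.round(u, 6))  # -> 2.5 [-0.75 -0.25  0.25  0.75]
```

Passing a pedigree-derived A matrix instead of G gives the corresponding ABLUP predictions, which is exactly the substitution described in the text.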
The linear model has the form \( \mathbf{y}=\boldsymbol{\mu} +\mathbf{X}\boldsymbol{\beta} +\boldsymbol{\varepsilon} \), where \( \mathbf{X} \) (\( n\times p \)) is the incidence matrix of markers, \( \boldsymbol{\beta} \) (\( p\times 1 \)) is the vector of marker effects, and \( \boldsymbol{\varepsilon} \) (\( n\times 1 \)) is the random residual with \( \boldsymbol{\varepsilon} \sim N\left(0,\;{\mathbf{I}}_n{\sigma}_{\varepsilon}^2\right) \). The solutions for the marker effects are obtained as $$ {\widehat{\boldsymbol{\beta}}}_{L}=\underset{\boldsymbol{\beta}}{\arg\min}\left\{{\left|\mathbf{y}-\mathbf{X}\boldsymbol{\beta}\right|}^2+\lambda \sum_{i=1}^p\left|{\beta}_i\right|\right\} $$ The first term inside the curly brackets minimizes the error variance, while the second term penalizes the absolute size of the marker effects. The shrinkage of marker effects towards zero is marker-specific and regulated by the \( \lambda \) parameter [49]. The coefficients of uninformative markers are shrunk to exactly zero, reducing the complexity of the model; this property can be used as the basis of a model selection method. A scaled inverse chi-squared prior with \( {df}_{\varepsilon} \) degrees of freedom and scale parameter \( {S}_{\varepsilon} \) was assigned to the residual variance: \( {\sigma}_{\varepsilon}^2\sim {\chi}^{-2}\left({df}_{\varepsilon},\;{S}_{\varepsilon}\right) \). We used the same priors and rate parameters as Isik et al. [25] for the Bayesian LASSO regression coefficients. The vector \( {\boldsymbol{\beta}}_{L} \) is assumed to have a multivariate normal distribution with marker-specific prior variances, \( {\boldsymbol{\beta}}_{L}\sim N\left(0,\;\mathbf{T}{\sigma}_{e}^2\right) \), where \( \mathbf{T}=\mathrm{diag}\left({\tau}_1^2,\dots,{\tau}_q^2\right) \).
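For intuition about the penalty above, the sketch below fits the frequentist LASSO by cyclic coordinate descent. This is only an illustration of the shrinkage behaviour: the paper fits the Bayesian LASSO with the BGLR package, which samples marker effects from their posterior rather than optimizing this objective.

```python
import numpy as np

def soft_threshold(z: float, t: float) -> float:
    return np.sign(z) * max(abs(z) - t, 0.0)

def lasso_cd(X: np.ndarray, y: np.ndarray, lam: float,
             n_iter: int = 200) -> np.ndarray:
    """Cyclic coordinate descent for min_b (1/2)|y - Xb|^2 + lam*sum|b_j|
    (equivalent to the displayed objective up to a rescaling of lambda)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with marker j's current contribution removed
            resid = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ resid
            beta[j] = soft_threshold(rho, lam) / col_ss[j]
    return beta

# One causal marker out of ten: its effect survives the shrinkage,
# while the uninformative markers are driven to (essentially) zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ np.array([3.0] + [0.0] * 9)
b = lasso_cd(X, y, lam=1.0)
print(b[0] > 2.0, np.abs(b[1:]).max() < 0.5)  # -> True True
```

Increasing `lam` shrinks every coefficient further, and a sufficiently large penalty sets all of them to exactly zero, which is the model selection behaviour described in the text.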
We assigned the \( {\tau}_j^2 \) parameters independent and identically distributed exponential priors, \( {\tau}_j^2\sim Exp\left({\lambda}^2\right) \) for \( j=1,\dots,q \), where the parameter \( {\lambda}^2 \) is given a gamma prior distribution with hyper-parameters \( r \) (shape) and \( \delta \) (rate), giving \( {\lambda}^2\sim \mathrm{gamma}\left(r,\delta \right) \) [48, 50]. Definition of the calibration and validation sets and model evaluation Based on the reference population, two different validation methods were used to evaluate the effect of the structure of the calibration set on genomic prediction accuracies: subset validation and progeny validation (Fig. 1). The subset validation method, in which the G2 population was split into calibration and validation sets, evaluated the effect of the relatedness of the calibration and validation sets on prediction accuracy. Three different sampling strategies were used to sample 20 % of the G2 population to form the validation set: i) random selection of G2 trees (random), ii) selection of G2 trees from the same half-sib families, to obtain a low level of relatedness between the calibration and validation sets (S1), iii) sampling of G2 trees from different full-sib families, to obtain a high level of relatedness between the calibration and validation sets (S2). For each sampling strategy, two types of calibration sets were used to evaluate the effect of pedigree depth. The first was the remaining 80 % of the G2 population and the second was the remaining 80 % of the G2 population plus all progenitors (G0 and G1). Model fit statistics were obtained for 100 replications for each scenario. In addition to subset validation (different sampling approaches applied to G2 trees), we performed progeny validation to evaluate the prediction accuracy of GS models over generations.
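The three sampling strategies can be sketched as follows. The family mapping and the function below are hypothetical, intended only to show how an S1-style split keeps whole families out of the calibration set while an S2-style split spreads every family across both sets:

```python
import random
from collections import defaultdict

def split_validation(families, frac=0.2, strategy="random", seed=1):
    """Sample ~frac of the G2 trees as a validation set, mimicking the
    paper's three strategies. `families` maps tree id -> family id
    (an illustrative structure of ours, not the paper's data format).
    - "random": ignore family structure;
    - "S1": move whole families into validation, so calibration and
      validation share no family (low relatedness);
    - "S2": sample within every family, so each validation tree keeps
      sibs in the calibration set (high relatedness)."""
    rng = random.Random(seed)
    trees = sorted(families)
    target = int(round(frac * len(trees)))
    if strategy == "random":
        return set(rng.sample(trees, target))
    by_fam = defaultdict(list)
    for tree, fam in families.items():
        by_fam[fam].append(tree)
    if strategy == "S1":
        val = set()
        for fam in rng.sample(sorted(by_fam), len(by_fam)):
            if len(val) >= target:
                break
            val.update(by_fam[fam])
        return val
    if strategy == "S2":
        val = set()
        for members in by_fam.values():
            k = max(1, int(round(frac * len(members))))
            val.update(rng.sample(sorted(members), k))
        return val
    raise ValueError(f"unknown strategy: {strategy}")

families = {tree: tree // 5 for tree in range(100)}  # 20 families of 5 trees
v1 = split_validation(families, strategy="S1")
print(len({families[t] for t in v1}))  # whole families only -> 4
```

The calibration set in each case is simply the complement of the returned validation set within the G2 trees.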
The individuals of the G0 and G1 generations were used as the calibration set and the individuals of the G2 generation were used as the validation set. This second validation method was used to assess the accuracy of genomic prediction models across generations, with the model trained on ancestral generations (Gn, Gn-1, etc.) and validated on the progeny generation (Gn + 1). The prediction accuracy of GS models was estimated as the coefficient of correlation between the genomic estimated breeding values (GEBV) of the validation set and the EBV obtained by TREEPLAN evaluation. The prediction bias was calculated as the slope of the regression line between EBV and GEBV. A slope of b > 1 indicates deflated predictions and a slope of b < 1 indicates inflated predictions. For all pre-selection methods (Random, HS, FS and CD), use of the full-pedigree information (matrix AF) substantially increased the prediction accuracy of GS models (p < 0.05) over that for the partial pedigree based only on maternal information (matrix AP, Fig. 2a). Small but significant increases in prediction accuracy (0.03 on average, p < 0.05) were achieved by using GBLUP rather than AFBLUP. For example, for the HS selection method, the mean accuracies of genomic predictions were 0.53 for AFBLUP and 0.56 for GBLUP (Additional file 1: Table S1). For all relationship matrices, the CD method performed significantly better than the other three methods (Fig. 2a). However, the differences were small: the mean prediction accuracies for GBLUP were 0.54, 0.56, 0.55 and 0.56 for the Random, HS, FS and CD selection methods, respectively. Status number depended on the selection method used (Fig. 2b). The highest NS value was obtained for the CD selection method (25.1 on average). This value was significantly higher than those for the HS (19.8), FS (20.7) or Random (20.4) methods (Additional file 1: Table S1).
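The accuracy and bias statistics defined above can be computed with a short sketch (helper and variable names are ours, not the paper's code):

```python
import numpy as np

def accuracy_and_bias(ebv: np.ndarray, gebv: np.ndarray):
    """Prediction accuracy as the Pearson correlation between EBV and
    GEBV, and bias as the slope of the regression of EBV on GEBV
    (b > 1: deflated predictions; b < 1: inflated predictions)."""
    accuracy = np.corrcoef(ebv, gebv)[0, 1]
    slope = np.polyfit(gebv, ebv, 1)[0]
    return accuracy, slope

# Shrunken predictions: perfectly ranked (r = 1) but under-dispersed,
# so the regression slope exceeds one.
ebv = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
gebv = 0.5 * ebv + 1.0
r, b = accuracy_and_bias(ebv, gebv)
print(round(r, 3), round(b, 3))  # -> 1.0 2.0
```

The example shows why accuracy and bias are reported separately: a model can rank candidates perfectly while still over- or under-dispersing its predictions.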
We therefore used the CD method to select the reference population, as it gave the highest prediction accuracy and NS. Prediction accuracy (a) and status number (b) based on simulated data. Results are given for four methods for selecting G2 individuals (Random, HS: half-sib family, FS: full-sib family and CD: coefficient of determination). The prediction accuracy was calculated as Pearson's correlation coefficient for the relationship between GEBV and true breeding values for the validation set assessed by the cross-validation method. The results obtained with APBLUP are in orange, those obtained with AFBLUP are in green, and those obtained with GBLUP are shown in purple. A Tukey boxplot is used to represent the data Characterization of the reference population The reference population selected by the CD method comprised 818 individuals from the three generations (Additional file 1: Figure S3): 710 G2 trees and all their progenitors (62 G1 and 46 G0). The G2 individuals came from 35 maternal half-sib families, corresponding to 355 full-sib families. The number of individuals per half-sib family ranged from 13 to 34, with a mean value of 22.2. As expected, given the low level of relatedness in the population (founder G0 trees are not related), a large majority of the kinship coefficients estimated from the pedigree were zero. The coefficients obtained were grouped into 11 classes and ranged from 0 to 0.75 (Fig. 3). By using markers, we were able to estimate the proportion of the genome shared by different individuals. The relationships predicted from markers were more consistent with the actual relationships than the expected genetic relationships derived from the pedigree (Fig. 3). Unlike the expected genetic relationships derived from the pedigree, the realized genetic relationships in the G matrix were continuously distributed, with values between −0.18 and 0.77 (Fig. 3). 
Some of the realized genetic relationships were negative, suggesting that some individuals shared fewer alleles than expected on the basis of allele frequencies. Conversely, some pairs of nominally unrelated individuals had small positive coefficients, owing to the sharing of more alleles than expected from allele frequencies. Comparison between expected and realized genetic relationship coefficients. Expected additive genetic relationships from the pedigree (top panel) and realized genetic relationships from SNP markers (bottom panel), for the reference population The extent of LD in the reference population was estimated by calculating r² from 3962 markers mapped onto the P. pinaster composite map. A rapid decrease in intra-chromosomal LD was observed for an inter-marker distance of about 5 cM on all linkage groups (Additional file 1: Figure S4). The overall LD was close to zero (average r² = 0.016) and only a few marker pairs (0.5 %) had r² values greater than 0.4. Most of the markers concerned (96.5 %) were physically linked (on the same contig) or genetically linked (less than 5 cM apart on the composite map). The remaining markers displaying high levels of LD (2.5 %) probably reflected a bias in composite linkage map construction rather than true long-distance LD, as suggested by their positions on component maps. Changes in allele frequencies were observed between G0 and G1, with an FST value greater than 0.05 for 19 SNPs, mostly located on chromosomes 5, 6, 9 and 12 (Additional file 1: Figure S5). By contrast, no difference was observed between G1 and G2. Overall, almost no differentiation was found between generations, with a global FST < 0.01 between G0 and G1 and between G1 and G2.
Prediction accuracy of genomic selection models for the reference population Effect of calibration set structure on accuracy The mean prediction accuracies of models using 80 % of the G2 for the calibration set ranged from 0.52 to 0.87, depending on the trait and the scenario considered (Table 2). When G0 and G1 trees were added to the calibration set, mean prediction accuracies ranged from 0.66 to 0.91. Whatever the calibration set or trait considered, mean prediction accuracies for models using only pedigree information (ABLUP) were higher than those for models using marker information (GBLUP or B-LASSO, Additional file 1: Figure S6). This difference was larger (up to 0.1 larger on average) for the scenario including the progenitors of the G2 trees (generations G0 and G1) in the calibration set, suggesting that it is important to use a deep pedigree to increase prediction accuracy. As expected, when the level of relatedness between the calibration and validation sets was low (S1), mean prediction accuracy was lower than that for random sampling or S2 sampling (Table 2, Additional file 1: Figure S6). Overall, the prediction accuracy of S2 was about 0.17 lower than that for S1, for all traits and all models, if only G2 trees were used for the calibration set. Inclusion of the progenitors of the G2 trees in the calibration set resulted in a much smaller difference in prediction accuracy between S1 and S2 (maximum difference of 0.03). For random or S2 sampling, the gain in prediction accuracy achieved by adding the progenitors of G2 trees to the calibration set was smaller (0.03 and 0.02 on average, for random and S2 sampling, respectively) than that for S1 sampling (0.12 on average). However, not all traits followed this general trend. 
For example, the increase in prediction accuracy for stem straightness was close to zero when progenitors of the G2 trees were added to the calibration population, for the GBLUP and B-LASSO models (Table 2, Additional file 1: Figure S6). Table 2 Comparison of prediction accuracies across three sampling and two calibration strategies. Three sampling strategies for the selection of 20 % of the G2 population as the validation set were applied: random, S1: between half-sib families and S2: within full-sib families. Two calibration strategies were used for each sampling strategy: for predictions for the 20 % of the G2 population selected, we used either the remaining 80 % of the G2 population alone or the remaining 80 % plus their progenitors (G0 and G1) as the calibration set. The mean prediction accuracy (and range) for models based on pedigree information (ABLUP) and marker information (GBLUP and B-LASSO), and the results for the three traits studied (tree diameter, height and stem straightness) are presented Predictive value of markers across generations with progeny validation The prediction accuracies of models using only the G0 and G1 genotypes for the calibration set and only G2 for the validation set ranged from 0.70 to 0.85, depending on the trait and the method considered (Fig. 4, Additional file 1: Table S2). For all traits, ABLUP had a similar or slightly higher (up to 0.03) prediction accuracy than genomic predictions (GBLUP and B-LASSO). For all models and all three traits (except for circumference with the B-LASSO model), a slope greater than one was observed, indicating that the GEBV were deflated (less dispersed) relative to the EBV. The B-LASSO model had the lowest bias: 0.99, 1.07 and 1.06 for circumference, height and stem straightness, respectively. Conversely, ABLUP had the highest bias, at 1.15, 1.22 and 1.36 for circumference, height and stem straightness, respectively (Fig. 4, Additional file 1: Table S2).
Relationship between predicted breeding values (x-axis) and empirical breeding values (y-axis) for the progeny validation method. The three traits (circumference, height and stem straightness) and three different models (ABLUP, GBLUP and B-LASSO) are represented. The prediction accuracy (r) of genomic prediction models evaluated on the validation set (G2 genotypes are shown as open green circles) is indicated. Closed circles represent the calibration set with G0 genotypes (n = 46) in blue and G1 genotypes (n = 62) in orange Factors affecting the prediction accuracy of GS models Our reference population was specifically designed to maximize prediction accuracy given the available genetic material. In contrast to previous GS studies on forest trees, we used simulation to select individuals on the basis of an explicit criterion maximizing the expected prediction accuracy for the population. As a result, we obtained medium-to-high prediction accuracies for all three traits studied (0.52 to 0.91), consistent with published results for forest tree species (Table 1). Indeed, despite differences in species, population structure and GS models between studies, similar accuracies were reported for height in eucalyptus hybrids [26] and loblolly pine [51], with values ranging from 0.66 to 0.79 for eucalyptus and from 0.64 to 0.74 for loblolly pine, depending on effective population size (Ne) or environment. No clear trend was observed for the relationship between accuracy and trait heritability. Using a large number of traits, resulting in a wider range of heritability, Resende et al. [52] reported a strong correlation (R² = 0.79) between predictive ability and narrow-sense heritability. Compared with a previous study on maritime pine with a broader genetic basis [25], our results showed higher prediction accuracies for the same traits.
The smaller effective population size in this study, measured as status number (NS = 25), than in a previous study (NS ≈ 100) and the inclusion of multiple generations might account for the higher prediction accuracies in this study. Indeed, effective population size, which is directly related to level of LD and relatedness in populations, is known to be an important factor determining GS accuracy [10]. The importance of effective population size for prediction accuracy was highlighted by Resende, MDV et al. [26], in a study using empirical datasets for Eucalyptus with contrasting effective population sizes: Ne = 11 and Ne = 51. The level of relatedness between the calibration and validation sets also affected GEBV estimates. We found a 0.17 difference in prediction accuracy between low (S1) and high (S2) levels of relatedness. Our findings are consistent with previous results for white spruce [24] and mice [53]. Similar results have been reported for eucalyptus, in which the GS model was applied to populations other than that used to build the prediction model [26]. In cases of a higher marker density, yielding a more stable linkage phase between markers and QTLs across populations, population-specific models have also been described [54]. Similarly, Hayes et al. [55] reported an accuracy close to zero for the use of models developed for the Holstein breed to predict GEBVs for the Jersey breed, and vice versa. Thus, in the presence of short-distance LD, as in the maritime pine population in this study, the relatedness of the calibration and validation sets may be the main driver of prediction accuracy [22]. Comparison between pedigree- and marker-based models Given all the possible mechanisms separating genomic variants, such as SNPs, from phenotype expression and the efforts required to identify them, one of the main issues in GS studies is demonstrating the predictive value of markers relative to conventional BLUP. 
In this study, regardless of the scenario used, the model using pedigree information (ABLUP) had a higher prediction accuracy than marker-based models. The marker density (2.4 SNPs per cM) used to predict GEBV may account for this difference. Indeed, simulation studies have suggested that there may be a positive asymptotic relationship between marker density and prediction accuracy [14, 22, 56]. Using a deterministic approach, Grattapaglia [10] showed that the minimal density at which marker-based models achieve accuracies similar to those of ABLUP was 2–3 SNPs per cM for an effective population size below 60. In addition, our reference population selection strategy may also have reduced the additional gain of information provided by molecular markers relative to the pedigree. Indeed, one of the steps in the selection process was pedigree recovery, which improved the estimation of BVs [31]. Indeed, Munoz et al. [57] reported that using the G matrix to correct the pedigree and re-estimate EBVs increased prediction accuracy. In the presence of pedigree errors, which are frequently reported in tree breeding programs [31, 58, 59], the differences in prediction accuracy between ABLUP and GBLUP observed in previous GS studies may be biased. However, our results are consistent with previous findings for forest trees based on simulated [14, 18] or empirical [24, 26, 60, 61] data, with conventional BLUP having an accuracy similar to or slightly higher than that of GS models, particularly for traits with a low heritability. The genetic gain per unit of time of the GS approach over conventional BLUP would therefore be dependent solely on the decrease in breeding cycle length. This decrease in breeding cycle length raises questions about the loss of genetic variation and the maintenance of long-term genetic gain relative to conventional BLUP [62–64]. 
GS accuracy over generations This study is novel because, unlike previous empirical studies on forest trees, we assessed the predictive value of markers across generations, rather than splitting a single population in two for model development and validation [27]. GS in forest trees is likely to be used to select progeny within families without the need for progeny testing, to reduce breeding cycle length. In this case, GS evaluation must be carried out with the progeny population. During the breeding process, recombination between haplotypes should decrease the marker-QTL linkage phase. As a result, prediction accuracy would be expected to decrease over generations [11, 17]. In this study, we assessed the predictive value of the markers, using the parents (G1) and grandparents (G0) as the training set, with validation of the model on the descendants (G2). Interestingly, prediction accuracy remained high (0.70 to 0.85, depending on the trait considered) in the validation set. These accuracies were very similar to those estimated by subset validation with a high level of relatedness between the calibration and validation sets (S2), although the calibration set was larger in this second case (567 vs. 108). These results are consistent with those of Sallam et al. [28] for a five-generation population of barley and with findings for oat breeding lines and cultivars from distant generations [65]. Indeed, both studies reported consistent prediction accuracies over generations for most traits. In sugar beet, Hofheinz et al. [29] reported that prediction accuracy was similar across generations for sugar content but that it decreased by 0.4 for molasses loss. These results suggest that the predictive value of markers across generations is sensitive to the genetic architecture of the trait. Marker density was low in this study and in the three studies described above. 
However, a larger number of markers should become available in the near future, because further decreases in the cost of genotyping are anticipated. Additional markers will, therefore, probably be included in GS models over generations to maintain the accuracy of GEBV at an operational level [64]. When progenitors (G0 and G1) of the G2 population were included in calibration models, differences in prediction accuracy between low (S1) and high (S2) levels of relatedness were less than 0.03. Moreover, a slight increase in prediction accuracy was observed for all scenarios, highlighting the importance of genotyping the ancestral populations, which are generally conserved in tree breeding programs, to increase prediction accuracy. Simulation studies have also highlighted the importance of including multiple generations in the calibration set, to update the prediction equation [18, 66]. Indeed, a simulation study carried out on Cryptomeria japonica trees generated over a period of 60 years showed that GS outperformed phenotypic selection only if the GS model was updated [18]. Sallam et al. [28] reported contrasting results for empirical data from barley, for which the inclusion of previous generations increased prediction accuracy for some traits, but decreased it for others. Prospects for the use of GS in the maritime pine breeding program The maritime pine breeding program follows a recurrent selection scheme, with breeding values estimated from polycross and bi-parental progeny trials. The genetic gain achieved in the released varieties over the first two generations was estimated at 30 % for both growth and stem straightness. The improved varieties generated by this program in the future will need to be adapted to predicted changes in climate, to pest and disease outbreaks and to the demand for diversified wood-based products. The major challenge faced in this breeding program will therefore be the integration of new traits to deliver suitable varieties.
With the rapid decrease in genotyping costs and the promising results obtained for forest trees (Table 1), GS could prove an essential tool for addressing these challenges and overcoming the limitations of marker-assisted selection [27, 67]. One of the main advantages of GS is that it can be included in the framework of current genetic evaluation. Indeed, the currently used pedigree-based BLUP method could be replaced with the "single-step" GS strategy [68] with only minimal changes. This strategy is based on the integration of both genotyped and ungenotyped individuals into the genetic evaluation through a hybrid pedigree-genomic relationship matrix [69, 70]. As an increasing number of individuals are genotyped at higher marker densities, the information obtained could be used to decrease the error rate in pedigrees. By eliminating pedigree errors and adding more information (concerning the father), this method should increase the accuracy of genetic evaluation [31, 57]. In addition, GS on the progeny population should make it possible to capture the Mendelian segregation effect within families. In forest trees, crossing can generate large numbers of offspring. In the absence of GS models, all the offspring are considered to have the same mid-parent BV at the seed or seedling stage (before progeny testing) [71]. The challenge is thus to select the superior plants without progeny testing. GS models can meet this challenge, by selecting a subset of progeny on the basis of their GEBV. This should greatly shorten the breeding cycle and decrease the costs of progeny testing, which is expensive and time-consuming for forest trees. Furthermore, a more complete knowledge of the genotype of all candidates for selection should improve the management of genetic diversity and inbreeding depression.
However, shortening the breeding cycle in maritime pine will require combining GS with artificial flower induction by top-grafting, as in loblolly pine [51], or by growth regulators, as suggested for Eucalyptus [72] and white spruce [24]. These techniques have already been successfully implemented in these species [73, 74], but not yet in maritime pine. We selected a reference population covering three generations, with a limited status number (NS = 25) and a marker density of 2.4 SNPs per cM, for assessment of the prediction accuracy of GS models within and across generations. We studied three major traits used in maritime pine breeding: circumference, height and stem straightness. These three traits have low heritabilities, from 0.17 to 0.32. Prediction accuracies of up to 0.85 were obtained with progeny validation, confirming the potential of GS to predict progeny performance for low-heritability traits. However, the pedigree-based model had prediction accuracies similar to or greater than those of marker-based models. The optimization of current breeding strategies based on polymix breeding will therefore be required to enhance the potential of the GS approach in the maritime pine breeding program. A, expected additive genetic relationship matrix; B-LASSO, Bayesian least absolute shrinkage and selection operator; BLUP, best linear unbiased prediction; CD, coefficient of determination; EBV, estimated breeding value; FS, full-sibs; FST, fixation index; G, realized genomic relationship matrix; G0, G1 and G2, generations of the breeding program; GEBV, genomic estimated breeding value; GS, genomic selection; HS, half-sibs; LD, linkage disequilibrium; Ne, effective population size; NS, status number; QTL, quantitative trait loci
Meuwissen THE, Hayes BJ, Goddard ME. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001;157(4):1819–29.
Heffner EL, Lorenz AJ, Jannink J-L, Sorrells ME.
Plant breeding with genomic selection: gain per unit time and cost. Crop Sci. 2010;50(5):1681–90.
Bernardo R, Yu J. Prospects for genomewide selection for quantitative traits in Maize. Crop Sci. 2007;47(3):1082–90.
Hayes BJ, Bowman PJ, Chamberlain AJ, Goddard ME. Invited review: genomic selection in dairy cattle: progress and challenges. J Dairy Sci. 2009;92(2):433–43.
Thomson MJ. High-throughput SNP genotyping to accelerate crop improvement. Plant Breed Biotechnol. 2014;2(3):195–212.
Goddard ME, Hayes BJ. Genomic selection. J Anim Breed Genet. 2007;124(6):323–30.
Jannink J-L, Lorenz AJ, Iwata H. Genomic selection in plant breeding: from theory to practice. Brief Funct Genomics. 2010;9(2):166–77.
Heslot N, Jannink J-L, Sorrells ME. Perspectives for genomic selection applications and research in plants. Crop Sci. 2015;55(1):1–12.
Isik F, Kumar S, Martínez-García PJ, Iwata H, Yamamoto T. Chapter Three - Acceleration of Forest and Fruit Tree Domestication by Genomic Selection. In: Plomion C, Adam-Blondon A-F, editors. Advances in Botanical Research, vol. 74. Academic; 2015. p. 93–124. doi:10.1016/bs.abr.2015.05.002.
Grattapaglia D. Breeding Forest Trees by Genomic Selection: Current Progress and the Way Forward. In: Tuberosa R, Graner A, Frison E, editors. Genomics of Plant Genetic Resources. Springer: Netherlands; 2014. p. 651–82.
Habier D, Fernando RL, Dekkers JCM. The impact of genetic relationship information on genome-assisted breeding values. Genetics. 2007;177(4):2389–97.
Calus MPL, Meuwissen THE, de Roos APW, Veerkamp RF. Accuracy of genomic selection using different methods to define haplotypes. Genetics. 2008;178(1):553–61.
Piyasatian N, Fernando RL, Dekkers JCM. Genomic selection for marker-assisted improvement in line crosses. Theor Appl Genet. 2007;115(5):665–74.
Grattapaglia D, Resende MV. Genomic selection in forest tree breeding. Tree Genet Genomes. 2011;7(2):241–55.
Pszczola M, Strabel T, Mulder HA, Calus MPL. Reliability of direct genomic values for animals with different relationships within and to the reference population. J Dairy Sci. 2012;95(1):389–400.
Wong CK, Bernardo R. Genomewide selection in oil palm: increasing selection gain per unit time and cost with small populations. Theor Appl Genet. 2008;116(6):815–24.
Zhong S, Dekkers JCM, Fernando RL, Jannink J-L. Factors affecting accuracy from genomic selection in populations derived from multiple inbred lines: a Barley case study. Genetics. 2009;182(1):355–64.
Iwata H, Hayashi T, Tsumura Y. Prospects for genomic selection in conifer breeding: a simulation study of Cryptomeria japonica. Tree Genet Genomes. 2011;7(4):747–58.
Heslot N, Yang H-P, Sorrells ME, Jannink J-L. Genomic selection in plant breeding: a comparison of models. Crop Sci. 2012;52(1):146–60.
Plomion C, Chancerel E, Endelman J, Lamy J-B, Mandrou E, Lesur I, Ehrenmann F, Isik F, Bink M, van Heerwaarden J, et al. Genome-wide distribution of genetic diversity and linkage disequilibrium in a mass-selected population of maritime pine. BMC Genomics. 2014;15(1):171.
Slavov GT, DiFazio SP, Martin J, Schackwitz W, Muchero W, Rodgers-Melnick E, Lipphardt MF, Pennacchio CP, Hellsten U, Pennacchio LA, et al. Genome resequencing reveals multiscale geographic structure and extensive linkage disequilibrium in the forest tree Populus trichocarpa. New Phytol. 2012;196(3):713–25.
Habier D, Fernando RL, Garrick DJ. Genomic BLUP decoded: a look into the black box of genomic prediction. Genetics. 2013;194(3):597–607.
Hickey JM. Sequencing millions of animals for genomic selection 2.0. J Anim Breed Genet. 2013;130(5):331–2.
Beaulieu J, Doerksen T, Clement S, MacKay J, Bousquet J. Accuracy of genomic selection models in a large population of open-pollinated families in white spruce. Heredity. 2014.
Isik F, Bartholomé J, Farjat A, Chancerel E, Raffin A, Sanchez L, Plomion C, Bouffier L. Genomic selection in maritime pine. Plant Sci. 2016;242:108–19.
Resende MDV, Resende MFR, Sansaloni CP, Petroli CD, Missiaggia AA, Aguiar AM, Abad JM, Takahashi EK, Rosado AM, Faria DA, et al. Genomic selection for growth and wood quality in Eucalyptus: capturing the missing heritability and accelerating breeding for complex traits in forest trees. New Phytol. 2012;194(1):116–28.
Isik F. Genomic selection in forest tree breeding: the concept and an outlook to the future. New For. 2014;45(3):379–401.
Sallam AH, Endelman JB, Jannink J-L, Smith KP. Assessing genomic selection prediction accuracy in a dynamic Barley breeding population. The Plant Genome. 2015;8(1).
Hofheinz N, Borchardt D, Weissleder K, Frisch M. Genome-based prediction of test cross performance in two subsequent breeding cycles. Theor Appl Genet. 2012;125(8):1639–45.
Illy G. Recherches sur l'amélioration génétique du Pin maritime. Ann Sci For. 1966;1966:765–948.
Vidal M, Plomion C, Harvengt L, Raffin A, Boury C, Bouffier L. Paternity recovery in two maritime pine polycross mating designs and consequences for breeding. Tree Genet Genomes. 2015;11(5):1–13.
R Core Team. R: A Language and Environment for Statistical Computing. Vienna: R Foundation for Statistical Computing; 2015. https://www.r-project.org/.
Laloë D. Precision and information in linear models of genetic evaluation. Genet Sel Evol. 1993;25(6):1–20.
Laloë D, Phocas F, Ménissier F. Considerations on measures of precision and connectedness in mixed linear models of genetic evaluation. Genet Sel Evol. 1996;28(4):1–20.
Lindgren D, Gea L, Jefferson P. Loss of genetic diversity monitored by status number. Silvae genetica. 1996;45(1):52–8.
McRae T, Dutkowski G, Pilbeam D, Powell M, Tier B. Genetic evaluation using the TREEPLAN system. Charleston: IUFRO; 2004.
Garrick D, Taylor J, Fernando R. Deregressing estimated breeding values and weighting information for genomic regression analyses. Genet Sel Evol. 2009;41(1):55.
Plomion C, Bartholomé J, Lesur I, Boury C, Rodríguez-Quilón I, Lagraulet H, Ehrenmann F, Bouffier L, Gion JM, Grivet D, et al. High-density SNP assay development for genetic analysis in maritime pine (Pinus pinaster). Mol Ecol Resour. 2016;16(2):574–87. Abecasis GR, Cherny SS, Cookson WO, Cardon LR. Merlin - rapid analysis of dense genetic maps using sparse gene flow trees. Nat Genet. 2002;30(1):97–101. de Miguel M, Bartholomé J, Ehrenmann F, Murat F, Moriguchi Y, Uchiyama K, Ueno S, Tsumura Y, Lagraulet H, de Maria N, et al. Evidence of intense chromosomal shuffling during conifer evolution. Genome Biol Evol. 2015;7(10):2799–809. Canales J, Bautista R, Label P, Gómez-Maldonado J, Lesur I, Fernández-Pozo N, Rueda-López M, Guerrero-Fernández D, Castro-Rodríguez V, Benzekri H, et al. De novo assembly of maritime pine transcriptome: implications for forest breeding and biotechnology. Plant Biotechnol J. 2014;12(3):286–99. Weir BS, Cockerham CC. Estimating F-statistics for the analysis of population structure. Evolution. 1984;38(6):1358–70. Paradis E. pegas: an R package for population genetics with an integrated–modular approach. Bioinformatics. 2010;26(3):419–20. Wimmer V, Albrecht T, Auinger H-J, Schön C-C. synbreed: a framework for the analysis of genomic prediction data using R. Bioinformatics. 2012;28(15):2086–7. Pérez P, de los Campos G. Genome-wide regression and prediction with the BGLR statistical package. Genetics. 2014;198(2):483–95. Wickham H. ggplot2: Elegant Graphics for Data Analysis. Springer Publishing Company, Incorporated; 2009. doi: 10.1007/978-0-387-98141-3. VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91(11):4414–23. Pérez P, de los Campos G, Crossa J, Gianola D. Genomic-enabled prediction based on molecular markers and pedigree using the Bayesian linear regression package in R. The Plant Genome. 2010;3(2):106–16. Park T, Casella G. The Bayesian lasso. J Am Stat Assoc. 
2008;103(482):681–6. Gianola D. Priors in whole-genome regression: the Bayesian alphabet returns. Genetics. 2013;194(3):573–96. Resende MFR, Muñoz P, Acosta JJ, Peter GF, Davis JM, Grattapaglia D, Resende MDV, Kirst M. Accelerating the domestication of trees using genomic selection: accuracy of prediction models across ages and environments. New Phytol. 2012;193(3):617–24. Resende MFR, Muñoz P, Resende MDV, Garrick DJ, Fernando RL, Davis JM, Jokela EJ, Martin TA, Peter GF, Kirst M. Accuracy of genomic selection methods in a standard data set of loblolly pine (Pinus taeda L.). Genetics. 2012;190(4):1503–10. Legarra A, Robert-Granié C, Manfredi E, Elsen J-M. Performance of genomic selection in mice. Genetics. 2008;180(1):611–8. Windhausen VS, Atlin GN, Hickey JM, Crossa J, Jannink J-L, Sorrells ME, Raman B, Cairns JE, Tarekegne A, Semagn K, et al. Effectiveness of genomic prediction of maize hybrid performance in different breeding populations and environments. G3 Genes Genom Genet. 2012;2(11):1427–36. Hayes BJ, Bowman P, Chamberlain A, Verbyla K, Goddard M. Accuracy of genomic breeding values in multi-breed dairy cattle populations. Genet Sel Evol. 2009;41(1):51. Solberg TR, Sonesson AK, Woolliams JA, Meuwissen TH. Genomic selection using different marker types and densities. J Anim Sci. 2008;86(10):2447–54. Munoz PR, Resende MFR, Huber DA, Quesada T, Resende MDV, Neale DB, Wegrzyn JL, Kirst M, Peter GF. Genomic relationship matrix for correcting pedigree errors in breeding populations: impact on genetic parameters and genomic selection accuracy. Crop Sci. 2014;54(3):1115–23. Hansen OK, Nielsen UB. Microsatellites used to establish full pedigree in a half-sib trial and correlation between number of male strobili and paternal success. Ann For Sci. 2010;67(7):703. Kumar S, Richardson TE. Inferring relatedness and heritability using molecular markers in radiata pine. Mol Breed. 2005;15(1):55–64. Beaulieu J, Doerksen T, MacKay J, Rainville A, Bousquet J. 
Genomic selection accuracies within and between environments and small breeding groups in white spruce. BMC Genomics. 2014;15(1):1048. Gamal El-Dien O, Ratcliffe B, Klapste J, Chen C, Porth I, El-Kassaby Y. Prediction accuracies for growth and wood attributes of interior spruce in space using genotyping-by-sequencing. BMC Genomics. 2015;16(1):370. Bastiaansen J, Coster A, Calus M, van Arendonk J, Bovenhuis H. Long-term response to genomic selection: effects of estimation method and reference population structure for different genetic architectures. Genet Sel Evol. 2012;44(1):3. Jannink J-L. Dynamics of long-term genomic selection. Genet Sel Evol. 2010;42(1):35. Goddard M. Genomic selection: prediction of accuracy and maximisation of long term response. Genetica. 2008;136(2):245–57. Asoro FG, Newell MA, Beavis WD, Scott MP, Jannink J-L. Accuracy and training population design for genomic selection on quantitative traits in Elite North American Oats. Plant Gen. 2011;4(2):132–44. Sonesson A, Meuwissen T. Testing strategies for genomic selection in aquaculture breeding programs. Genet Sel Evol. 2009;41(1):37. Muranty H, Jorge V, Bastien C, Lepoittevin C, Bouffier L, Sanchez L. Potential for marker-assisted selection for forest tree breeding: lessons from 20 years of MAS in crops. Tree Genet Genomes. 2014;1–20. Legarra A, Christensen OF, Aguilar I, Misztal I. Single step, a general approach for genomic selection. Livest Sci. 2014;166:54–65. Aguilar I, Misztal I, Johnson DL, Legarra A, Tsuruta S, Lawlor TJ. Hot topic: a unified approach to utilize phenotypic, full pedigree, and genomic information for genetic evaluation of Holstein final score1. J Dairy Sci. 2010;93(2):743–52. Legarra A, Aguilar I, Misztal I. A relationship matrix including full pedigree and genomic information. J Dairy Sci. 2009;92(9):4656–63. Zapata-Valenzuela J, Whetten RW, Neale D, McKeand S, Isik F. 
Genomic estimated breeding values using genomic relationship matrices in a cloned population of loblolly pine. G3 Genes Genom Genet. 2013;3(5):909–16. Grattapaglia D. Marker-assisted selection in Eucalyptus. Rome: Food and Agriculture Organization of the United Nations (FAO); 2007. Beaulieu J, Deslauriers M, Daoust G. Flower induction treatments have no effects on seed traits and transmission of alleles in Picea glauca. Tree Physiol. 1998;18(12):817–21. Griffin AR, Whiteman P, Rudge T, Burgess IP, Moncur M. Effect of paclobutrazol on flower-bud production and vegetative growth in two species of Eucalyptus. Can J For Res. 1993;23(4):640–7. Zapata-Valenzuela J, Isik F, Maltecca C, Wegrzyn J, Neale D, McKeand S, Whetten R. SNP markers trace familial linkages in a cloned population of Pinus taeda—prospects for genomic selection. Tree Genet Genomes. 2012;8(6):1307–18. Ratcliffe B, El-Dien OG, Klapste J, Porth I, Chen C, Jaquish B, El-Kassaby YA. A comparison of genomic selection models across time in interior spruce (Picea engelmannii × glauca) using unordered SNP imputation methods. Heredity. 2015. The authors would like to thank the experimental unit of INRA Pierroton (UE 0570) and the GIS "Groupe Pin Maritime du Futur" for planting the field trials and making the necessary measurements. This work was supported by the European Union Seventh Framework Programme (Procogen, no. 289841) and the INRA SelGen metaprogram. JB was awarded a postdoctoral fellowship from the "Conseil Général des Landes". Information concerning the SNP array used to genotype our reference population is available in [38]. The datasets supporting the conclusions of this article are available upon request. CB extracted the DNA. LB, MV and JVH selected the reference population. JB performed the analysis and wrote the first draft of the manuscript. LB, CP, FI and JVH were involved in the writing and editing of the manuscript. LB designed and coordinated the study. 
All authors have read and approved the manuscript. BIOGECO, INRA, Univ. Bordeaux, 33610, Cestas, France Jérôme Bartholomé, Christophe Boury, Marjorie Vidal, Christophe Plomion & Laurent Bouffier Biometris, Wageningen University and Research Centre, NL-6700 AC, Wageningen, The Netherlands Joost Van Heerwaarden Department of Forestry and Environmental Resources, North Carolina State University, Raleigh, NC, USA Fikret Isik FCBA, Biotechnology and Advanced Silviculture Department, Genetics & Biotechnology Team, 33610, Cestas, France Marjorie Vidal Jérôme Bartholomé Christophe Boury Christophe Plomion Laurent Bouffier Correspondence to Laurent Bouffier. Figure S1. Scatter plots (lower diagonal), histograms (diagonal) and correlations, with their significance (H0: \( r=0 \), upper diagonal), between breeding values for the traits: circumference, height and stem straightness. Individuals from the G0 generation are in blue, G1 in orange and G2 in green. Figure S2. P. pinaster composite map [40]. Markers in red correspond to 3965 of the 4335 SNPs used for genomic prediction analysis. Figure S3. Pedigree of the 818 trees comprising the reference population (NS = 25) with the following numbers of individuals per generation: G0 = 46, G1 = 62 and G2 = 710. Links in purple represent mother–progeny relationships and those in orange represent father–progeny relationships. Pedigree Viewer software was used to represent the relatedness between individuals from the three generations. Figure S4. Pairwise linkage disequilibrium (LD) based on 3962 single-nucleotide polymorphisms mapped onto the twelve linkage groups (LG) of P. pinaster. Only loci with minor allele frequencies greater than 0.01 were included in the analysis. Figure S5. Distribution of the fixation index (FST) over the 12 chromosomes of maritime pine. The top panel represents the FST between G0 and G1 and the bottom panel represents the FST between G1 and G2. Figure S6. 
Comparison of prediction accuracies across three sampling and two calibration strategies. Three sampling strategies for the selection of 20 % of the G2 population for use as the validation set were used: random, S1: between half-sib families and S2: within full-sib families. Two calibration scenarios were used for each sampling strategy. For predictions for the 20 % of the G2 population selected, we used the remaining 80 % of the G2 (in green) plus their progenitors (G0 and G1, in blue) as the calibration set. The results for models based on pedigree information (ABLUP) and marker information (GBLUP and B-LASSO), and the results for the three traits studied (tree diameter, height and stem straightness) are presented. The data are represented in a Tukey boxplot. Table S1. Prediction accuracy and status number (NS) for four methods of selecting G2 individuals. Prediction accuracy was estimated with three relationship matrices (AP, AF and G). Mean values (standard deviation in parentheses) are based on 100 replicates per model. The four selection methods were: Random, HS: half-sib family, FS: full-sib family and CD: coefficient of determination. Table S2. Prediction accuracy and bias for the use of the progeny population for validation (calibration set = G0 and G1, validation set = G2). The results for the ABLUP, GBLUP and Bayesian LASSO (B-LASSO) models and for the traits tree circumference, height and stem straightness are presented. (PDF 1725 kb) Bartholomé, J., Van Heerwaarden, J., Isik, F. et al. Performance of genomic prediction within and across generations in maritime pine. BMC Genomics 17, 604 (2016). https://doi.org/10.1186/s12864-016-2879-8 Genomic selection Multiple generations Progeny validation Stem straightness
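The ABLUP/GBLUP comparisons reported above all revolve around a relationship matrix: the pedigree-derived expected matrix A for ABLUP and the realized genomic matrix G of VanRaden (2008, cited in the reference list) for GBLUP. The sketch below is purely illustrative and is not the pipeline used in the study: genotypes and phenotypes are simulated, and the heritability is a hypothetical value chosen within the 0.17-0.32 range reported for the three traits.

```python
import numpy as np

# Illustrative sketch (not the study's pipeline): VanRaden's realized genomic
# relationship matrix G and a GBLUP-style prediction, on simulated data.
rng = np.random.default_rng(1)
n_ind, n_snp = 30, 500

# Simulated genotypes coded 0/1/2, drawn at Hardy-Weinberg proportions.
freqs = rng.uniform(0.05, 0.5, size=n_snp)
M = rng.binomial(2, freqs, size=(n_ind, n_snp)).astype(float)

p = M.mean(axis=0) / 2.0                       # observed allele frequencies
Z = M - 2.0 * p                                # column-centered genotypes
G = Z @ Z.T / (2.0 * np.sum(p * (1.0 - p)))    # VanRaden (2008) G matrix

# GBLUP written as a ridge-type predictor: GEBV = G (G + lambda I)^{-1} (y - ybar),
# with lambda = (1 - h2) / h2 for an assumed narrow-sense heritability h2.
h2 = 0.25                                      # hypothetical, within 0.17-0.32
y = rng.normal(size=n_ind)                     # placeholder phenotypes
lam = (1.0 - h2) / h2
gebv = G @ np.linalg.solve(G + lam * np.eye(n_ind), y - y.mean())
```

Substituting the pedigree-derived matrix A for G in the same formula gives the ABLUP analogue against which the marker-based models are compared in the study.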
$\zeta_1, \zeta_2,$ and $\zeta_3$ are complex numbers such that \[\zeta_1+\zeta_2+\zeta_3=1\]\[\zeta_1^2+\zeta_2^2+\zeta_3^2=3\]\[\zeta_1^3+\zeta_2^3+\zeta_3^3=7\] Compute $\zeta_1^{7} + \zeta_2^{7} + \zeta_3^{7}$. We let $e_1 = \zeta_1 + \zeta_2 + \zeta_3,\ e_2 = \zeta_1\zeta_2 + \zeta_2\zeta_3 + \zeta_3\zeta_1,\ e_3 = \zeta_1\zeta_2\zeta_3$ (the elementary symmetric sums). Then, we can rewrite the above equations as\[\zeta_1+\zeta_2+\zeta_3=e_1 = 1\]\[\zeta_1^2+\zeta_2^2+\zeta_3^2= e_1^2 - 2e_2 = 3\]from where it follows that $e_2 = -1$. The third equation can be factored as\[7 =\zeta_1^3+\zeta_2^3+\zeta_3^3 = (\zeta_1+\zeta_2+\zeta_3)(\zeta_1^2+\zeta_2^2+\zeta_3^2-\zeta_1\zeta_2-\zeta_2\zeta_3 -\zeta_3\zeta_1)+3\zeta_1\zeta_2\zeta_3\\ = e_1^3 - 3e_1e_2 + 3e_3,\]from where it follows that $e_3 = 1$. Thus, applying Vieta's formulas backwards, $\zeta_1, \zeta_2,$ and $\zeta_3$ are the roots of the polynomial\[x^3 - x^2 - x - 1 = 0 \Longleftrightarrow x^3 = x^2 + x + 1\]Let $s_n = \zeta_1^n + \zeta_2^n + \zeta_3^n$ (the power sums). Since each $\zeta_i$ satisfies $\zeta_i^{n+3} = \zeta_i^{n+2}+\zeta_i^{n+1}+\zeta_i^{n}$, summing over $i$ gives the recursion $s_{n+3} = s_{n+2} + s_{n+1} + s_n$. It follows that $s_4 = 7 + 3 + 1 = 11, s_5 = 21, s_6 = 39, s_7 = \boxed{71}$.
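As a quick sanity check (not part of the original solution), the recursion and the final value can be verified numerically:

```python
import numpy as np

# Power sums s_n = zeta_1^n + zeta_2^n + zeta_3^n via the recursion
# s_{n+3} = s_{n+2} + s_{n+1} + s_n derived from x^3 = x^2 + x + 1.
s = [1, 3, 7]                 # s_1, s_2, s_3 from the problem statement
while len(s) < 7:
    s.append(s[-1] + s[-2] + s[-3])

# Cross-check: sum the 7th powers of the numerical roots of x^3 - x^2 - x - 1.
roots = np.roots([1, -1, -1, -1])
s7_direct = np.sum(roots ** 7)

print(s[6])                   # 71
```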
Control of chaos In lab experiments that study chaos theory, approaches designed to control chaos are based on certain observed system behaviors. Any chaotic attractor contains an infinite number of unstable, periodic orbits. Chaotic dynamics, then, consists of a motion where the system state moves in the neighborhood of one of these orbits for a while, then falls close to a different unstable, periodic orbit where it remains for a limited time and so forth. This results in a complicated and unpredictable wandering over longer periods of time.[1] Control of chaos is the stabilization, by means of small system perturbations, of one of these unstable periodic orbits. The result is to render an otherwise chaotic motion more stable and predictable, which is often an advantage. The perturbation must be tiny compared to the overall size of the attractor of the system to avoid significant modification of the system's natural dynamics.[2] Several techniques have been devised for chaos control, but most are developments of two basic approaches: the OGY (Ott, Grebogi and Yorke) method and Pyragas continuous control. Both methods require a previous determination of the unstable periodic orbits of the chaotic system before the controlling algorithm can be designed. OGY method E. Ott, C. Grebogi and J. A. Yorke were the first to make the key observation that the infinite number of unstable periodic orbits typically embedded in a chaotic attractor could be taken advantage of for the purpose of achieving control by means of applying only very small perturbations. After making this general point, they illustrated it with a specific method, since called the OGY method (Ott, Grebogi and Yorke) of achieving stabilization of a chosen unstable periodic orbit. 
In the OGY method, small, wisely chosen, kicks are applied to the system once per cycle, to maintain it near the desired unstable periodic orbit.[3] To start, one obtains information about the chaotic system by analyzing a slice of the chaotic attractor. This slice is a Poincaré section. After the information about the section has been gathered, one allows the system to run and waits until it comes near a desired periodic orbit in the section. Next, the system is encouraged to remain on that orbit by perturbing the appropriate parameter. When the control parameter is actually changed, the chaotic attractor is shifted and distorted somewhat. If all goes according to plan, the new attractor encourages the system to continue on the desired trajectory. One strength of this method is that it does not require a detailed model of the chaotic system but only some information about the Poincaré section. It is for this reason that the method has been so successful in controlling a wide variety of chaotic systems.[4] The weaknesses of this method are in isolating the Poincaré section and in calculating the precise perturbations necessary to attain stability. Pyragas method In the Pyragas method of stabilizing a periodic orbit, an appropriate continuous controlling signal is injected into the system, whose intensity is practically zero as the system evolves close to the desired periodic orbit but increases when it drifts away from the desired orbit. 
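Both schemes can be illustrated on the logistic map x -> r x (1 - x). The sketch below is a toy version with illustrative gains and thresholds (not taken from the cited experiments): an OGY-style once-per-iterate parameter kick that waits for the chaotic orbit to wander close to the unstable fixed point, followed by a linear stability check for a discrete-time, Pyragas-type delayed feedback.

```python
import numpy as np

# Toy OGY-style control of the chaotic logistic map x -> r x (1 - x), r0 = 3.9.
# The orbit runs freely until a small once-per-iterate parameter kick suffices
# to place it on the unstable fixed point x* = 1 - 1/r0; the gain and threshold
# are illustrative choices only.
r0 = 3.9
xstar = 1.0 - 1.0 / r0          # unstable fixed point of the unperturbed map
dr_max = 0.05                   # largest allowed parameter perturbation

x, activated = 0.4, False
for _ in range(200_000):
    # Parameter kick that maps the current state exactly onto x*; the
    # original OGY scheme uses a linearized version of this formula.
    dr = xstar / (x * (1.0 - x)) - r0
    if abs(dr) <= dr_max:       # kick only when a tiny kick is enough
        x = (r0 + dr) * x * (1.0 - x)
        activated = True
    else:                       # otherwise let the chaotic dynamics run
        x = r0 * x * (1.0 - x)

# Pyragas-type delayed feedback, discrete-time analogue: adding the control
# u_n = K (x_{n-1} - x_n) to the map gives a linearization about x* with
# characteristic polynomial z^2 - (lam - K) z - K, where lam = 2 - r0 is the
# multiplier of the uncontrolled fixed point.
lam = 2.0 - r0                  # |lam| = 1.9 > 1: unstable without control
K = -0.7                        # an illustrative gain in the stabilizing range
mults = np.roots([1.0, -(lam - K), -K])
```

After the transient, the OGY-controlled orbit stays pinned at x*, since once on the fixed point the required kick is essentially zero; the delayed-feedback multipliers have modulus sqrt(0.7), roughly 0.84 < 1, so the same orbit is linearly stabilized without knowing x* in advance.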
Both the Pyragas and OGY methods are part of a general class of methods called "closed loop" or "feedback" methods which can be applied based on knowledge of the system obtained solely through observing the behavior of the system as a whole over a suitable period of time.[5] Applications Experimental control of chaos by one or both of these methods has been achieved in a variety of systems, including turbulent fluids, oscillating chemical reactions, magneto-mechanical oscillators and cardiac tissues.[6] Sarnobat [6] attempts the control of chaotic bubbling with the OGY method, using electrostatic potential as the primary control variable. Forcing two systems into the same state is not the only way to achieve synchronization of chaos. Both control of chaos and synchronization constitute parts of cybernetical physics, a research area on the border between physics and control theory.[1] References 1. González-Miranda, J.M. (2004). Synchronization and Control of Chaos: An Introduction for Scientists and Engineers. London: Imperial College Press. Bibcode:2004scci.book.....G. 2. Eckehard Schöll and Heinz Georg Schuster (2007). Handbook of Chaos Control. Weinheim: Wiley-VCH. 3. Fradkov A.L. and Pogromsky A.Yu. (1998). Introduction to Control of Oscillations and Chaos. Singapore: World Scientific Publishers. 4. Ditto, William; Louis M. Pecora (August 1993). "Mastering Chaos". Scientific American. 5. S. Boccaletti et al. (2000) The Control of Chaos: Theory and Applications, Physics Reports 329, 103-197 
External links • Chaos control bibliography (1997-2000) Chaos theory Concepts Core • Attractor • Bifurcation • Fractal • Limit set • Lyapunov exponent • Orbit • Periodic point • Phase space • Anosov diffeomorphism • Arnold tongue • axiom A dynamical system • Bifurcation diagram • Box-counting dimension • Correlation dimension • Conservative system • Ergodicity • False nearest neighbors • Hausdorff dimension • Invariant measure • Lyapunov stability • Measure-preserving dynamical system • Mixing • Poincaré section • Recurrence plot • SRB measure • Stable manifold • Topological conjugacy Theorems • Ergodic theorem • Liouville's theorem • Krylov–Bogolyubov theorem • Poincaré–Bendixson theorem • Poincaré recurrence theorem • Stable manifold theorem • Takens's theorem Theoretical branches • Bifurcation theory • Control of chaos • Dynamical system • Ergodic theory • Quantum chaos • Stability theory • Synchronization of chaos Chaotic maps (list) Discrete • Arnold's cat map • Baker's map • Complex quadratic map • Coupled map lattice • Duffing map • Dyadic transformation • Dynamical billiards • outer • Exponential map • Gauss map • Gingerbreadman map • Hénon map • Horseshoe map • Ikeda map • Interval exchange map • Irrational rotation • Kaplan–Yorke map • Langton's ant • Logistic map • Standard map • Tent map • Tinkerbell map • Zaslavskii map Continuous • Double scroll attractor • Duffing equation • Lorenz system • Lotka–Volterra equations • Mackey–Glass equations • Rabinovich–Fabrikant equations • Rössler attractor • Three-body problem • Van der Pol oscillator Physical systems • Chua's circuit • Convection • Double pendulum • Elastic pendulum • FPUT problem • Hénon–Heiles system • Kicked rotator • Multiscroll attractor • Population dynamics • Swinging Atwood's machine • Tilt-A-Whirl • Weather Chaos theorists • Michael Berry • Rufus Bowen • Mary Cartwright • Chen Guanrong • Leon O. 
Chua • Mitchell Feigenbaum • Peter Grassberger • Celso Grebogi • Martin Gutzwiller • Brosl Hasslacher • Michel Hénon • Svetlana Jitomirskaya • Bryna Kra • Edward Norton Lorenz • Aleksandr Lyapunov • Benoît Mandelbrot • Hee Oh • Edward Ott • Henri Poincaré • Mary Rees • Otto Rössler • David Ruelle • Caroline Series • Yakov Sinai • Oleksandr Mykolayovych Sharkovsky • Nina Snaith • Floris Takens • Audrey Terras • Mary Tsingou • Marcelo Viana • Amie Wilkinson • James A. Yorke • Lai-Sang Young Related articles • Butterfly effect • Complexity • Edge of chaos • Predictability • Santa Fe Institute
\begin{document} \title{\vspace*{-2cm}Stability of a point charge for the repulsive Vlasov-Poisson system} \author{Benoit Pausader} \address{Brown University, Providence, RI, USA} \email{benoit\[email protected]} \author{Klaus Widmayer} \address{University of Zurich, Switzerland} \email{[email protected]} \author{Jiaqi Yang} \address{ICERM, Brown University, Providence, RI, USA} \email{jiaqi\[email protected]} \begin{abstract} We consider solutions of the repulsive Vlasov-Poisson system which are a combination of a point charge and a small gas, i.e.\ measures of the form $\delta_{(\mathcal{X}(t),\mathcal{V}(t))}+\mu^2d{\bf x}d{\bf v}$ for some $(\mathcal{X}, \mathcal{V}):\mathbb{R}\to\mathbb{R}^6$ and a small gas distribution $\mu:\mathbb{R}\to L^2_{{\bf x},{\bf v}}$, and study asymptotic dynamics in the associated initial value problem. If initially suitable moments on $\mu_0=\mu(t=0)$ are small, we obtain a global solution of the above form, and the electric field generated by the gas distribution $\mu$ decays at an almost optimal rate. Assuming in addition boundedness of suitable derivatives of $\mu_0$, the electric field decays at an optimal rate and we derive a modified scattering dynamics for the motion of the point charge and the gas distribution. Our proof makes crucial use of the Hamiltonian structure. The linearized system is transport by the Kepler ODE, which we integrate exactly through an asymptotic action-angle transformation. Thanks to a precise understanding of the associated kinematics, moment and derivative control is achieved via a bootstrap analysis that relies on the decay of the electric field associated to $\mu$. The asymptotic behavior can then be deduced from the properties of Poisson brackets in asymptotic action coordinates. 
\end{abstract} \maketitle \setcounter{tocdepth}{1} \vspace*{-.75cm}\tableofcontents\vspace*{-.75cm} \section{Introduction} This article is devoted to the study of the time evolution and asymptotic behavior of a three dimensional collisionless gas of charged particles (i.e.\ a plasma) that interacts with a point charge. Under suitable assumptions, a statistical description of such a system is given via a measure $M$ on $\mathbb{R}^3_{\bf x}\times\mathbb{R}^3_{\bf v}$ that models the charge distribution, which is transported by the long-range electrostatic (Coulomb) force field generated by $M$ itself, resulting in the Vlasov-Poisson system \begin{equation}\label{eq:VP} \begin{split} \partial_tM+\hbox{div}_{{\bf x},{\bf v}}\left(M\mathfrak{V}\right)=0,\qquad \mathfrak{V}={\bf v}\cdot \nabla_{\bf x}+\nabla_{\bf x}\phi_M\cdot\nabla_{\bf v},\qquad \Delta_{\bf x}\phi_M=\int_{\mathbb{R}^3_{\bf v}}Md{\bf v}. \end{split} \end{equation} The Dirac mass $M_{eq}=\delta_{(0,0)}({\bf x},{\bf v})$ is a formal stationary solution of \eqref{eq:VP}, and we propose to investigate its stability. We thus consider solutions of the form\footnote{Here the initial continuous density $f_0=\mu_0^2$ is assumed to be non-negative, a condition which is then propagated by the flow and allows us to work with functions $\mu$ in an $L^2$ framework rather than a general non-negative function $f$ in $L^1$ -- see also the previous work \cite{IPWW2020} for more on this.} $M(t):=q_c\delta_{(\mathcal{X}(t),\mathcal{V}(t))}({\bf x},{\bf v})+q_g\mu^2({\bf x},{\bf v},t)d{\bf x} d{\bf v}$, representing a small, smooth charge distribution $\mu^2d{\bf x}d{\bf v}$ (with mass-per-particle $m_g>0$ and charge-per-particle $q_g>0$) coupled with a point charge located at $(\mathcal{X},\mathcal{V}):\mathbb{R}_t\to\mathbb{R}^3\times\mathbb{R}^3$ (of mass $M_c>0$ and charge $q_c>0$). 
The equations \eqref{eq:VP} then take the form \begin{equation}\label{VPPC} \begin{split} \left(\partial_t+{\bf v}\cdot\nabla_{\bf x}+\frac{q}{2}\frac{{\bf x}-\mathcal{X}}{\vert {\bf x}-\mathcal{X}\vert^3}\cdot\nabla_{\bf v}\right)\mu+Q\nabla_{\bf x}\phi\cdot\nabla_{\bf v}\mu&=0,\qquad\Delta_{\bf x}\phi=\varrho=\int_{\mathbb{R}^3_{\bf v}}\mu^2d{\bf v},\\ \frac{d\mathcal{X}}{dt}=\mathcal{V},\qquad\frac{d\mathcal{V}}{dt}=\mathcal{Q}\nabla_{\bf x}\phi(\mathcal{X},t), \end{split} \end{equation} for positive constants $q=q_cq_g/(2\pi \epsilon_0 m_g)$, $Q=q_g^2/(\epsilon_0m_g)$, $\mathcal{Q}=q_gq_c/(\epsilon_0M_c)$. This system couples a singular, nonlinear transport (Vlasov) equation for the continuous charge distribution $\mu^2$ to an equation for the trajectory of the point $(\mathcal{X},\mathcal{V})$ mass via their electrostatic Coulomb interaction through a Poisson equation. \begin{remark} \begin{enumerate} \item The physically relevant setting for these equations relates to electron dynamics in a plasma, when magnetic effects are neglected. In this context, our sign conventions correspond to the non-negative distribution function of a negatively charged gas. In this spirit, we will denote the electric field of the gas by $\mathcal{E}=\nabla_{\bf x}\phi$, a slightly unconventional choice that allows to save some minus signs in the formulas. \item The crucial qualitative feature of the forces in \eqref{VPPC} is the \emph{repulsive} nature of interactions between the gas and the point charge, i.e.\ the fact that $q>0$. Our analysis can also accommodate the setting where the gas-gas interactions are attractive. This corresponds to replacing $Q>0$ by $-Q<0$ in \eqref{VPPC}, so that (up to minor algebraic modifications) these two cases can be treated the same way. We shall henceforth focus on \eqref{VPPC} with $Q>0$, as above. (We refer to the discussion of future perspectives below in Section \ref{ssec:futpersp} for some comments regarding the attractive case.) 
\end{enumerate} \end{remark} \subsection{Main result} Our main result concerns \eqref{VPPC} with sufficiently small and localized initial charge distributions $\mu_0$. We establish the existence and uniqueness of global, strong solutions and we describe their asymptotic behavior as a modified scattering dynamic. While our full result can be most adequately stated in more adapted ``action-angle'' variables (see Theorem \ref{thm:global_asymptIntro} below on page \pageref{thm:global_asymptIntro}), for the sake of readability we begin here by giving a (weaker, slightly informal) version in standard Cartesian coordinates: \begin{theorem}\label{thm:main_rough} Given any $(\mathcal{X}_0,\mathcal{V}_0)\in\mathbb{R}^3_{\bf x}\times\mathbb{R}^3_{\bf v}$ and any initial data $\mu_0\in C^1_c((\mathbb{R}^3_{\bf x}\setminus\{\mathcal{X}_0\})\times\mathbb{R}^3_{\bf v})$, there exists $\varepsilon^\ast>0$ such that for any $0<\varepsilon<\varepsilon^\ast$, there exists a unique global strong solution of \eqref{VPPC} with initial data \begin{equation*} (\mathcal{X}(t=0),\mathcal{V}(t=0))=(\mathcal{X}_0,\mathcal{V}_0),\qquad \mu({\bf x},{\bf v},t=0)=\varepsilon\mu_0({\bf x},{\bf v}). \end{equation*} Moreover, the electric field decays pointwise at optimal rate, and there exists a modified point charge trajectory, an asymptotic profile $\mu_\infty\in L^2((\mathbb{R}^3\setminus\{0\})\times\mathbb{R}^3)$ and a Lagrangian map $({\bf Y},{\bf W}):\mathbb{R}^3\times\mathbb{R}^3\times \mathbb{R}_+^\ast\to \mathbb{R}^3\times\mathbb{R}^3$ along which the particle distribution converges pointwise \begin{equation*} \begin{split} \mu({\bf Y}({\bf x},{\bf v},t),{\bf W}({\bf x},{\bf v},t), t)\to \mu_\infty({\bf x},{\bf v}),\qquad t\to\infty. \end{split} \end{equation*} \end{theorem} \begin{remark}\label{rem:asymptotics} \begin{enumerate} \item Our main theorem is in fact much more precise and requires fewer assumptions, but is better stated in adapted ``action angle'' variables. 
We refer to Theorem \ref{thm:global_asymptIntro}. In particular, we allow initial particle distributions with positive measure in any ball around the charge $(\mathcal{X}_0,\mathcal{V}_0)$ and with noncompact support $\hbox{supp}(\mu_0)=\mathbb{R}^3_{\bf x}\times\mathbb{R}^3_{\bf v}$. \item Under the weaker assumption that $\mu_0\in C^0_c$, we still obtain in Proposition \ref{prop:global_derivsIntro} a global solution with almost optimal decay of the electric field. \item The charge trajectory and the Lagrangian map can be expressed in terms of an asymptotic ``electric field profile'' $\mathcal{E}^\infty$ and asymptotic charge velocity $\mathcal{V}_\infty$ and position shift $\mathcal{X}_\infty$: As $t\to+\infty$, we have that \begin{equation}\label{eq:asymptotics} \begin{split} \mathcal{X}(t)&=\mathcal{X}_\infty+t\mathcal{V}_\infty-\mathcal{Q}\ln(t)\mathcal{E}^\infty(0)+O(t^{-1/10}),\\ \mathcal{V}(t)&=\mathcal{V}_\infty-\frac{\mathcal{Q}}{t}\mathcal{E}^\infty(0)+O(t^{-11/10}),\\ {\bf Y}({\bf x},{\bf v},t)&=\left(at-\frac{1}{2}\frac{q}{a^2}\ln(ta^3/q)+\ln(t)[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)]\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)\\ &\qquad+ \mathcal{V}_\infty t+\mathcal{Q}\mathcal{E}^\infty(0)\ln(t)+O(1),\\ {\bf W}({\bf x},{\bf v},t)&=a\left(1-\frac{q}{2ta^3}\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)+\mathcal{V}_\infty+O(\ln(t)t^{-2}), \end{split} \end{equation} where we used the following abbreviations to allow for more compact formulas \begin{equation} a^2=\aabs{{\bf v}}^2+\frac{q}{\aabs{{\bf x}}},\quad {\bf L}={\bf x}\times{\bf v},\quad L=\aabs{{\bf L}},\quad {\bf R}={\bf v}\times{\bf L}+\frac{q}{2}\frac{{\bf x}}{\aabs{{\bf x}}}, \end{equation} and ${\bf a}$ is defined in \eqref{eq:explicitATheta} below (these quantities are conservation laws for the linearized problem associated to \eqref{VPPC}). 
In the dynamics of the point charge, the term $\mathcal{Q}\ln(t)\mathcal{E}^\infty(0)$ (resp.\ $\frac{\mathcal{Q}}{t}\mathcal{E}^\infty(0)$) corresponds to a nonlinear modification of a free trajectory with velocity $\mathcal{V}_\infty$, and is also reflected in ${\bf Y}$. In addition, the first term in the expansion of ${\bf Y}$ (involving the factor $at$) derives from conservation of the energy along trajectories, while the second term (involving a first logarithmic correction $\ln(ta^3/q)$) is a feature of the \emph{linear} trajectories. The term $t\mathcal{V}_\infty$ reflects a centering around the position of the point charge, and the remaining logarithmic terms are nonlinear corrections to the position. This can be compared with the asymptotic behavior close to vacuum in \cite{IPWW2020,Pan2020} by setting $q=Q=\mathcal{Q}=0$ and ignoring the motion of the point charge. \end{enumerate} \end{remark} \subsubsection{Prior work} In the absence of a point charge, the Vlasov-Poisson system has been extensively studied and the corresponding literature is too vast to be surveyed here appropriately. We focus instead on the case of three spatial and three velocity dimensions, which is of particular physical relevance. Here we refer to classical works \cite{BD1985,GI2020,LP1991,Pfa1992,Sch1991} for references on global wellposedness and dispersion analysis, to \cite{CK2016,FOPW2021,IPWW2020,Pan2020} for more recent results describing the asymptotic behavior, to \cite{Gla1996,Rei2007} for book references and to \cite{BM2018} for a historical review. The presence of a point charge introduces singular force fields and significantly complicates the analysis. Nevertheless, when the gas-point charge interaction is \emph{repulsive}, global existence and uniqueness of strong solutions when the support of the density is separated from the point charge have been established in \cite{MMP2011}, see also \cite{CM2010} and references therein.
Global existence of weak solutions for more general support was then proved in \cite{DMS2015} with subsequent improvements in \cite{LZ2017,LZ2018,Mio2016}, and a construction of ``Lagrangian solutions'' in \cite{CLS2018}. For attractive interactions, strong well-posedness remains open, even locally in time, but global weak solutions have been constructed \cite{CMMP2012,CZW2015}. Concentration, creation of a point charge and subsequent lack of uniqueness were studied in a related system for ions in $1d$, see \cite{MMZ1994,ZM1994}. To the best of our knowledge, the only work concerning the asymptotic behavior of such solutions is the recent \cite{PW2020}, which studies the repulsive, radial case using a precursor to the asymptotic action method we develop here. The existence and stability of other (smooth) equilibria have been considered for the Vlasov-Poisson system, most notably in connection with Landau damping near a homogeneous background in the confined or screened case \cite{AW2021,BMM2018,FR2016,HNR2019,MV2011}, with recent progress also in the unconfined setting \cite{HNR2020,BMM2020,IPWW2022}. In the case of attractive interactions or in the presence of several species, there are many more equilibria and a good final state conjecture seems beyond the scope of the current theory. However, there have been many outstanding works on the \emph{linear} and \emph{orbital} (in-)stability of nontrivial equilibria \cite{GL2017,GS1995,LMR2008,LMR2012,Mou2013,Pen1960}. We further highlight \cite{FHR2021,GL2017,HRS2021} which use action-angle coordinates to efficiently solve an elliptic equation in order to understand the spectrum of the linearized operator. Finally, we note that the recent work \cite{HW2022} studies the interaction of a fast point charge with a {\it homogeneous} background satisfying a Penrose condition, for a variant of \eqref{VPPC} with a screened potential (see also the related \cite{AW2021} on Debye screening).
We also refer to \cite{IJ2019} which addresses the stability of a Dirac mass in the context of the $2d$ Euler equation. \subsection{The method of asymptotic action} We now describe our approach to the study of asymptotic dynamics in \eqref{VPPC}, which is guided by their Hamiltonian structure.\footnote{We refer the reader to the recent \cite{MNP2022} for a derivation of this Hamiltonian structure from the underlying classical many-body problem.} \paragraph{\emph{Brief overview}} We first study the linearized problem, i.e.\ the setting without nonlinear self-interactions of the gas (we ignore the contributions of $\phi$). There the point charge moves freely along a straight line, while the gas distribution is still subject to the electrostatic field generated by the point charge and thus solves a singular transport equation which can be integrated explicitly through a canonical change of coordinates to suitable action-angle variables. Upon appropriate choice of unknown $\gamma$ in these variables, we can thus reduce to the study of a purely nonlinear equation, given in terms of the electrostatic potential $\phi$. This and the derived electric field can be conveniently expressed (thanks to the canonical nature of the change of coordinates) as integrals of $\gamma$ over phase space, and we study their boundedness properties. In particular, assuming control of moments and derivatives of $\gamma$, we establish that the electrical functions decay pointwise. With this, we show how to propagate such moments and derivatives, relying heavily on the Poisson bracket structure. Finally, this reveals the asymptotic behavior through an asymptotic shear equation that builds on a phase mixing property of asymptotic actions. Next we present our method in more detail. It is instructive to first consider the case where \emph{the point charge is stationary}, i.e.\ that $(\mathcal{X}(t),\mathcal{V}(t))\equiv (0,0)$ in a suitable coordinate frame.
This happens naturally e.g.\ if $(\mathcal{X}(0),\mathcal{V}(0))=(0,0)$ and the initial distribution is symmetric with respect to three coordinate planes, which is already a nontrivial case. In practice, \eqref{VPPC} then reduces to an equation for the gas distribution alone, which (starting from its Liouville equation reformulation) can be recast in Hamiltonian form as \begin{equation}\label{eq:VPPC-H} \begin{split} \partial_t\mu-\{\mathbb{H},\mu\}=0,\qquad \mathbb{H}=\frac{1}{2}\mathbb{H}_2-\mathbb{H}_4,\qquad \mathbb{H}_2({\bf x},{\bf v}):=\vert {\bf v}\vert^2+\frac{q}{\vert {\bf x}\vert},\qquad\mathbb{H}_4({\bf x},{\bf v},t):=Q\psi({\bf x},t), \end{split} \end{equation} where the Poisson bracket and phase space $\mathcal{P}_{{\bf x},{\bf v}}$ are given by \begin{equation}\label{PB} \{f,g\}=\nabla_{\bf x}f\cdot\nabla_{\bf v}g-\nabla_{\bf v}f\cdot\nabla_{\bf x}g,\qquad \mathcal{P}_{{\bf x},{\bf v}}:=\{({\bf x},{\bf v})\in\mathbb{R}^3\times\mathbb{R}^3:\,\, \vert{\bf x}\vert>0\}. \end{equation} This simplified setting facilitates the presentation of the main aspects of the quantitative analysis of the gas distribution dynamics. We will subsequently explain the (numerous) modifications needed to incorporate the point charge motion in Section \ref{SecAddingPC}. \subsubsection{Linearized equation and asymptotic actions} We start by considering the linearization of \eqref{eq:VPPC-H}, \begin{equation}\label{LinearHamiltonianFlow} \begin{split} 2\partial_t\mu-\{\mathbb{H}_2,\mu\}=0,\qquad \mathbb{H}_2({\bf x},{\bf v}):=\vert {\bf v}\vert^2+\frac{q}{\vert {\bf x}\vert}. \end{split} \end{equation} This is nothing but transport along the characteristics of the well-known {\it repulsive two-body} system, \begin{equation}\label{KeplerODE} \dot{\bf x}={\bf v},\qquad \dot{\bf v}=\frac{q}{2}\frac{{\bf x}}{\vert{\bf x}\vert^3}, \end{equation} which is, in particular, \emph{completely integrable}.
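As a concrete sanity check (ours, not part of the paper's argument; the parameter $q$ and the initial data are arbitrary choices), one can verify numerically that the energy $a^2=\vert{\bf v}\vert^2+q/\vert{\bf x}\vert$, the angular momentum ${\bf L}={\bf x}\times{\bf v}$ and the Runge-Lenz-type vector ${\bf R}={\bf v}\times{\bf L}+\frac{q}{2}\frac{{\bf x}}{\vert{\bf x}\vert}$ from Remark \ref{rem:asymptotics} are conserved along \eqref{KeplerODE}:

```python
import numpy as np

q = 1.0  # arbitrary positive charge parameter

def rhs(s):
    # repulsive two-body ODE: x' = v, v' = (q/2) x/|x|^3
    x, v = s[:3], s[3:]
    r = np.linalg.norm(x)
    return np.concatenate([v, 0.5 * q * x / r**3])

def rk4(s, dt):
    # one classical Runge-Kutta step
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def invariants(s):
    # energy a^2, angular momentum L, Runge-Lenz-type vector R
    x, v = s[:3], s[3:]
    r = np.linalg.norm(x)
    L = np.cross(x, v)
    return v @ v + q / r, L, np.cross(v, L) + 0.5 * q * x / r

s = np.array([1.0, 0.5, 0.0, -0.3, 0.4, 0.1])  # arbitrary initial data
e0, L0, R0 = invariants(s)
for _ in range(20000):          # integrate up to t = 20
    s = rk4(s, 1e-3)
e1, L1, R1 = invariants(s)
assert abs(e1 - e0) < 1e-6
assert np.allclose(L1, L0, atol=1e-6) and np.allclose(R1, R0, atol=1e-6)
```

One also observes the hallmark of open trajectories: $\vert{\bf v}(t)\vert$ approaches the asymptotic speed $a$ as $t$ grows, in line with the asymptotic action property discussed below.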
Due to the repulsive nature of \eqref{KeplerODE}, its trajectories are hyperbolas (and thus open), with well-defined asymptotic velocities. Our first main result is the construction of \emph{asymptotic action-angle variables} which provide adapted, canonical coordinates for the phase space. We denote the phase space in these angles $\vartheta$ and actions ${\bf a}$ by \begin{equation} \mathcal{P}_{\vartheta,{\bf a}}:=\{(\vartheta,{\bf a})\in\mathbb{R}^3\times\mathbb{R}^3:\,\,\vert{\bf a}\vert>0\}, \end{equation} and have: \begin{proposition}\label{PropAA} There exists a smooth diffeomorphism $\mathcal{T}:\mathcal{P}_{{\bf x},{\bf v}}\to\mathcal{P}_{\vartheta,{\bf a}}$, $({\bf x},{\bf v})\mapsto(\Theta({\bf x},{\bf v}),\mathcal{A}({\bf x},{\bf v}))$ with inverse $(\vartheta,{\bf a})\mapsto ({\bf X}(\vartheta,{\bf a}),{\bf V}(\vartheta,{\bf a}))$, which \begin{enumerate}[wide] \item\label{it:canonical} is canonical, i.e.\ \begin{equation}\label{eq:canonical} d{\bf x}\wedge d{\bf v}=d\Theta\wedge d\mathcal{A}, \end{equation} \item\label{it:cons-comp} is compatible with conservation of energy and angular momentum: \begin{equation} \mathbb{H}_2({\bf x},{\bf v})=\abs{\mathcal{A}}^2,\qquad{\bf x}\times{\bf v}=\Theta\times\mathcal{A}, \end{equation} \item\label{it:lin-diag} linearizes the flow of \eqref{KeplerODE} in the sense that $({\bf x}(t),{\bf v}(t))$ solves \eqref{KeplerODE} if and only if \begin{equation}\label{LinearizationFlow} \Theta({\bf x}(t),{\bf v}(t))=\Theta({\bf x}(0),{\bf v}(0))+t\mathcal{A}({\bf x}(0),{\bf v}(0)),\qquad\mathcal{A}({\bf x}(t),{\bf v}(t))=\mathcal{A}({\bf x}(0),{\bf v}(0)), \end{equation} \item\label{it:asymp-act} satisfies the ``asymptotic action property'' \begin{equation}\label{AsymptoticActions} \begin{split} \vert{\bf X}(\vartheta+t{\bf a},{\bf a})-t{\bf a}\vert=o(t),\qquad \vert{\bf V}(\vartheta+t{\bf a},{\bf a})-{\bf a}\vert=o(1)\quad\hbox{ as }t\to+\infty.
\end{split} \end{equation} \end{enumerate} \end{proposition} The \emph{asymptotic action-angle} property \eqref{it:asymp-act} will be crucial to the asymptotic analysis. In short, it ensures that ${\bf a}$ parameterizes the trajectories that stay at a bounded distance from each other\footnote{This is a similar idea to the Gromov Boundary, see also \cite{MV2020} where this is developed for the Jacobi-Maupertuis metric.} as $t\to\infty$ and connects in an effective way the trajectories of \eqref{KeplerODE} to those of the free streaming $\ddot{\bf x}=0$. Put differently, different trajectories of \eqref{KeplerODE} asymptotically diverge linearly in time, at a rate proportional to their difference in ${\bf a}$, a property sometimes referred to as ``shearing'' or ``phase mixing''. General dynamical systems are not, of course, completely integrable, and when they are, there are many different action-angle coordinates. Here the \emph{asymptotic-action property} fixes the actions and helps restrict the set of choices. In addition, since the actions are defined in a natural way (as asymptotic velocities, see \eqref{AsymptoticActions}), one can aim to find $\mathcal{T}$ through a \emph{generating function} $\mathcal{S}({\bf x},{\bf a})$ by solving a \noindent \begin{tabularx}{\textwidth}{lX} \emph{Scattering problem:} & Given an asymptotic velocity ${\bf v}_\infty=:{\bf a}\in\mathbb{R}^3$ and a location ${\bf x}_0\in\mathbb{R}^3$, find (if they exist) the trajectories $({\bf x}(t),{\bf v}(t))$ through ${\bf x}_0$ with asymptotic velocity ${\bf a}$.
\end{tabularx} \begin{figure} \caption{The scattering problem: Given a point ${\bf x}_0$ and asymptotic velocity ${\bf v}_\infty$, how to determine trajectories (one possibility dashed in blue) through ${\bf x}_0$ with asymptotic velocity ${\bf v}_\infty$?} \label{fig:scatter} \end{figure} Once such a trajectory has been found, one can define $\mathfrak{V}({\bf x},{\bf a})$ as the velocity along the trajectory at ${\bf x}$, and look for a putative $\mathcal{S}$ such that $\mathfrak{V}({\bf x},{\bf a})=\nabla_{\bf x}\mathcal{S}$. By classical arguments (see e.g.\ \cite[Chapter 8]{Meyer2017}), setting $\vartheta:=\nabla_{\bf a}\mathcal{S}$ then yields a (local) canonical change of variables. We note that there are two related difficulties with this approach in the present context, namely $(i)$ that given ${\bf a}$, there are points ${\bf x}_0$ through which no trajectory as above passes, and $(ii)$ when the scattering problem can be solved, there are in general \textit{two different trajectories} (and thus two different ``velocity'' maps $\mathfrak{V}_\pm({\bf x},{\bf a})$) through a given point ${\bf x}_0$. In fact, the set of trajectories with a given asymptotic velocity has a fold\footnote{One can think of the set of trajectories associated to a given ${\bf a}$ and angular momentum direction ${\bf L}/L$ as a (planar) sheet of paper $\mathbb{R}^2$, flatly folded over a curve $\Gamma$ (the fold), so that away from $\Gamma$ every point ${\bf x}$ corresponds to either $0$ or $2$ trajectories, depending on the side of the fold.}. Once we identify the correct projection (in phase space) of this fold, we are able to define a smooth gluing of the functions $\mathfrak{V}_\pm$ to obtain a globally smooth choice of generating function $\mathcal{S}$. \begin{remark} \begin{enumerate} \item In the present setting, we find a generating function by calculating trajectories.
It is interesting to note that a reverse approach, solving Hamilton-Jacobi equations to obtain trajectories through a point with prescribed asymptotic velocity, has been used to construct families of asymptotically diverging trajectories in the more general $N$-body problem \cite{MV2020}. \item We note that a common way to obtain a generating function is by solving a Hamilton-Jacobi equation $H({\bf x},\nabla_{\bf x}\mathcal{S}({\bf x},{\bf a}))=\mathrm{const}$. Here we recover families of solutions, and observe that these solutions develop a singularity in finite time, so that the full generating function is obtained by gluing two such solutions along each trajectory. \end{enumerate} \end{remark} The action-angle property \eqref{LinearizationFlow} allows us to conjugate the linearized equation \eqref{LinearHamiltonianFlow} to the free streaming for the image particle distribution \begin{equation*} \left(\partial_t+{\bf a}\cdot\nabla_{\vartheta}\right)\nu=0,\qquad\nu(\Theta({\bf x},{\bf v}),\mathcal{A}({\bf x},{\bf v}),t)=\mu({\bf x},{\bf v},t), \end{equation*} which can then be easily integrated to give $\partial_t(\nu\circ\Phi_t^{-1})=0$, where \begin{equation}\label{DefPhiFreeStreaming} \Phi_t({\bf x},{\bf v})=({\bf x}-t{\bf v},{\bf v}).
\end{equation} \subsubsection{Choice of nonlinear unknown} We next integrate explicitly the linear flow as above, and introduce the nonlinear unknown $\gamma:=\mu\circ\mathcal{T}^{-1}\circ\Phi_t^{-1}$, \begin{equation}\label{NewNLUnknown_Intro} \begin{split} \gamma(\vartheta,{\bf a},t)&=\mu({\bf X}(\vartheta+t{\bf a},{\bf a}),{\bf V}(\vartheta+t{\bf a},{\bf a}),t),\\ \mu({\bf x},{\bf v},t)&=\gamma(\Theta({\bf x},{\bf v})-t\mathcal{A}({\bf x},{\bf v}),\mathcal{A}({\bf x},{\bf v}),t), \end{split} \end{equation} which satisfies a \emph{purely nonlinear} equation \begin{equation}\label{NLVP} \begin{split} \partial_t\gamma+\{\mathbb{H}_4,\gamma\}=0,\qquad\mathbb{H}_4:=Q\psi(\widetilde{\bf X},t),\qquad\widetilde{\bf X}(\vartheta,{\bf a})={\bf X}(\vartheta+t{\bf a},{\bf a}). \end{split} \end{equation} Equation \eqref{NLVP} involves the electric potential $\psi$ and, through the Poisson bracket, the electric field $\mathcal{E}=\nabla\psi$. Both can be expressed as integrals \emph{over phase space}, in terms of $\mu$ and -- since $\mathcal{T}$ and $\Phi_t$ are canonical -- also conveniently in terms of $\gamma$ as \begin{equation}\label{PsiE} \begin{split} \psi({\bf y},t)&:=-\frac{1}{4\pi}\iint \frac{1}{\vert {\bf y}- {\bf x}\vert}\mu^2({\bf x},{\bf v},t)\, d{\bf x} d{\bf v}=-\frac{1}{4\pi}\iint \frac{1}{\vert {\bf y}- {\bf X}(\vartheta+t{\bf a},{\bf a})\vert}\gamma^2(\vartheta,{\bf a},t)\, d\vartheta d{\bf a},\\ \mathcal{E}_j({\bf y},t)&:=\frac{1}{4\pi}\iint \frac{{\bf y}^j-{\bf x}^j}{\vert {\bf y}- {\bf x}\vert^3}\mu^2({\bf x},{\bf v},t)\, d{\bf x} d{\bf v}=\frac{1}{4\pi}\iint \frac{{\bf y}^j-{\bf X}^j(\vartheta+t{\bf a},{\bf a})}{\vert {\bf y}- {\bf X}(\vartheta+t{\bf a},{\bf a})\vert^3}\gamma^2(\vartheta,{\bf a},t)\, d\vartheta d{\bf a}.
\end{split} \end{equation} \subsubsection{Analysis of the (effective) electric field and weak convergence} The proper analysis of the electric field $\mathcal{E}$ in terms of $\gamma$ requires precise kinematic bounds on the (inverse of the) asymptotic action-angle map $({\bf X},{\bf V})$ and its derivatives. Using moment bounds on $\gamma$ alone, one can reduce the question of its pointwise decay to control of an \emph{effective} electric field, which captures the leading order dynamics. This in turn can be bounded in terms of moments on $\gamma$, at the cost of some logarithmic losses, which yields almost optimal decay of the electric field. More precisely, we note that as per \eqref{PsiE}, the nonlinear evolution is governed by various integrals of the measure $\gamma^2d\vartheta d{\bf a}$ on phase space, and we thus aim to prove its weak convergence. Using a variant of the continuity equation (an argument somewhat related to \cite{LP1991}), we can obtain {\it vague, scale-localized} convergence of this measure, i.e.\ \begin{equation} \langle\varphi\rangle_R({\bf y},t):=\iint\varphi(R^{-1}({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a})))\gamma^2(\vartheta,{\bf a},t)\, d\vartheta d{\bf a}\to\langle\varphi\rangle^\infty_R({\bf y}),\quad t\to\infty, \end{equation} uniformly in ${\bf y}, R$. For particle distributions $\gamma$ solving \eqref{NLVP}, this allows us to obtain uniform control on the (scale-localized) effective electric field, and, after resummation of the scales, almost optimal control on the effective electric field $\mathcal{E}^{eff}$. To obtain optimal decay bounds (and precise asymptotics) we need to control additional regularity of $\gamma$. This turns out to be significantly more involved than moment control, as described below.
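The decay mechanism at play here is kinetic dispersion: as the angles shear out along the actions, velocity averages of the transported density spread and decay. The following one-dimensional caricature (ours, with an arbitrarily chosen Gaussian profile; the actual argument concerns the three-dimensional effective field and is far more delicate) illustrates how the macroscopic average $\int\gamma^2(y-ta,a)\,da$ of a freely streaming density decays like $t^{-1}$, the $1d$ analogue of the $t^{-3}$ decay of the spatial density in $3d$:

```python
import numpy as np

# 1d toy: under free streaming, gamma^2(theta, a) = exp(-theta^2 - a^2)
# is evaluated along theta = y - t*a.  Substituting w = y - t*a shows
# sup_y rho(., t) ~ sqrt(pi)/t as t -> infinity (phase mixing).

def rho(y, t, n=4001):
    a = np.linspace(-8.0, 8.0, n)           # velocity grid
    f = np.exp(-((y - t * a) ** 2) - a**2)  # gamma^2 along free streaming
    return f.sum() * (a[1] - a[0])          # Riemann sum in a

ts = [10.0, 20.0, 40.0]
sups = [max(rho(y, t) for y in t * np.linspace(-5.0, 5.0, 101)) for t in ts]
# t * sup_y rho(., t) approaches the constant sqrt(pi) ~ 1.7725
print([t * s for t, s in zip(ts, sups)])
```

Velocity averaging of this type is what produces, in the full problem, the uniform bounds on the scale-localized averages above.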
\subsubsection{Moment propagation and almost optimal decay} In order to propagate moments, we use the Poisson bracket structure \eqref{NLVP}, and the fact that for any weight function $\mathfrak{w}:\mathcal{P}_{{\bf x},{\bf v}}\to\mathbb{R}$ or $\mathfrak{w}:\mathcal{P}_{\vartheta,{\bf a}}\to\mathbb{R}$ there holds that \begin{equation}\label{eq:mom-prop_Intro} \begin{split} \partial_t(\mathfrak{w}\gamma)+\{\mathbb{H}_4,(\mathfrak{w}\gamma)\}=\{\mathbb{H}_4,\mathfrak{w}\}\gamma=Q\mathcal{E}_j\{\widetilde{\bf X}^j,\mathfrak{w}\}\gamma, \end{split} \end{equation} together with bounds on the electric field and some classical identities such as \begin{equation}\label{ClassicalPB} \begin{split} \Big\{{\bf x},\frac{\vert{\bf v}\vert^2}{2}+\frac{q}{\vert{\bf x}\vert}\Big\}={\bf v},\qquad \{{\bf x},\vert {\bf L}\vert^2\}=2\,{\bf L}\times{\bf x},\quad{\bf L}={\bf x}\times{\bf v}. \end{split} \end{equation} Choosing as weight functions the conserved quantities for the linear equation $a:=\sqrt{\mathbb{H}_2}$ resp.\ $\xi:=qa^{-1}$ and $\lambda:=\vert{\bf L}\vert$, as well as the dynamically evolving quantity $\eta:=(a/q)\vartheta\cdot{\bf a}$, this enables a bootstrap argument that leads to almost optimal moment bounds and electric field decay, assuming only control of initial moments. Our first global result for the dynamics then reads: \begin{theorem}[see Theorem \ref{thm:global_moments}]\label{thm:global_momentsIntro} Let $m\ge 30$, and assume that the initial density $\mu_0=\gamma_0$ satisfies \begin{equation}\label{eq:mom_id_Intro} \begin{split} \Vert \langle a\rangle^{2m}\mu_0\Vert_{L^r}+\Vert \langle \xi\rangle^{2m}\mu_0\Vert_{L^r}+\Vert \langle \lambda\rangle^{2m}\mu_0\Vert_{L^r}+\Vert \langle\eta\rangle^m\mu_0\Vert_{L^\infty}&\le \varepsilon_0,\qquad r\in\{2,\infty\}.
\end{split} \end{equation} Then there exists a global solution $\gamma$ to \eqref{NLVP} that satisfies the bounds for $r\in\{2,\infty\}$ \begin{equation}\label{eq:gl_mom_bds_Intro} \begin{split} \Vert \langle a\rangle^{2m}\gamma(t)\Vert_{L^r}+\Vert \langle \xi\rangle^{2m}\gamma(t)\Vert_{L^r}&\le 2\varepsilon_0,\\ \Vert \langle \lambda\rangle^{2m}\gamma(t)\Vert_{L^r}&\le 2\varepsilon_0 (\ln(2+t))^{2m},\\ \Vert \langle\eta\rangle^m\gamma(t)\Vert_{L^\infty}&\le 2\varepsilon_0 (\ln(2+t))^{2m}, \end{split} \end{equation} and the associated electric field decays as \begin{equation*} \begin{split} \Vert \mathcal{E}(t)\Vert_{L^\infty_x}\lesssim \varepsilon_0^2\ln(2+t)/\langle t\rangle^2. \end{split} \end{equation*} \end{theorem} \subsubsection{Derivative propagation} In order to obtain bona fide classical solutions, we need to propagate bounds on the gradient of $\gamma$. This requires considerable care, notably because the kinematic formulas are rather involved and some derivatives produce large factors of $t$. To minimize the presence of ``bad derivatives'', we make use of the fact that the two-body problem is \emph{super-integrable}: this allows us to express all kinematically relevant quantities in terms of a set of coordinates $\textnormal{SIC}$, of which all but \emph{one scalar} variable are constant under the flow of the two-body problem \eqref{KeplerODE} (and thus constant along the characteristics of the linearized problem \eqref{LinearHamiltonianFlow}). A natural such choice is the reduced basis $(\xi,\eta,{\bf u},{\bf L})$, where ${\bf u}=a^{-1}{\bf a},{\bf L}\in\mathbb{R}^3$, $\xi,\eta\in\mathbb{R}$ (see also \eqref{SIC_Intro} below), and only $\eta$ evolves in the linear problem. This collection has a built-in redundancy, as ${\bf u}\cdot{\bf L}=0$.
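To illustrate the mechanism in the angle-action variables (this is our own sketch, with the coordinates expressed in $(\vartheta,{\bf a})$ as in \eqref{SIC_Intro} below and an arbitrary choice of $q$ and data): under the linearized flow $\vartheta\mapsto\vartheta+t{\bf a}$, the quantities $\xi=q/a$, ${\bf u}={\bf a}/a$ and ${\bf L}=\vartheta\times{\bf a}$ are invariant, only $\eta=(a/q)\,\vartheta\cdot{\bf a}$ evolves, linearly in $t$, and the redundancy ${\bf u}\cdot{\bf L}=0$ is immediate:

```python
import numpy as np

q = 1.0  # arbitrary positive charge parameter

def sic(theta, avec):
    # super-integrable coordinates (xi, eta, u, L) of a point (theta, a)
    a = np.linalg.norm(avec)
    return q / a, (a / q) * (theta @ avec), avec / a, np.cross(theta, avec)

rng = np.random.default_rng(1)
theta0, avec = rng.normal(size=3), rng.normal(size=3)  # arbitrary data
xi0, eta0, u0, L0 = sic(theta0, avec)
t = 7.3
# linearized flow: theta -> theta + t*a, a unchanged
xi1, eta1, u1, L1 = sic(theta0 + t * avec, avec)
assert np.isclose(xi1, xi0) and np.allclose(u1, u0) and np.allclose(L1, L0)
assert np.isclose(eta1, eta0 + t * q**2 / xi0**3)  # only eta moves, linearly
assert abs(u0 @ L0) < 1e-12                        # built-in redundancy u.L = 0
```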
In order to work with such an \emph{overdetermined} set of coordinates, we take advantage of the symplectic structure to propagate Poisson brackets with respect to a \emph{spanning family} $\textnormal{SIC}$, i.e.\ a collection $(\{f,\cdot\})_{f\in \textnormal{SIC}}$ which spans the cotangent space. Letting $\textnormal{SIC}:=\{\xi,\eta,{\bf u},{\bf L}\}$, we then obtain a system for such derivatives of $\gamma$ that reads \begin{equation}\label{eq:derivIntro} \begin{split} \partial_t\{f,\gamma\}+\{\mathbb{H}_4,\{f,\gamma\}\}&=-\{\{f,\mathbb{H}_4\},\gamma\}=-Q\mathcal{F}_{jk}\{\widetilde{\bf X}^j,\gamma\}\{f,\widetilde{\bf X}^k\}-Q\mathcal{E}_j\{\{f,\widetilde{\bf X}^j\},\gamma\}, \end{split} \end{equation} where $\mathcal{F}=\nabla^2\psi$ denotes the Hessian of the electric potential. Since $\textnormal{SIC}$ is a spanning set, we can then resolve $\{\widetilde{\bf X},\gamma\}$ in terms of $(\{f,\gamma\})_{f\in \textnormal{SIC}}$, leading to a self-consistent system for bounds on the Poisson brackets: \begin{equation*} \begin{split} \partial_t\vert\{f,\gamma\}\vert\lesssim \sum_{g\in \textnormal{SIC}}\mathfrak{m}_{fg}\vert\{g,\gamma\}\vert. \end{split} \end{equation*} Here the coefficients $\mathfrak{m}_{fg}$ have formally enough decay, but are {\it ill-conditioned} in the sense that they do not admit bounds uniformly in the coordinates outside of a compact set. To remedy this, we introduce a set of weights $\mathfrak{w}_{f}$ and manage to propagate appropriate bounds on $\mathfrak{w}_{f}\vert\{f,\gamma\}\vert$. To account for the fact that the past ($t\to-\infty$) and future ($t\to\infty$) asymptotic velocities of a linear trajectory may differ drastically in direction, we will need to work with two different sets of spanning functions in different parts of phase space: ``past'' asymptotic action-angles in an ``incoming'' region and ``future'' asymptotic action-angles in an ``outgoing'' region. 
For simplicity, we thus prefer to work with pointwise bounds on the above symplectic gradients. Altogether, a slightly simplified version of our result concerning global propagation of derivatives is the following: \begin{proposition}[Informal Version of Proposition \ref{prop:global_derivs}]\label{prop:global_derivsIntro} Let $\gamma$ be a global solution of \eqref{NLVP} as in Theorem \ref{thm:global_momentsIntro}, and assume that for the selection of weights $\mathfrak{w}_f$ as below in \eqref{eq:def_weights} there holds that \begin{equation}\label{eq:deriv_init_Intro} \sum_{f\in \textnormal{SIC}}\norm{(\xi^{5}+\xi^{-8})\mathfrak{w}_f\{f,\gamma_0\}}_{L^\infty}\lesssim \varepsilon_0. \end{equation} Then we have that \begin{equation}\label{eq:deriv_concl_Intro} \sum_{f\in \textnormal{SIC}}\norm{\mathfrak{w}_f\{f,\gamma(t)\}}_{L^\infty}\lesssim \varepsilon_0\ln^2(2+t). \end{equation} Moreover, the electric field $\mathcal{E}$ decays at the optimal rate \begin{equation} \norm{\mathcal{E}(t)}_{L^\infty}\lesssim \varepsilon_0^2\ip{t}^{-2}. \end{equation} \end{proposition} We comment on two points: $(i)$ Control of one derivative of $\gamma$ allows us to resum the scale-localized effective electric unknowns, which implies optimal decay of the electric field (see Proposition \ref{PropEeff} below) and lays the foundation for a quantitative understanding of the asymptotic behavior. $(ii)$ There is a ``loss'' of weights in $\xi$ when we change coordinates between past and future asymptotic actions, which is reflected by the extra factors in $\xi$ we require on the initial data in \eqref{eq:deriv_init_Intro} as compared to the propagated derivatives in \eqref{eq:deriv_concl_Intro}. \subsubsection{Asymptotic behavior} As mentioned above, derivative control is tied to obtaining optimal decay for the electric field, through bounds on the \emph{effective} electric field.
Roughly speaking, once we control a derivative of the particle density $\gamma$ we can sum the uniform bounds obtained on the scale-localized effective field and obtain convergence of the effective electrical functions, which allows us to deduce that \begin{equation}\label{eq:effE_Intro} \psi(\widetilde{\bf X},t)=\frac{1}{t}\Psi^\infty({\bf a})+O(t^{-1-}),\qquad \mathcal{E}_j(\widetilde{\bf X},t)=\frac{1}{t^2}\partial_j\Psi^\infty({\bf a})+O(t^{-2-}). \end{equation} The additional ingredient we use is that, at large scales, we can perform an integration by parts that trades a derivative on the Coulomb kernel for a Poisson bracket with a \emph{constant of motion for the asymptotic flow}\footnote{Recall that a {\it constant of motion} denotes an expression of the form $F({\bf x},{\bf v},t)$ which is conserved along the flow, as opposed to an {\it integral of motion} of the form $F({\bf x},{\bf v})$ which is a function on phase space alone.} $\ddot{\bf x}=0$: for a function $\varphi({\bf x})$ of ${\bf x}$ only, there holds that \begin{equation*} \begin{split} \partial_{\bf x}\varphi=t^{-1}\{{\bf z},\varphi\},\quad{\bf z}:={\bf x}-t{\bf v}. \end{split} \end{equation*} Since ${\bf z}$ is constant along free streaming, we can show that it only increases logarithmically along the flow of \eqref{KeplerODE}. From \eqref{eq:effE_Intro} we easily obtain the asymptotic behavior: the main evolution equation for the gas distribution function $\gamma$ reads \begin{equation} \partial_t\gamma+\frac{Q}{t}\{\Psi^\infty({\bf a}),\gamma\}=l.o.t. \end{equation} To leading order, $\gamma$ is thus transported by a \emph{shear} flow, which can be integrated directly to obtain convergence of the particle distribution along \emph{modified characteristics}: \begin{equation*} \gamma(\vartheta+\ln(t)Q\nabla\Psi^\infty({\bf a}),{\bf a},t)\to\gamma_\infty(\vartheta,{\bf a}),\qquad t\to+\infty.
\end{equation*} This leads us to our final result (stated here in a version without point charge dynamics): \begin{theorem}[informal version of Theorem \ref{MainThm}]\label{thm:global_asymptIntro} Let $\gamma$ be a global solution of \eqref{NLVP} as in Proposition \ref{prop:global_derivsIntro}. Then there exist an asymptotic electric field profile $\mathcal{E}^\infty\in L^\infty(\mathbb{R}^3)$ and $\gamma_\infty\in L^\infty_{\vartheta,{\bf a}}$ such that \begin{equation} \gamma(\vartheta+\ln(t)Q\mathcal{E}^\infty({\bf a}),{\bf a},t)\to\gamma_\infty(\vartheta,{\bf a}),\qquad t\to+\infty. \end{equation} \end{theorem} \subsection{Accounting for the motion of the point charge}\label{SecAddingPC} In general, the point charge will not remain stationary, as $\ddot{\mathcal{X}}\ne 0$. However, accounting for its motion brings in significant complications. Using the macroscopic conservation laws (akin to the introduction of the reduced mass for the classical two-body problem), we can reduce the analysis, at the cost of introducing a non-Galilean frame, to a problem for the gas distribution alone, which is then transported by a self-consistent Hamiltonian flow (see Section \ref{sec:red_gas}). Here it is natural to center the phase space for the gas distribution around the point charge, so that the new position coordinates are given simply by the displacement from the point charge position $\mathcal{X}(t)$.
For the velocity we have two natural options: \begin{enumerate}[label=(\alph*)] \item Center around the \emph{asymptotic velocity} $\mathcal{V}_\infty$ of the point charge: This leads to an equation of the form \begin{equation*} \begin{split} \partial_t\mu-\{\mathbb{H},\mu\}=0,\qquad \mathbb{ H}=\frac{1}{2}\mathbb{H}_2-\mathbb{H}_4,\qquad\mathbb{H}_4({\bf x},{\bf v},t)=Q\psi({\bf x},t)+(\mathcal{V}(t)-\mathcal{V}_\infty)\cdot {\bf v}, \end{split} \end{equation*} with same linearized Hamiltonian as in \eqref{LinearHamiltonianFlow}, but adjusted perturbative term $\mathbb{H}_4$ that incorporates the asymptotic velocity through $\mathcal{V}_\infty$ of the point charge. We note that this induces a (small) uncertainty on the velocity $\mathcal{W}(t)=\mathcal{V}(t)-\mathcal{V}_\infty$, but no acceleration. This is well adapted to describing the dynamics far away from the point charge ($\aabs{{\bf x}}\gg 1$), and we thus refer to it as the \emph{``far formulation''}. \item Center around the \emph{instantaneous velocity} $\mathcal{V}(t)$ of the point charge: This is related to a Hamiltonian of the form \begin{equation*} \mathbb{ H}'=\frac{1}{2}\mathbb{H}_2-\mathbb{H}'_4,\qquad\mathbb{H}'_4({\bf x},{\bf v},t)=Q\psi({\bf x},t)-\dot{\mathcal{V}}\cdot {\bf x}. \end{equation*} This formulation introduces the acceleration $\dot{\mathcal{V}}$, which (although small) does not decay and becomes too influential when paired with $\vert{\bf x}\vert\gg 1$. We thus refer to this as the \emph{``close formulation''}, and will invoke it for comparatively small $\aabs{{\bf x}}\lesssim 1$. \end{enumerate} In practice, we will thus have to work with both formulations (and their respective sets of asymptotic action-angle variables). 
In particular, we will split phase space into ``close'' and ``far'' regions $\Omega_t^{cl}$ and $\Omega_t^{far}$, in which we work with the corresponding close and far formulations and propagate separately moment and derivative control, with a transition between them when a given trajectory passes from one region to the other. We emphasize that by construction each type of transition (far to close resp.\ close to far) happens at most once per trajectory. Although each such transition introduces some losses in $\xi$ weights (similarly to the case of past vs.\ future asymptotic actions), this is simple to account for. In conclusion, analogous results to Theorem \ref{thm:global_momentsIntro} and Proposition \ref{prop:global_derivsIntro} can also be established for any initial condition on the point charge position and velocity. In particular, one sees that its acceleration is given by an electric field that decays like $t^{-2}$ to leading order, resulting in a logarithmically corrected point charge trajectory in our final result Theorem \ref{MainThm} below. \subsection{Further remarks and perspectives}\label{ssec:prespectives} We comment on some more points of relevance: \begin{enumerate}[wide] \item \emph{The role of super-integrability:} The Kepler problem is \emph{super-integrable} and as such admits $5$ independent conserved quantities. When doing computations, and especially when computing derivatives, it is useful to isolate as much as possible the conserved coordinates, for which one can hope to obtain uniform bounds, from the dynamical quantities, which lead to large derivatives.
Thus we do many computations using {\it super-integrable coordinates} which are derived from the asymptotic action-angle variables as follows: \begin{equation}\label{SIC_Intro} \begin{split} \xi:=\frac{q}{a},\quad\eta:=\frac{a}{q}\vartheta\cdot{\bf a},\qquad{\bf u}:=\frac{{\bf a}}{a},\quad{\bf L}:=\vartheta\times{\bf a}, \end{split} \end{equation} and we note that the linear flow is simple in these coordinates: $(\xi(t),\eta(t),{\bf u}(t),{\bf L}(t))=(\xi_0,\eta_0+tq^2\xi_0^{-3},{\bf u}_0,{\bf L}_0)$ and only $1$ scalar coordinate changes over time (out of $7$). \item \emph{Types of trajectories:} It is worth distinguishing several types of trajectories in the linearized problem with respect to the above close and far regions of phase space $\Omega_t^{cl}=\{\aabs{{\bf x}}\leq 10\ip{t}\}$ and $\Omega_t^{far}=\{\aabs{{\bf x}}\geq \ip{t}\}$. For relatively small actions $a$, particles remain far from the point charge and move slowly. In particular, depending on their initial location, they may start in the far or close region, but will end up in the close region. In contrast, for large velocities, trajectories may start far away with high velocity, come close to the point charge and then speed off again, passing from $\Omega_t^{far}$ to $\Omega_t^{cl}$ and back to $\Omega_t^{far}$. Together with the distinction between incoming/outgoing dynamics, this gives four dynamically relevant, distinct regions of phase space (see also Remark \ref{rem:regions}). \item \emph{Possible simplifications:} We emphasize that as discussed, the analysis simplifies significantly if the charge has no dynamics, i.e.\ if $(\mathcal{X}(t),\mathcal{V}(t))\equiv (0,0)$. In a similar vein, if the initial gas distribution has compact support (and the support of velocities is thus bounded from below and above), it suffices to work with either close or far formulation. This shows a clear benefit and drastic simplification if an assumption of compact support is made.
\end{enumerate} \paragraph{\emph{Future Perspectives}}\label{ssec:futpersp} We hope that the methods introduced in this work provide a template for the analysis of a large class of (collisionless) kinetic problems, for which the linearized equation is given as transport by a completely integrable ODE whose trajectories are open.\footnote{In the case of more complicated (e.g.\ non integrable) ODEs, already the linear analysis may be very challenging, and even for systems relatively close to the $2$-body problem one would need to account for Arnold diffusion (see e.g.\ \cite{KL2008}).} To be concrete, we highlight some examples to this effect and further related open problems, on which we hope the analysis developed here can shed some new light: \begin{enumerate}[leftmargin=*] \item The gravitational case of \emph{attractive} interactions between the point charge and gas is of great importance in astrophysics. Despite this, as briefly hinted at above, its mathematical investigation is in its infancy: strong solutions are not even known to exist locally in time; only global weak solutions have been constructed \cite{CMMP2012,CZW2015}. A key mathematical challenge lies in the presence of trapped trajectories, which already arise in the linearized system and drastically hinder the stabilization effect of dispersion. In this context, we expect Proposition \ref{PropAA} to extend to the region of positive energy, while the transition to zero and negative energies (with parabolic or elliptic orbits) introduces significant new challenges. This work should also inform on the modifications needed to account for the geometry of such trajectories. \item Along similar lines, the case of several species would also be relevant in plasma physics and may pose related challenges.
On the other hand, even for repulsive interactions it would be interesting to consider a perturbation of several point charges, i.e.\ a solution to the $N$-body problem to which a small, smooth gas distribution is added. The natural starting point here is the case of two charges surrounded by a gas, which already at the linear level brings the (restricted) $3$-body problem into play. We refer to \cite{CdPD2010,KMR2009} for works in this direction for Vlasov-Poisson and related equations. \item We believe that the Vlasov-Poisson evolution of measures which are not absolutely continuous with respect to the Liouville measure is an interesting general problem which merits further investigation. Here and in \cite{CM2010,CMMP2012,CLS2018,DMS2015,LZ2017,LZ2018,MMP2011,PW2020}, the case of a sum of a pure point and a smooth density is considered, but it would be interesting to have examples where the support of the measure has (say) intermediate Hausdorff dimension (as suggested e.g.\ by some models of star formation, see e.g.\ \cite[Sec.\ 9.6.2]{MvdBW2010}). \end{enumerate} \subsection{Organization of the paper} We conclude this introduction by showing in Section \ref{sec:red_gas} how to reduce the equations \eqref{VPPC} to a problem on the gas distribution alone. Section \ref{SecLin} then studies the dynamics in the linearized problem, starting with more explanations on the method of asymptotic action (Section \ref{GenComAA}). Following this, we discuss the Kepler problem (Section \ref{ssec:Kepler}) and solve the related scalar scattering problem (Section \ref{ssec:planar}), which then allows us to introduce the asymptotic action-angle variables (Section \ref{ssec:AngleAction}). Building on this, further coordinates are introduced in Section \ref{ssec:furthercoords}. Section \ref{ssec:kinematics} establishes quantitative bounds on some of the kinematically relevant quantities, including in particular their Poisson brackets with various coordinates.
The electric functions are studied in Section \ref{sec:efield}. We first show that they are well approximated by simpler effective functions (Proposition \ref{PropControlEF}), and obtain convergence of the effective fields (Proposition \ref{PropEeff}), building on the moment and derivative control available. Section \ref{SecBootstrap} establishes the main bootstrap arguments for the propagation of moment and derivative control. We introduce the main nonlinear unknowns in Section \ref{Sec:NLUnknowns}, and first close a bootstrap involving only moment bounds in Section \ref{sec:moments_prop}. Building on this, a second (and much more involved) bootstrap then yields control of derivatives in Section \ref{sec:derivs_prop}. Finally, in Section \ref{sec:main-asympt} we derive the asymptotic behavior of the gas distribution function and prove our main Theorem \ref{MainThm}. In Appendix \ref{sec:appdx_trans}, we collect some auxiliary results. \paragraph{\textbf{Notation}} We will use the notation $A\lesssim B$ to denote the existence of a constant $C>0$ such that $A\leq C B$, when $C>0$ is independent of quantities of relevance, and write $A\lesssim_p B$ to highlight a dependence of $C$ on a parameter $p$. Moreover, to simplify some expressions we shall use the slight modification of the standard Japanese bracket $\ip{r}:=(4+r^2)^{1/2}$, $r\in\mathbb{R}$, so that in particular $1\lesssim\ln\ip{0}$. \subsection{Reduction to a problem for the gas distribution alone}\label{sec:red_gas} The system \eqref{VPPC} can be transformed into a system that better accounts for the linear dynamics and is more easily connected to the case of radial data already investigated in \cite{PW2020}. For this, it will be convenient to recenter the phase space at the point charge. \subsubsection{Conservation laws} In a similar way as for the standard $2$-body problem, one can simplify the system somewhat by using the conservation laws.
In order to study conserved quantities, it is convenient to observe that, when $\mu$ solves \eqref{VPPC}, for any function $\omega({\bf x},{\bf v},t)$, there holds that \begin{equation}\label{ConservationLawsComp} \begin{split} \frac{d}{dt}\iint\mu^2 \omega d{\bf x}d{\bf v}=\iint\mu^2\left(\partial_t+{\bf v}\cdot\nabla_{\bf x}+\left(\frac{q}{2}\frac{{\bf x}-\mathcal{X}}{\vert {\bf x}-\mathcal{X}\vert^3}+Q\nabla_{\bf x}\phi\right)\cdot\nabla_{\bf v}\right)\omega\,\, d{\bf x}d{\bf v}. \end{split} \end{equation} By testing with various choices of $\omega$, one can obtain conservation laws. We will do this for the first three moments in ${\bf v}$. The total charge is conserved and using $\omega=1$ in \eqref{ConservationLawsComp} leads to conservation of the $0$-moment: \begin{equation*} \begin{split} \frac{d}{dt}M_g(\mu)=0,\qquad M_g(\mu):=m_g\iint \mu^2\, d{\bf x}d{\bf v}. \end{split} \end{equation*} The total momentum is conserved and using $\omega={\bf v}$ in \eqref{ConservationLawsComp} leads to conservation of the $1$-moment: \begin{equation*} \begin{split} \frac{d}{dt}P(\mu)=0,\qquad P(\mu)&:=m_g\iint \mu^2 {\bf v}\,d{\bf x}d{\bf v}+M_c\mathcal{V}. \end{split} \end{equation*} Finally, the total energy is conserved and this leads to conservation of the $2$-moment: \begin{equation*} \begin{split} \frac{d}{dt}E(\mu,\mathcal{X},\mathcal{V})=0,\qquad E(\mu,\mathcal{X},\mathcal{V})&:=m_g\iint \mu^2\left(\vert {\bf v}\vert^2-2Q\phi\right)\,d{\bf x}d{\bf v}+\left(M_c\vert \mathcal{V}\vert^2-2\frac{q_cq_g}{\epsilon_0}\phi(\mathcal{X},t)\right). \end{split} \end{equation*} \subsubsection{Modulation and reduced equations} Since the Vlasov-Poisson system is invariant by Galilean transformation, we can choose a frame where the total momentum vanishes: $P(\mu)=0$. This determines the motion of the point charge in terms of the motion of the gas: \begin{equation*} \begin{split} \mathcal{V}(t)&:=-\frac{m_g}{M_c}\iint\mu^2 {\bf v}\, d{\bf x}d{\bf v}. 
\end{split} \end{equation*} As explained in Section \ref{SecAddingPC}, we will need a ``close'' and a ``far'' chart, which we introduce next. \subsubsection*{Far formulation} Given a solution $\mu$ on some time interval $[0,T^\ast)$, we define \begin{equation}\label{DefVinfty} \mathcal{V}_\infty:=\lim_{t\to T^\ast}\mathcal{V}(t),\qquad\mathcal{W}(t):=\mathcal{V}(t)-\mathcal{V}_\infty, \end{equation} and we introduce the new unknowns \begin{equation}\label{NewVariables} \begin{split} {\bf y}&:={\bf x}-\mathcal{X},\qquad {\bf w}:={\bf v}-\mathcal{V}_\infty,\\ \nu({\bf y},{\bf w},t)&:=\mu({\bf y}+\mathcal{X},{\bf w}+\mathcal{V}_\infty,t),\qquad\mu({\bf x},{\bf v},t)=\nu({\bf x}-\mathcal{X},{\bf v}-\mathcal{V}_\infty,t). \end{split} \end{equation} The new equation, in terms of $\nu({\bf y},{\bf w},t)$, becomes self-consistent with a parameter $\mathcal{V}_\infty$: \begin{equation}\label{NewVP} \begin{split} &\left(\partial_t+{\bf w}\cdot\nabla_{\bf y}+\frac{q}{2}\frac{{\bf y}}{\vert {\bf y}\vert^3}\cdot\nabla_{\bf w}\right)\nu+Q\nabla_{\bf y}\psi\cdot\nabla_{\bf w}\nu=\mathcal{W}(t)\cdot\nabla_{\bf y}\nu,\\ &\psi({\bf y},t)=\phi({\bf y}+\mathcal{X},t)=-\frac{1}{4\pi}\iint \frac{1}{\vert {\bf y}-{\bf r}\vert}\nu^2({\bf r},{\boldsymbol\pi},t)d{\bf r} d{\boldsymbol\pi},\\ &\mathcal{W}(t)=-\frac{M_g+M_c}{M_c}\mathcal{V}_\infty-\frac{m_g}{M_c}\iint {\bf w}\nu^2({\bf y},{\bf w},t)\,\, d{\bf y}d{\bf w}. \end{split} \end{equation} We introduce the Hamiltonians: \begin{equation}\label{Hamiltonians} \begin{split} \mathbb{H}&=\frac{1}{2}\mathbb{H}_2-\mathbb{H}_4,\qquad \mathbb{H}_2({\bf y},{\bf w}):=\vert {\bf w}\vert^2+\frac{q}{\vert {\bf y}\vert},\qquad\mathbb{H}_4({\bf y},{\bf w},t)=Q\psi({\bf y},t)+\mathcal{W}(t)\cdot {\bf w}. \end{split} \end{equation} Note that $\mathbb{H}_2$ is independent of the unknown $\nu$ and will give the linearized equation, while $\mathbb{H}_4=O(\nu^2)$ decays in time.
The density $\nu$ is transported by the corresponding Hamiltonian vector field in \eqref{Hamiltonians} in the sense that the first equation in \eqref{NewVP} is equivalent to \begin{equation} \partial_t\nu-\{\mathbb{H},\nu\}=0. \end{equation} \begin{lemma} Let $C^1$ functions $(\mu,\nu,\mathcal{X},\mathcal{V})$ on a time interval $[0,T^\ast)$ and a constant $\mathcal{V}_\infty\in\mathbb{R}^3$ be related by \eqref{DefVinfty}-\eqref{NewVariables}. Then $(\mu,\mathcal{X},\mathcal{V})$ solve \eqref{VPPC} if and only if $(\nu,\mathcal{V}_\infty)$ solves \begin{equation}\label{HamiltonianTransport} \begin{split} \partial_t\nu-\{\mathbb{H},\nu\}=0,\qquad\lim_{t\to T^\ast}\mathcal{W}(t)=0, \end{split} \end{equation} with $\mathbb{H}$ defined in \eqref{Hamiltonians}. \end{lemma} \begin{proof} We recall that $\frac{d\mathcal{X}}{dt}=\mathcal{V}$. With the notations in \eqref{NewVariables}, we have \begin{equation} \begin{split} \nabla_{\bf y}\nu({\bf y},{\bf w},t)=\nabla_{\bf x}\mu({\bf x},{\bf v},t),&\qquad\nabla_{\bf w}\nu({\bf y},{\bf w},t)=\nabla_{\bf v}\mu({\bf x},{\bf v},t),\\ \partial_t\nu({\bf y},{\bf w},t)=(\partial_t+\mathcal{V}\cdot\nabla_{\bf x})\mu({\bf x},{\bf v},t),&\qquad\nabla_{\bf y}\psi({\bf y},t)=\nabla_{\bf x}\phi({\bf x},t). \end{split} \end{equation} Then, in terms of $\mu, {\bf x},{\bf v}$, the first equation in \eqref{NewVP} becomes \begin{equation} \left(\partial_t+\mathcal{V}(t)\cdot\nabla_{\bf x}+({\bf v}-\mathcal{V}_\infty)\cdot\nabla_{\bf x}+\frac{q}{2}\frac{{\bf x}-\mathcal{X}}{\vert {\bf x}-\mathcal{X}\vert^3}\cdot\nabla_{\bf v}\right)\mu+Q\nabla_{\bf x}\phi\cdot\nabla_{\bf v}\mu=(\mathcal{V}(t)-\mathcal{V}_\infty)\cdot\nabla_{\bf x}\mu, \end{equation} which is exactly the first equation in \eqref{VPPC}.
\end{proof} \subsubsection*{Close formulation}\label{ssec:pinnedframe} Close to the point charge / for large velocities, we will prefer to center our coordinate frame around the instantaneous velocity of the point charge, and thus let \begin{equation}\label{CenteredCoordinates} {\bf y}^\prime={\bf y}={\bf x}-\mathcal{X}(t),\qquad {\bf w}^\prime={\bf v}-\mathcal{V}(t)={\bf w}-\mathcal{W}(t). \end{equation} This can be obtained by means of the generating function $S({\bf y}^\prime,{\bf w},t)={\bf y}^\prime\cdot({\bf w}-\mathcal{W}(t))$ and leads to the new Hamiltonian \begin{equation}\label{NewHamiltonian2} \begin{split} \mathbb{H}^{\prime}({\bf y'},{\bf w'})=\mathbb{H}({\bf y},{\bf w})-\partial_tS=\frac{1}{2}\mathbb{H}_2({\bf y}^\prime,{\bf w}^\prime)-Q\psi({\bf y}^\prime,t)+\dot{\mathcal{W}}\cdot {\bf y}^\prime-\frac{1}{2}\vert\mathcal{W}\vert^2. \end{split} \end{equation} We also note for further use that \begin{equation}\label{DerW} \dot{\mathcal{W}}(t)=\mathcal{Q}\nabla_{\bf y}\psi(0,t). \end{equation} We let \begin{equation} \nu'({\bf y'},{\bf w'},t):=\mu({\bf y'}+\mathcal{X},{\bf w'}+\mathcal{V}(t),t). \end{equation} Then equation \eqref{VPPC} is equivalent to \begin{equation}\label{equivVPPCNu} \partial_t\nu'-\{\mathbb{H}^{\prime},\nu'\}=0, \end{equation} or \begin{equation}\label{NewVP2} \begin{split} \left(\partial_t+{\bf w}'\cdot\nabla_{\bf y'}+\frac{q}{2}\frac{{\bf y'}}{\vert {\bf y'}\vert^3}\cdot\nabla_{\bf w'}\right)\nu'+Q\nabla_{\bf y'}\psi\cdot\nabla_{\bf w'}\nu' &=\mathcal{Q}\nabla_{\bf y'}\psi(0,t)\cdot\nabla_{\bf w'}\nu',\\ \psi({\bf y'},t)&=\phi({\bf y'}+\mathcal{X},t). \end{split} \end{equation} \section{Analysis of the linearized flow}\label{SecLin} In this section, we solve the linearized equation \eqref{LinearHamiltonianFlow} using asymptotic action angle variables and introduce various other adapted coordinates. 
\subsection{General comments on the asymptotic action-angle method}\label{GenComAA} \subsubsection{Overview of the method of asymptotic actions} There is no general way to find ``good choices'' of action-angle variables $(\Theta,\mathcal{A})$ besides trial and error; however, a few guidelines can be useful. \begin{enumerate} \item It is desirable that the change of variable $({\bf x},{\bf v})\mapsto(\vartheta,{\bf a})$ be canonical, i.e. $d{\bf x}\wedge d{\bf v}=d\vartheta\wedge d{\bf a}$. This in particular ensures that the Jacobian of the change of variable is $1$. A good way to enforce this is to use a {\it generating function} $S({\bf x},{\bf a})$ such that ${\bf v}=\nabla_{\bf x} S$ and $\vartheta=\nabla_{\bf a} S$. In this case \begin{equation*} 0=ddS=d\left(\frac{\partial S}{\partial {\bf x}^j}d{\bf x}^j+\frac{\partial S}{\partial {\bf a}^j}d{\bf a}^j\right)=d({\bf v}d{\bf x}+\vartheta d{\bf a}). \end{equation*} Such functions are, however, often difficult to find explicitly. \item Since we are concerned with long-time behavior, we choose ${\bf a}$ so that it captures the dispersive nature of the problem, i.e. so that trajectories corresponding to choices of ${\bf a}$ differing by $\delta{\bf a}$ diverge like $t\,\delta{\bf a}$. In scattering situations, one has a natural Hamiltonian at $\infty$, often $H_\infty=\vert {\bf v}_\infty\vert^2/2$, and a useful choice is ${\bf a}={\bf v}_\infty$. In this case, in the simplest situation, one needs to solve a scattering problem to define ${\bf v}=V({\bf x},{\bf v}_\infty)$ (i.e. compute the full trajectory knowing the incoming velocity and the position at one time), then integrate $V$ to recover the generating function \begin{equation*} \begin{split} V({\bf x},{\bf v}_\infty)=(\nabla_{\bf x}S)({\bf x},{\bf v}_\infty), \end{split} \end{equation*} and then deduce the angle $\vartheta$.
\end{enumerate} Note that this leads to a cascade of Hamiltonians which describe various components of the dynamics: $\mathbb{H}\to \mathbb{H}_2$ to describe solutions to the perturbed problem as envelopes of solutions to the linearized problem, and $\mathbb{H}_2\mapsto H_\infty$ to describe solutions of the linearized problem via their asymptotic state. \subsubsection{The case of radial data} The strategy laid out above can be most easily carried out for the case of radial data, a problem already treated in \cite{PW2020}. Starting from the energy, we can express the outgoing velocity as a function of $a$ and $r$: \begin{equation}\label{RadialV} \begin{split} V(r,a)=\sqrt{a^2-\frac{q}{r}}=a\sqrt{1-\frac{q}{ra^2}} \end{split} \end{equation} and integrating in $r$, we find the generating function: \begin{equation}\label{GeneratingFunctionRadial} \begin{split} S(r,a)=\frac{q}{a}K(\frac{ra^2}{q})=ar_{min}K(\frac{r}{r_{min}}),\qquad\frac{\partial S}{\partial r}=V, \end{split} \end{equation} where $K$ is defined in \eqref{DefK} below. This gives a formula for the angle \begin{equation*} \begin{split} \Theta&=\frac{\partial S}{\partial a}=-\frac{q}{a^2}K(\frac{ra^2}{q})+2r\sqrt{1-\frac{q}{ra^2}}=-\frac{q}{a^2}K(\frac{ra^2}{q})+2r\frac{V}{a}. \end{split} \end{equation*} This simplifies to \begin{equation}\label{NewDefinitionTheta} \begin{split} \Theta&=-r_{min}K(\frac{r}{r_{min}})+2\frac{rv}{a}=-r_{min}K(\frac{r}{r_{min}})+2\frac{rv}{\sqrt{v^2+\frac{q}{r}}}, \end{split} \end{equation} and since $r_{min},a$ are invariant along the trajectory, we can verify that \begin{equation*} \begin{split} \dot{\Theta}&=-\dot{r}K^\prime(\frac{r}{r_{min}})+2\frac{\dot{r}v}{a}+2\frac{r\dot{v}}{a}=-\frac{v^2}{a}+2\frac{v^2}{a}+\frac{q}{ra}=a, \end{split} \end{equation*} where we have used that \begin{equation*} \begin{split} K^\prime(\frac{r}{r_{min}})=\sqrt{1-\frac{r_{min}}{r}}=\frac{v}{a},\qquad \dot{r}=v,\qquad\dot{v}=\frac{q}{2r^2}. 
\end{split} \end{equation*} In the discussion above, we have used the function $K$ defined by \begin{equation}\label{DefK} \begin{split} K^\prime(s)=\left[1-\frac{1}{s}\right]^\frac{1}{2},\qquad K(1)=0,\qquad K(s):=\sqrt{s(s-1)}-\ln(\sqrt{s}+\sqrt{s-1}). \end{split} \end{equation} Since we can verify that \begin{equation*} \begin{split} G(s)=2\sqrt{s(s-1)}-K(s),\qquad G^\prime(s)=\left[1-\frac{1}{s}\right]^{-\frac{1}{2}},\quad G(1)=0, \end{split} \end{equation*} we see that, in the outgoing case $v>0$, this gives a similar choice of unknown as in \cite{PW2020}, but through a different approach. In the incoming case $v<0$, we need to change sign in \eqref{RadialV}, and this leads to the new generating function \begin{equation*} \begin{split} S_{in}(r,a):=-\frac{q}{a}K(\frac{ra^2}{q}), \end{split} \end{equation*} and in turn we obtain \begin{equation*} \begin{split} \Theta&=r_{min}K(\frac{r}{r_{min}})-2r\sqrt{1-\frac{q}{ra^2}}=\frac{v}{\vert v\vert}\left[-r_{min}K(\frac{r}{r_{min}})+2r\frac{\vert v\vert}{a}\right]=r_{min}\frac{v}{\vert v\vert}G(\frac{r}{r_{min}}). \end{split} \end{equation*} We thus see that, in the radial case, the scattering problem is degenerate in the sense that there are, in general, {\it two} trajectories that pass through $r$ at time $t$ and have asymptotic velocity $a$ (one incoming, one outgoing). This fold degeneracy can be resolved by introducing a function $\sigma=v/\vert v\vert=\theta/\vert\theta\vert$ and defining \begin{equation*} \begin{split} S_{tot}=\sigma\frac{q}{a}K(\frac{ra^2}{q}) \end{split} \end{equation*} with a choice of sign that is not expressed in terms of $(r,a)$ alone. \subsection{Angle-Action coordinates for the Kepler problem}\label{ssec:Kepler} We want to study the Kepler system: \begin{equation}\label{ODE} \begin{split} \frac{dX}{dt}=V,\qquad\frac{dV}{dt}=\frac{qX}{2\vert X\vert^3}, \end{split} \end{equation} in the repulsive case $q>0$.
All the orbits are open and can be described using the conservation of energy $H$, angular momentum ${\bf L}$ and {\it Runge-Lenz} vector $\bf{R}$, \begin{equation}\label{ConservedQuantitiesODE} \begin{split} H&:=\vert{\bf v}\vert^2+\frac{q}{\vert{\bf x}\vert},\qquad {\bf L}:={\bf x}\times{\bf v},\qquad{\bf R}:={\bf v}\times {\bf L}+\frac{q}{2}\frac{{\bf x}}{\vert{\bf x}\vert}. \end{split} \end{equation} In the following formulas, it will be very useful to keep track of the homogeneity and we note that\footnote{Here $\dm{{\bf x}}$ denotes the dimension of spatial length, $\dm{t}$ the dimension of time, $\dm{{\bf v}}=\dm{{\bf x}}\dm{t}^{-1}$ the dimension of velocity and $\dm{q}=\dm{{\bf x}}^3\dm{t}^{-2}$ the dimension of an electric charge.} \begin{equation}\label{eq:dims} \begin{split} \dm{H}=\frac{\dm{{\bf x}}^2}{\dm{t}^2}=\dm{{\bf v}}^2=\frac{\dm{q}}{\dm{{\bf x}}},\qquad \dm{q}=\dm{\mathbf{R}}=\dm{{\bf x}}\dm{{\bf v}}^2,\qquad \dm{{\bf L}}=\dm{{\bf x}}\dm{{\bf v}}. \end{split} \end{equation} The conservation laws \eqref{ConservedQuantitiesODE} give $5$ functionally independent conservation laws, although of course only $3$ of these can be in involution. A classical choice is $\{H,\vert{\bf L}\vert,L_3\}$ which leads to the classical solution of the $2$-body problem by reducing it to a planar problem. Indeed, if one chooses the $z$ axis such that $L_1=L_2=0$, then they remain $0$ for all time and the motion remains in the plane $\{z=0\}$. This allows us to exhibit a convenient action-angle transformation satisfying the asymptotic action property \eqref{AsymptoticActions}. 
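The conservation of $H$, ${\bf L}$ and ${\bf R}$ along \eqref{ODE} is also easy to confirm numerically. The following minimal sketch (not part of the argument; the charge $q=1$, the initial data, the step size and the fourth-order Runge--Kutta integrator are all illustrative choices) integrates the repulsive Kepler flow and measures the drift of the three invariants:

```python
import math

q = 1.0  # assumed repulsive charge (any q > 0 works for this check)

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

def invariants(s):
    # H = |v|^2 + q/|x|, L = x x v, R = v x L + (q/2) x/|x|
    x, v = s[:3], s[3:]
    r = math.sqrt(sum(c*c for c in x))
    H = sum(c*c for c in v) + q / r
    L = cross(x, v)
    vL = cross(v, L)
    R = [vL[i] + 0.5 * q * x[i] / r for i in range(3)]
    return H, L, R

def field(s):
    # dx/dt = v, dv/dt = (q/2) x/|x|^3
    x, v = s[:3], s[3:]
    r3 = sum(c*c for c in x) ** 1.5
    return v + [0.5 * q * c / r3 for c in x]

def rk4_step(s, dt):
    k1 = field(s)
    k2 = field([a + 0.5*dt*b for a, b in zip(s, k1)])
    k3 = field([a + 0.5*dt*b for a, b in zip(s, k2)])
    k4 = field([a + dt*b for a, b in zip(s, k3)])
    return [a + dt*(b1 + 2*b2 + 2*b3 + b4)/6
            for a, b1, b2, b3, b4 in zip(s, k1, k2, k3, k4)]

s = [1.0, 2.0, -1.0, 0.5, -0.3, 0.2]  # arbitrary initial (x, v)
H0, L0, R0 = invariants(s)
for _ in range(4000):
    s = rk4_step(s, 1e-3)
H1, L1, R1 = invariants(s)
errH = abs(H1 - H0)
errL = max(abs(a - b) for a, b in zip(L1, L0))
errR = max(abs(a - b) for a, b in zip(R1, R0))
```

Since the flow is repulsive, the trajectory stays away from the singularity, and all three drifts remain at the level of the integrator error.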
We define the phase space to be \begin{equation*} \begin{split} \mathcal{P}_{{\bf x},{\bf v}}:=\{({\bf x},{\bf v})\in\mathbb{R}^3\times\mathbb{R}^3:\,\, \vert{\bf x}\vert>0\},\qquad\mathcal{P}_{\vartheta,{\bf a}}:=\{(\vartheta,{\bf a})\in\mathbb{R}^3\times\mathbb{R}^3:\,\,\vert{\bf a}\vert>0\}, \end{split} \end{equation*} where the angles $\vartheta$ resp.\ actions ${\bf a}$ will have the dimension of length $\dm{\vartheta}=\dm{{\bf x}}$ resp.\ velocity $\dm{{\bf a}}=\dm{{\bf v}}$. Our main result in this section is the construction of asymptotic action-angle variables in Proposition \ref{PropAA}, which will provide adapted coordinates for the phase space. This will be proved later in Section \ref{ssec:AngleAction} after we have solved a scattering problem. Here we highlight that, being by construction a \emph{canonical transformation}, the change of variables $\mathcal{T}$ preserves the Liouville measure \begin{equation} \det\frac{\partial(\Theta,\mathcal{A})}{\partial({\bf x},{\bf v})}=\det\frac{\partial({\bf X},{\bf V})}{\partial(\vartheta,{\bf a})}=1. \end{equation} \begin{remark}\label{rem:virials} Using that \eqref{ODE} is super-integrable, one can express $5$ of the $6$ coordinates of a trajectory in terms of conserved quantities. The last one corresponds to the ``trace'' of time and cannot be deduced from the conservation laws alone. In our case, good proxies for the ``trace'' of time are the quantities ${\bf x}\cdot {\bf v}$ and $\vartheta\cdot{\bf a}$. That these measure elapsed time is obvious in action-angle coordinates from \eqref{LinearizationFlow}. For the physical variables, this follows from Virial-type computations \begin{equation}\label{eq:virials} \begin{split} \frac{d}{dt}\frac{\vert {\bf x}\vert^2}{2}={\bf x}\cdot{\bf v},\qquad \frac{d}{dt}({\bf x}\cdot{\bf v})=\vert{\bf v}\vert^2+\frac{q}{2\vert{\bf x}\vert}=\frac{\vert{\bf v}\vert^2}{2}+\frac{1}{2}H.
\end{split} \end{equation} \end{remark} Although not very illuminating, we can obtain explicit expressions using \eqref{ExplicitA1} and \eqref{DefTheta}: \begin{equation}\label{eq:explicitATheta} \begin{split} \mathcal{A}({\bf x},{\bf v})&=\frac{2q\sqrt{H}}{4HL^2+q^2}{\bf R}+\frac{4H}{4HL^2+q^2}{\bf L}\times {\bf R},\\ \Theta({\bf x},{\bf v})&=\frac{\iota}{2}\vert{\bf x}\vert K^\prime(\rho)\left(\frac{\bf x}{\vert{\bf x}\vert}+\frac{\bf a}{\vert{\bf a}\vert}\right)+\frac{\vert{\bf x}\vert}{2}\left(\frac{\bf x}{\vert{\bf x}\vert}-\frac{\bf a}{\vert{\bf a}\vert}\right)-\sigma(\rho)\frac{\bf a}{\vert{\bf a}\vert^3},\qquad{\bf a}=\mathcal{A}({\bf x},{\bf v}), \end{split} \end{equation} where $K$ is defined in \eqref{DefK}, and \begin{equation*} \begin{split} \iota&=\hbox{sign}({\bf x}\cdot{\bf v}+\sqrt{H}L^2/q),\qquad\sigma=-\iota\ln(\sqrt{\rho}+\sqrt{\rho-1}),\\ \rho&=\frac{\sqrt{H}}{2q}\left(\vert{\bf x}\vert\sqrt{H}+\frac{1}{4HL^2+q^2}(4HL^2{\bf x}\cdot{\bf v}+q^2\sqrt{H}\vert{\bf x}\vert+2qL^2\sqrt{H})\right). \end{split} \end{equation*} This follows combining \eqref{DefGamma1} and \eqref{DefTheta} for $\iota$, \eqref{def:sigma} for $\sigma$ and \eqref{eq:def_rho2} and \eqref{AxAXperp} for $\rho$. In the radial case, ${\bf L}=0$, $\rho=r/r_{min}$ and we recover the formulas from the paper \cite{PW2020}. Explicit expressions for ${\bf X}(\vartheta,{\bf a}),{\bf V}(\vartheta,{\bf a})$ will be most useful and are given in \eqref{ExpressionsX} and \eqref{ExpressionsV} below. \subsubsection{Planar dynamics}\label{ssec:planar} To study the geometry of trajectories in the Kepler problem \eqref{ODE}, we note that by the conservation laws \eqref{ConservedQuantitiesODE} we may choose coordinates such that the motion takes place in the $xy$-plane with angular momentum $(0,0,L)$ for some $L>0$. Switching to polar coordinates $(r,\phi)$\footnote{Note that ${\bf x}=r(\cos\phi,\sin{\phi},0)$. 
We recall the basis vectors ${\bf e}_r=(\cos\phi,\sin\phi,0)$ and ${\bf e}_\phi=(-\sin\phi,\cos\phi,0)$.}, we find that \begin{equation}\label{ConservationLaws2dplanar} \begin{split} H=\dot{r}^2+r^2\dot{\phi}^2+\frac{q}{r},\qquad L=r^2\dot{\phi},\qquad{\bf R}=\left(\frac{L^2}{r}+\frac{q}{2}\right){\bf e}_r-\dot{r}L{\bf e}_\phi, \end{split} \end{equation} and these can be used to integrate the equation. \begin{figure} \caption{A sample trajectory of the Kepler problem (in blue), with conserved quantities (in red).} \label{fig:cons-laws} \end{figure} More precisely, we have that \begin{equation} \frac{d{\bf v}}{dt}=\frac{q}{2r^2}(\cos\phi,\sin\phi,0), \quad L=r^2\frac{d\phi}{dt}\qquad\Rightarrow\qquad \frac{d{\bf v}}{d\phi}=\frac{q}{2L}(\cos\phi,\sin\phi,0), \end{equation} and hence \begin{equation} {\bf v}(\phi)=\frac{q}{2L}(\sin\phi,-\cos\phi,0)+(c_1,c_2,0), \end{equation} where ${\bf c}=(c_1,c_2,0)$ is the constant of integration. Thus ${\bf v}$ moves along a circle (the so-called \emph{``velocity circle''} -- see e.g.\ \cite{Mil1983} and Figure \ref{fig:velocity-circle}) with center ${\bf c}$ and satisfies ${\bf v-c\perp x}$, with periapsis in direction $-{\bf c}^\perp$.\footnote{For a vector ${\bf c}=(c_1,c_2,0)$, we denote ${\bf c}^\perp=(-c_2,c_1,0)$.} As is well known, in the repulsive case ($q>0,H>0$) the trajectories are hyperbolas, and asymptotic velocities ${\bf v_{\pm\infty}}$ are thus well-defined.\footnote{In the present formulation, this can be seen for example by assuming that the velocity circle is centered on the positive $y$-axis, i.e.\ $c_1=0,c_2=\epsilon\cdot\frac{q}{2L}\geq 0$, so that from the equation $(0,0,L)={\bf x}\times{\bf v}$ it follows that $r=\frac{2L^2}{q}\frac{1}{\epsilon\cos\phi-1}$, where $\epsilon>1$ is the eccentricity of the hyperbola.} \begin{figure} \caption{An illustration of a sample trajectory of the Kepler problem with associated velocity circle (in blue) and asymptotic velocities ${\bf v}_{-\infty}$ and ${\bf v}_\infty
(in red).} \label{fig:velocity-circle} \end{figure} \subsubsection*{Scattering problem} In order to obtain our asymptotic action, we need to understand to what extent knowledge of ${\bf x}$ and ${\bf v}_\infty$ allows one to determine the full trajectory, i.e. to solve the following problem: ``find the trajectories passing through ${\bf x}$ at time $t$ and whose (forward) asymptotic velocity is ${\bf v}_\infty$''. Inspecting the behavior of the conservation laws at $\infty$ and at periapsis, we find that\footnote{Here and in the following, given a nonzero vector ${\bf v}$, we denote by $\hat{\bf v}={\bf v}/\vert{\bf v}\vert$ its direction vector.} \begin{equation}\label{ConsLaws2dPlanar2} \begin{split} H=\frac{L^2}{r_{min}^2}+\frac{q}{r_{min}}=\vert{\bf v}_\infty\vert^2,\qquad {\bf R}=\left(\frac{L^2}{r_{min}}+\frac{q}{2}\right)\hat{\bf x}_p=\frac{q}{2}{\hat{\bf v}_\infty}-\vert {\bf v}_\infty\vert\cdot L\hat{{\bf v}}_\infty^\perp, \end{split} \end{equation} and in particular \begin{equation*} 2R=\sqrt{4HL^2+q^2},\qquad\frac{1}{r_{min}}=\frac{\sqrt{q^2+4HL^2}-q}{2L^2},\quad r_{min}=\frac{q+\sqrt{q^2+4HL^2}}{2H}, \end{equation*} so that only the direction of ${\bf R}$ will be important (and we recover $r_{min}=q/H$ in the radial case). For the sake of definiteness, for a given ${\bf v_\infty}$, let us further rotate our coordinates such that ${\bf v_\infty}=(\sqrt{H},0,0)$ is parallel to the (positive) $x$-axis (see also Figure \ref{fig:scatter}). Then the possible trajectories in this setup all lie in the lower half-plane $\{y\leq 0\}$, and the center ${\bf c}$ of the velocity circle is determined by the requirement that ${\bf v}(2\pi)=(\sqrt{H},0,0)$, i.e. \begin{equation}\label{eq:v-theta} {\bf v}(\phi)=\frac{q}{2L}(\sin\phi,-\cos\phi,0)+(\sqrt{H},\frac{q}{2L},0), \end{equation} and periapsis lies in direction $-{\bf c}^\perp=(\frac{q}{2L},-\sqrt{H},0)$.
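The structure of the velocity circle can likewise be tested numerically: along any planar trajectory, the center ${\bf c}={\bf v}+\frac{q}{2L}{\bf e}_\phi$ should be constant in time. A brief sketch (the values of $q$, the initial data and the integrator below are illustrative choices, not taken from the text):

```python
import math

q = 2.0  # assumed repulsive charge for this check

def rk4_step(s, dt):
    # planar Kepler flow: x'' = (q/2) x/|x|^3
    def f(s):
        x, y, vx, vy = s
        r3 = (x*x + y*y) ** 1.5
        return (vx, vy, 0.5*q*x/r3, 0.5*q*y/r3)
    k1 = f(s)
    k2 = f(tuple(a + 0.5*dt*b for a, b in zip(s, k1)))
    k3 = f(tuple(a + 0.5*dt*b for a, b in zip(s, k2)))
    k4 = f(tuple(a + dt*b for a, b in zip(s, k3)))
    return tuple(a + dt*(b1 + 2*b2 + 2*b3 + b4)/6
                 for a, b1, b2, b3, b4 in zip(s, k1, k2, k3, k4))

def circle_center(s):
    # c = v + (q/2L) e_phi, with e_phi = (-sin phi, cos phi) = (-y/r, x/r)
    x, y, vx, vy = s
    L = x*vy - y*vx
    r = math.hypot(x, y)
    return (vx + 0.5*q/L * (-y/r), vy + 0.5*q/L * (x/r))

s = (2.0, -1.0, 1.0, 1.0)  # arbitrary planar data with L = x*vy - y*vx = 3 > 0
c0 = circle_center(s)
centers = []
for _ in range(4000):
    s = rk4_step(s, 1e-3)
    centers.append(circle_center(s))
drift = max(max(abs(c[0] - c0[0]), abs(c[1] - c0[1])) for c in centers)
```

The drift of the center stays at the level of the integrator error, confirming that ${\bf v}$ indeed moves along a fixed circle of radius $q/(2L)$.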
The quantity \begin{equation}\label{eq:def_rho} \rho(r,\phi):=\frac{rH}{2q}(1+\cos\phi) \end{equation} plays a key role for the dynamics, as illustrated by the following lemma: \begin{lemma}\label{lem:trajectories} Let ${\bf x}_0=r_0(\cos\phi_0,\sin\phi_0,0)$ be given with $r_0>0$ and $\phi_0\in(\pi,2\pi)$. \begin{enumerate} \item If $\rho(r_0,\phi_0)<1$, there does not exist a trajectory through ${\bf x}_0$ with asymptotic velocity ${\bf v}_\infty$. \item If $\rho(r_0,\phi_0)=1$, there exists exactly one trajectory through ${\bf x}_0$ with asymptotic velocity ${\bf v}_\infty$. If ${\bf v}_0$ denotes the instantaneous velocity, there holds that ${\bf x}_0\cdot{\bf v}_0=-\sqrt{H}L^2/q<0$. \item If $\rho(r_0,\phi_0)>1$ there are two trajectories $h_\pm$ through ${\bf x}_0$ with asymptotic velocity ${\bf v}_\infty$, corresponding to values $0<L_-< L_+$ for the angular momentum. \end{enumerate} Besides, in case $(3)$, we have that, locally around $({\bf x}_0,{\bf v}_0)$, $\rho$ decreases on $h_-$ and increases on $h_+$. \end{lemma} \begin{proof} On the one hand, since $(0,0,L)={\bf x}\times {\bf v}$ we obtain directly that \begin{equation}\label{eq:L} L=r\left(-\frac{q}{2L}+\frac{q}{2L}\cos\phi-\sqrt{H}\sin\phi\right)\quad\Leftrightarrow\quad L^2+Lr\sqrt{H}\sin\phi+\frac{rq}{2}(1-\cos\phi)=0. \end{equation} This has real solutions if and only if \begin{equation} rH(1+\cos\phi)\geq 2q\quad\Leftrightarrow\quad \rho(r,\phi)=\frac{rH}{2q}(1+\cos\phi)\geq 1, \end{equation} and they are given by \begin{equation}\label{eq:L-sol} L=-\sin\phi\frac{r\sqrt{H}}{2}\left(1\pm\sqrt{1-\frac{1}{\rho(r,\phi)}}\right). \end{equation} Since each choice of $L$ leads to a trajectory by \eqref{eq:v-theta}, this yields the claimed trichotomy. 
Along a trajectory, we see from \eqref{eq:L} that \begin{equation}\label{eq:r-shape} r=-\frac{L^2}{L\sqrt{H}\sin\phi+\frac{q}{2}(1-\cos\phi)}, \end{equation} and therefore, \begin{equation} \partial_\phi r(\phi)=\frac{qL^2}{2[L\sqrt{H}\sin\phi+\frac{q}{2}(1-\cos\phi)]^2}\left(\frac{2L\sqrt{H}}{q}\cos\phi+\sin\phi\right)=\frac{r}{L}{\bf c\cdot x}=\frac{r}{L}{\bf v\cdot x}. \end{equation} Using \eqref{eq:def_rho} and \eqref{eq:L-sol}, we see that \begin{equation}\label{AngleFold} -\tan(\phi/2)=\frac{-\sin\phi}{1+\cos\phi}=\frac{L\sqrt{H}}{q}\frac{1}{\rho\pm\sqrt{\rho(\rho-1)}}=\frac{\sqrt{H}L}{q}\left[1\mp\sqrt{1-1/\rho}\right], \end{equation} and therefore, when $\rho=1$, we can plug this into the equation above and obtain $\partial_\phi r<0$. Differentiating both sides of \eqref{AngleFold} with respect to $\phi$, we obtain \begin{equation*} \begin{split} -\frac{1}{2}(1+\tan^2(\phi/2))=\mp\frac{L\sqrt{H}}{q}\frac{1}{2\rho\sqrt{\rho(\rho-1)}}\frac{d}{d\phi}[\rho(r(\phi),\phi)], \end{split} \end{equation*} which is enough since we know that, along a trajectory, $\dot{\phi}=r^{-2}L>0$. \end{proof} \begin{figure} \caption{An illustration of Lemma \ref{lem:trajectories} with two trajectories $h_1$ (red) and $h_2$ (blue) with asymptotic velocity ${\bf v}_\infty$ through a given point ${\bf x}_0$, and their corresponding velocity circles. The green line is the level set $\{\rho=1\}$.} \label{fig:trajectories} \end{figure} From the proof we observe that, via the velocity circle, we can directly compute the angle from periapsis to asymptotic velocity as $\phi_{p}=\frac{\pi}{2}-\tilde\phi$, where $\tan\tilde\phi=\frac{q}{2L\sqrt{H}}$ (see also Figure \ref{fig:vel-circle}).
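The trichotomy of Lemma \ref{lem:trajectories} rests on the explicit roots \eqref{eq:L-sol} of the quadratic \eqref{eq:L}. For a concrete configuration with $\rho>1$ (the values of $H$, $q$, $r_0$, $\phi_0$ below are illustrative), one can check directly that both roots solve the quadratic and satisfy $0<L_-<L_+$:

```python
import math

H, q = 1.0, 1.0                  # assumed values for this check
r0, phi0 = 4.0, 1.5 * math.pi    # a point with phi0 in (pi, 2pi)

# rho(r, phi) = r H (1 + cos phi)/(2 q); here rho = 2 > 1, i.e. case (3)
rho = r0 * H * (1.0 + math.cos(phi0)) / (2.0 * q)
assert rho > 1.0

# the two roots of eq. (eq:L-sol)
roots = [-math.sin(phi0) * r0 * math.sqrt(H) / 2.0
         * (1.0 + sgn * math.sqrt(1.0 - 1.0 / rho))
         for sgn in (-1.0, 1.0)]
L_minus, L_plus = roots

def quadratic(L):
    # residual of L^2 + L r sqrt(H) sin(phi) + (r q/2)(1 - cos(phi)) = 0
    return L*L + L * r0 * math.sqrt(H) * math.sin(phi0) \
        + 0.5 * r0 * q * (1.0 - math.cos(phi0))

res = max(abs(quadratic(L_minus)), abs(quadratic(L_plus)))
```

Both residuals vanish to machine precision, and since $\sin\phi_0<0$ both roots are positive, matching the two trajectories $h_\pm$ of the lemma.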
\begin{figure} \caption{An illustration of the utility of the velocity circle.} \label{fig:vel-circle} \end{figure} In addition, by \eqref{eq:L} and \eqref{eq:v-theta} there holds that \begin{equation}\label{ExpressionVPlanarCase} {\bf v}(\phi)=-\frac{q}{2L}{\bf e}_\phi+(\sqrt{H},\frac{q}{2L},0)=-\left(\sqrt{H}+\frac{\sin\phi}{1-\cos\phi}\frac{L}{r}\right){\bf e}_r+\frac{L}{r}{\bf e}_\phi. \end{equation} This can be integrated using that, with $K$ as defined in \eqref{DefK}, and \eqref{eq:L-sol}, there holds that \begin{equation} \begin{aligned} \frac{L}{r}&=-\frac{\sqrt{H}}{2}\sin\phi\left[1\pm K'(\rho(r,\phi))\right]=\frac{1}{r}\frac{\partial}{\partial\phi}\left[\pm \frac{q}{\sqrt{H}}K(\rho(r,\phi))+\frac{r\sqrt{H}}{2}\cos\phi\right],\\ -\frac{\sin\phi}{1-\cos\phi}\frac{L}{r}&=\frac{\sqrt{H}}{2}(1+\cos\phi)\left[1\pm K'(\rho(r,\phi))\right]=\frac{\partial}{\partial r}\left[\pm \frac{q}{\sqrt{H}}K(\rho(r,\phi))+\frac{r\sqrt{H}}{2}(1+\cos\phi)\right], \end{aligned} \end{equation} so that, using \eqref{ExpressionVPlanarCase}, we can write \begin{equation} {\bf v}=\nabla_{\bf x} S(r,\phi),\qquad S(r,\phi)=\pm \frac{q}{\sqrt{H}}K(\rho(r,\phi))-\frac{r\sqrt{H}}{2}(1-\cos\phi). 
\end{equation} \subsubsection{Towards Angle-Action variables}\label{ssec:AngleAction} In general geometry, for a given ``action'' ${\bf a}={\bf v}_\infty$ (with $\abs{{\bf a}}^2=H$) the dimensionless (see \eqref{eq:dims}) function $\rho$ generalizes as \begin{equation}\label{eq:def_rho2} \rho({\bf x},{\bf a})=\frac{\abs{{\bf a}}}{2q}(\abs{{\bf x}}\abs{{\bf a}}+{\bf a}\cdot{\bf x}), \end{equation} and by the above we have established the following lemma: \begin{lemma}\label{lem:gen_functions} The functions \begin{equation}\label{GeneratingFunction} \begin{split} \mathcal{S}_\pm({\bf x},{\bf a})&=\pm\frac{q}{\vert{\bf a}\vert}K(\frac{\vert{\bf a}\vert}{2q}({\bf x}\cdot{\bf a}+\vert{\bf x}\vert\vert{\bf a}\vert))-\frac{\vert{\bf x}\vert\vert{\bf a}\vert-{\bf x}\cdot{\bf a}}{2} \end{split} \end{equation} with $K$ defined in \eqref{DefK} are generating functions in the sense that if ${\bf x},{\bf a}\in\mathbb{R}^3$ are given with $\rho({\bf x},{\bf a})\geq 1$, then \begin{equation} {\bf v}_\pm=\nabla_{\bf x}\mathcal{S}_\pm \end{equation} define velocities corresponding to trajectories of the ODE \eqref{ODE} passing through ${\bf x}$ with asymptotic velocity ${\bf a}$. Moreover, these are the only such velocities. In addition, the generating functions preserve the angular momentum in the sense that for the aforementioned trajectories their angular momenta are given by \begin{equation}\label{ConservationMomentumPlane} {\bf L}_\pm={\bf x}\times\nabla_{\bf x}\mathcal{S}_\pm=\nabla_{\bf a}\mathcal{S}_\pm\times{\bf a}, \end{equation} and we have that $\abs{{\bf L}_+}\geq\abs{{\bf L}_-}$. \end{lemma} We are now ready to prove Proposition \ref{PropAA}. \begin{proof}[Proof of Proposition \ref{PropAA}] We give first the explicit definition of the change of variables, with some additional explicit formulas. \subsubsection*{The fold} Lemma \ref{lem:gen_functions} gives us two local diffeomorphisms defined through the scattering problem.
We also see from Lemma \ref{lem:trajectories} that the mapping $({\bf x},{\bf a})\mapsto {\bf v}$ has a fold, but we can hope to define a nice change of variable on either side $\Omega_p$, $\Omega_f$ of the set $\Gamma$ defined by \begin{equation}\label{DefGamma1} \begin{split} \Gamma:=\{{\bf x}\cdot{\bf v}=-\sqrt{H}L^2/q\},\qquad\Omega_p:=\{{\bf x}\cdot{\bf v}<-\sqrt{H}L^2/q\},\qquad\Omega_f:=\{{\bf x}\cdot{\bf v}>-\sqrt{H}L^2/q\}, \end{split} \end{equation} and we choose the generating function $\mathcal{S}_-$ in $\Omega_p$ and the generating function $\mathcal{S}_+$ in $\Omega_f$. From Lemma \ref{lem:trajectories}, we see that this corresponds to selecting the trajectory in the past of $\Gamma$ when $({\bf x},{\bf v})\in\Omega_p$, and the trajectory in the future of $\Gamma$ when $({\bf x},{\bf v})\in\Omega_f$. Note also that since, along trajectories, $d({\bf x}\cdot{\bf v})/dt>H/2$, each trajectory crosses $\Gamma$ exactly once, passing from $\Omega_p$ into $\Omega_f$. \subsubsection*{Construction of $\mathcal{T}$} To construct our angle-action variables, we note that for given $({\bf x},{\bf v})\in\mathbb{R}^3\times\mathbb{R}^3$, we can use the conservation laws to find the corresponding asymptotic velocity and thus action $\mathcal{A}({\bf x},{\bf v})={\bf v}_\infty$: Using \eqref{ConsLaws2dPlanar2} we see that $\vert\mathcal{A}\vert=\sqrt{H}$, that ${\bf L}\cdot\mathcal{A}=0$ and that ${\bf R}\cdot\mathcal{A}=q\sqrt{H}/2$, so that (since ${\bf L\cdot R}=0$) \begin{equation}\label{ExplicitA1} \begin{split} \mathcal{A}({\bf x},{\bf v})&=\frac{2q\sqrt{H}}{4HL^2+q^2}{\bf R}+\frac{4H}{4HL^2+q^2}{\bf L}\times {\bf R}.
\end{split} \end{equation} By Lemma \ref{lem:trajectories}, $\rho({\bf x},\mathcal{A}({\bf x},{\bf v}))\ge 1$ and we can define the corresponding angle as \begin{equation}\label{DefTheta} \Theta({\bf x},{\bf v}):=\begin{cases} \nabla_{\bf a}\mathcal{S}_-({\bf x},\mathcal{A}({\bf x},{\bf v}))\,\,&\hbox{ for }({\bf x},{\bf v})\in\Omega_p,\\ \nabla_{\bf a}\mathcal{S}_+({\bf x},\mathcal{A}({\bf x},{\bf v}))\,\,&\hbox{ for }({\bf x},{\bf v})\in\Omega_f. \end{cases} \end{equation} This is well-defined by Lemma \ref{lem:gen_functions} and extends by continuity to $\Gamma$ to give a mapping $({\bf x},{\bf v})\mapsto(\vartheta,{\bf a})$ continuous\footnote{In fact any choice of sign for $\mathcal{S}_\pm$ on $\Omega_f$ or $\Omega_p$ would give a continuous mapping, smooth away from the fold $\Gamma$, but our choice will give a {\it smooth gluing} at the fold.} on $\mathcal{P}_{{\bf x},{\bf v}}$. \subsubsection*{Rescaling of the generating function} Looking at the action of scaling on the generating function, \begin{equation} \mathcal{S}_\iota(\lambda{\bf x},\lambda^{-1}{\bf a})=\iota\,\lambda\frac{q}{\vert{\bf a}\vert}K(\lambda^{-1}\frac{\vert{\bf a}\vert}{2q}({\bf x}\cdot{\bf a}+\vert{\bf x}\vert\vert{\bf a}\vert))-\frac{\vert{\bf x}\vert\vert{\bf a}\vert-{\bf x}\cdot{\bf a}}{2},\qquad \iota\in\{+,-\}, \end{equation} and differentiating at $\lambda=1$ yields that \begin{equation}\label{eq:xv-vs-thetaa} {\bf x}\cdot{\bf v}-{\bf a}\cdot\vartheta=\iota\frac{q}{\vert{\bf a}\vert}\left[K(\rho)-\rho K^\prime(\rho)\right].
\end{equation} Noting that \begin{equation} \frac{d}{dx}\left[K(x)-xK^\prime(x)\right]=-xK^{\prime\prime}(x)=-\frac{1}{2\sqrt{x(x-1)}},\quad K(1)-K'(1)=0, \end{equation} it follows that if $\rho({\bf x},\mathcal{A}({\bf x},{\bf v}))>1$, then \begin{equation}\label{SignSigma} \begin{cases}{\bf x}\cdot{\bf v}-\vartheta\cdot{\bf a}< 0,& \iota=+,\\ {\bf x}\cdot{\bf v}-\vartheta\cdot{\bf a}> 0,& \iota=-,\end{cases} \end{equation} whereas ${\bf x}\cdot{\bf v}=\vartheta\cdot{\bf a}$ if and only if $\rho=1$, in which case $\mathcal{S}_+=\mathcal{S}_-$. The function $\sigma=(\vert{\bf a}\vert/q)({\bf x}\cdot{\bf v}-{\bf a}\cdot\vartheta)$ will be used to resolve the fold degeneracy corresponding to the choice of $\iota\in\{+,-\}$. \subsubsection*{Inverse of $\mathcal{T}$} We can now define $\mathcal{T}^{-1}$ on either side of the (image of the) fold, consistent with \eqref{DefGamma1} and \eqref{SignSigma}: \begin{equation}\label{DefGamma} \begin{split} \Gamma:=\left\{\vartheta\cdot {\bf a}=-aL^2/q\right\},\qquad \Omega_{-}:=\left\{{\bf a}\cdot\vartheta<-aL^2/q\right\},\quad\Omega_{+}:=\left\{{\bf a}\cdot\vartheta>-aL^2/q\right\}, \end{split} \end{equation} so that (for points with $\rho({\bf x},{\bf a})\geq 1$) \begin{equation}\label{eq:vthetadef} ({\bf v},\vartheta)= \begin{cases} (\nabla_{\bf x}\mathcal{S}_-({\bf x},{\bf a}),\nabla_{\bf a}\mathcal{S}_-({\bf x},{\bf a})),&\hbox{ when }(\vartheta,{\bf a})\in \Omega_{-},\,\,({\bf x},{\bf v})\in\Omega_p,\\ (\nabla_{\bf x}\mathcal{S}_+({\bf x},{\bf a}),\nabla_{\bf a}\mathcal{S}_+({\bf x},{\bf a})),&\hbox{ when }(\vartheta,{\bf a})\in \Omega_{+},\,\,({\bf x},{\bf v})\in\Omega_f. \end{cases} \end{equation} We now compute the inverse transformation $(\vartheta,{\bf a})\mapsto ({\bf X},{\bf V})$. This is more challenging since both evolve over the trajectory, while the only trace of ``time'' in $(\vartheta,{\bf a})$ variables follows from $\vartheta\cdot{\bf a}$. 
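The construction of $\mathcal{A}$ in \eqref{ExplicitA1} can be sanity-checked numerically. In the following sketch (not part of the proof), the flow $\dot{\bf x}={\bf v}$, $\dot{\bf v}=\tfrac{q}{2}{\bf x}/\vert{\bf x}\vert^3$ and the Runge--Lenz vector ${\bf R}={\bf v}\times{\bf L}+\tfrac{q}{2}{\bf x}/\vert{\bf x}\vert$ are reconstructions (assumptions) chosen consistently with \eqref{eq:cons_sizes} and with ${\bf R}\cdot\mathcal{A}=q\sqrt{H}/2$; integrating the ODE then confirms that $\mathcal{A}({\bf x},{\bf v})$ is the forward asymptotic velocity:

```python
import math

# Numerical sanity check of (ExplicitA1): a sketch, not part of the proof.
# Assumptions (reconstructed, consistent with the conservation laws in the
# text): the flow is x' = v, v' = (q/2) x/|x|^3, and the Runge-Lenz vector
# is R = v x L + (q/2) x/|x|, so that H = |v|^2 + q/|x| and R.A = q*sqrt(H)/2.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(p*r for p, r in zip(a, b))

def asymptotic_action(x, v, q):
    # A(x,v) following (ExplicitA1)
    nx = math.sqrt(dot(x, x))
    H = dot(v, v) + q/nx
    L = cross(x, v)
    R = tuple(c + 0.5*q*xi/nx for c, xi in zip(cross(v, L), x))
    den = 4*H*dot(L, L) + q*q
    return tuple(2*q*math.sqrt(H)/den*r + 4*H/den*lr for r, lr in zip(R, cross(L, R)))

def flow(x, v, q, T, dt):
    # classical RK4 integration of x' = v, v' = (q/2) x/|x|^3
    def acc(y):
        n3 = math.sqrt(dot(y, y))**3
        return tuple(0.5*q*c/n3 for c in y)
    add = lambda p, r, s: tuple(pi + s*ri for pi, ri in zip(p, r))
    for _ in range(int(T/dt)):
        k1x, k1v = v, acc(x)
        k2x, k2v = add(v, k1v, dt/2), acc(add(x, k1x, dt/2))
        k3x, k3v = add(v, k2v, dt/2), acc(add(x, k2x, dt/2))
        k4x, k4v = add(v, k3v, dt), acc(add(x, k3x, dt))
        x = tuple(xi + dt/6*(a + 2*b + 2*c + d) for xi, a, b, c, d in zip(x, k1x, k2x, k3x, k4x))
        v = tuple(vi + dt/6*(a + 2*b + 2*c + d) for vi, a, b, c, d in zip(v, k1v, k2v, k3v, k4v))
    return x, v

q = 1.0
x0, v0 = (2.0, 1.0, 0.0), (0.3, 0.8, 0.0)
A = asymptotic_action(x0, v0, q)
xT, vT = flow(x0, v0, q, T=500.0, dt=0.02)
assert all(abs(p - r) < 1e-2 for p, r in zip(A, vT))  # v(t) -> A as t -> +infinity
```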
Using the conservation laws, we can already express trajectories in terms of the ``time-proxy'' ${\bf x}\cdot{\bf v}$. We have that \begin{equation}\label{AxAXperp} \begin{split} \mathcal{A}\cdot{\bf x}&=\frac{1}{4HL^2+q^2}\left[4HL^2{\bf x}\cdot{\bf v}+q^2\sqrt{H}\vert{\bf x}\vert+2qL^2\sqrt{H}\right],\\ \mathcal{A}\cdot({\bf L}\times{\bf x})&=\frac{L}{4HL^2+q^2}\left[-2q\sqrt{H}L{\bf x}\cdot{\bf v}+2qHL\vert{\bf x}\vert+4HL^3\right], \end{split} \end{equation} and since \begin{equation}\label{eq:cons_sizes} H=\vert{\bf v}\vert^2+\frac{q}{\vert{\bf x}\vert},\qquad L^2=\vert{\bf x}\vert^2\vert{\bf v}\vert^2-({\bf x}\cdot{\bf v})^2\quad\Rightarrow\quad L^2=\vert {\bf x}\vert^2H-q\vert{\bf x}\vert-({\bf x}\cdot{\bf v})^2 \end{equation} we can express $\vert{\bf x}\vert$ in terms of $({\bf x}\cdot{\bf v})=\vartheta\cdot{\bf a}+\sigma$ as \begin{equation}\label{Formula|x|} \begin{split} \vert{\bf x}\vert&=\frac{q}{2a^2}\left[1+\sqrt{\frac{4a^2L^2+q^2}{q^2}+4\frac{a^2}{q^2}\left({\bf x}\cdot{\bf v}\right)^2}\right], \end{split} \end{equation} which gives a first formula for ${\bf X}$ as \begin{equation}\label{ExpressionsX} \begin{split} {\bf X}&=\frac{q}{a^2}\left[\frac{1}{2}+\frac{q^2}{(2R)^2}\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{(2R)^2}{4q^2}}+\frac{4a^2L^2}{(2R)^2}\frac{a}{q}({\bf x}\cdot{\bf v})\right]\frac{{\bf a}}{a}\\ &\quad-2\left[\frac{1}{2}+\frac{q^2}{(2R)^2}\left(\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{(2R)^2}{4q^2}}-\frac{a}{q}({\bf x}\cdot{\bf v})\right)\right]\frac{{\bf L}\times{\bf a}}{a^2}. 
\end{split} \end{equation} To give explicit formulas for ${\bf V}$, we use that by the conservation laws there holds \begin{equation}\label{NewInnerProducts} \begin{split} {\bf v}\cdot\mathcal{A}&=\frac{q^2}{(2R)^2}\frac{a({\bf x}\cdot{\bf v})}{\vert {\bf x}\vert}+\frac{4a^2L^2}{(2R)^2}\left(a^2-\frac{q}{2\vert{\bf x}\vert}\right),\qquad {\bf v}\cdot ({\bf L}\times\mathcal{A})=\frac{qaL^2}{(2R)^2}\left(2a^2-\frac{q}{\vert{\bf x}\vert}-\frac{2a({\bf x}\cdot{\bf v})}{\vert{\bf x}\vert}\right), \end{split} \end{equation} so that since ${\bf v}\cdot{\bf L}=0$ we obtain \begin{equation} \begin{aligned} {\bf V}&=\left[\frac{q^2}{(2R)^2}\frac{{\bf x}\cdot{\bf v}}{\vert{\bf x}\vert}+\frac{4a^2L^2}{(2R)^2}(a-\frac{q}{2a\vert{\bf x}\vert})\right]\frac{\bf a}{a}+\frac{2q^2}{(2R)^2}\left[a-\frac{q}{2a\vert{\bf x}\vert}-\frac{{\bf x}\cdot{\bf v}}{\vert{\bf x}\vert}\right]\frac{{\bf L}\times{\bf a}}{q}\\ &={\bf a}-\frac{q^2}{(2R)^2}\frac{a\vert{\bf x}\vert-{\bf x}\cdot{\bf v}}{\vert{\bf x}\vert}\left[\frac{\bf a}{a}-2\frac{{\bf L}\times{\bf a}}{q}\right]-\frac{q^2}{(2R)^2}\frac{q}{2a\vert{\bf x}\vert}\left[\frac{4a^2L^2}{q^2}\frac{\bf a}{a}+2\frac{{\bf L}\times{\bf a}}{q}\right], \end{aligned} \end{equation} which together with \eqref{Formula|x|} yields \begin{equation}\label{ExpressionsV} \begin{split} {\bf V}&={\bf a}-a\frac{q^2}{(2R)^2}\frac{\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{a^2L^2}{q^2}+\frac{1}{4}}-\frac{a}{q}({\bf x}\cdot{\bf v})+\frac{1}{2}}{\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{a^2L^2}{q^2}+\frac{1}{4}}+\frac{1}{2}}\left[\frac{\bf a}{a}-2\frac{{\bf L}\times{\bf a}}{q}\right]\\ &\quad-\frac{q^2}{(2R)^2}\frac{a}{2\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{a^2L^2}{q^2}+\frac{1}{4}}+1}\left[\frac{4a^2L^2}{q^2}\frac{\bf a}{a}+2\frac{{\bf L}\times{\bf a}}{q}\right]. 
\end{split} \end{equation} \subsubsection*{Global functions} We can now introduce the function \begin{equation}\label{def:sigma} \begin{split} \sigma(\vartheta,{\bf a}):=\frac{a}{q}({\bf x}\cdot{\bf v}-\vartheta\cdot{\bf a})=\iota\left[K(\rho)-\rho K^\prime(\rho)\right]=-\iota \ln(\sqrt{\rho}+\sqrt{\rho-1}),\qquad (\vartheta,{\bf a})\in\Omega_\iota. \end{split} \end{equation} In particular, $\vert\sigma\vert$ defines $\rho$ uniquely and vice-versa, which shows that $\rho$ can indeed be defined as a function of $(\vartheta,{\bf a})$ or as a function of $({\bf x},{\bf v})$. We obtain the global function \begin{equation}\label{ExpressionsX2} \begin{split} {\bf X}(\vartheta,{\bf a})&=\vartheta+\frac{q}{a^2}\left(\frac{1}{2}+\sigma\right)\frac{\bf a}{a}+ \frac{q}{a^2}\frac{q^2}{(2R)^2}D\left(\frac{a}{q}\vartheta\cdot{\bf a}+\sigma\right)\cdot \frac{2{\bf R}}{q},\\ D(y)&:=\sqrt{y^2+(2R)^2/4q^2}-y. \end{split} \end{equation} \subsubsection*{Properties of $\mathcal{T}$} Having given the detailed construction of the diffeomorphism $\mathcal{T}$, we can quickly deduce the properties listed in Proposition \ref{PropAA}: by construction via a generating function we directly have \eqref{eq:canonical} and the change of variables is canonical as in \eqref{it:canonical}. The compatibility with conservation laws \eqref{it:cons-comp} follows from the definition. 
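The eikonal property underlying the canonical structure can also be checked numerically. In the following sketch, the explicit $K$ of \eqref{DefK} is not reproduced; we take the normalization $K(y)=\sqrt{y(y-1)}-\ln(\sqrt{y}+\sqrt{y-1})$, an assumption consistent with $K(1)-K'(1)=0$ and $(K-yK')'=-1/(2\sqrt{y(y-1)})$ recorded above, and verify via finite differences that ${\bf v}=\nabla_{\bf x}\mathcal{S}_\pm$ satisfies $\vert{\bf v}\vert^2+q/\vert{\bf x}\vert=\vert{\bf a}\vert^2$:

```python
import math

# Numerical check of the eikonal property of (GeneratingFunction): a sketch.
# The explicit K in (DefK) is not reproduced here; we take the normalization
#   K(y) = sqrt(y(y-1)) - ln(sqrt(y) + sqrt(y-1)),
# an assumption consistent with K(1) - K'(1) = 0 and
# (K - yK')' = -1/(2 sqrt(y(y-1))) recorded in the text.  Then v = grad_x S_pm
# should satisfy |v|^2 + q/|x| = |a|^2, i.e. define trajectories of energy H.

def K(y):
    return math.sqrt(y*(y - 1)) - math.log(math.sqrt(y) + math.sqrt(y - 1))

def S(x, a, q, sign):
    na = math.sqrt(sum(c*c for c in a))
    nx = math.sqrt(sum(c*c for c in x))
    xa = sum(p*r for p, r in zip(x, a))
    rho = na/(2*q)*(xa + nx*na)           # (eq:def_rho2); needs rho >= 1
    return sign*(q/na)*K(rho) - 0.5*(nx*na - xa)

def grad_x_S(x, a, q, sign, h=1e-6):
    # centered finite differences in x
    g = []
    for i in range(3):
        xp, xm = list(x), list(x)
        xp[i] += h; xm[i] -= h
        g.append((S(xp, a, q, sign) - S(xm, a, q, sign))/(2*h))
    return g

q = 1.0
x, a = (1.5, -0.5, 0.25), (0.9, 0.4, -0.2)
H = sum(c*c for c in a)
for sign in (+1, -1):
    v = grad_x_S(x, a, q, sign)
    assert abs(sum(c*c for c in v) + q/math.sqrt(sum(c*c for c in x)) - H) < 1e-6
```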
For a trajectory $({\bf x}(t),{\bf v}(t))$ of the Kepler problem \eqref{ODE} we note that by \eqref{ExplicitA1} $\mathcal{A}({\bf x}(t),{\bf v}(t))$ is independent of time, whereas by \eqref{eq:vthetadef} we have \begin{equation} \begin{aligned} \partial_t\Theta_j({\bf x}(t),{\bf v}(t))&=\partial_t\partial_{{\bf a}_j}S_\iota({\bf x}(t),\mathcal{A}({\bf x}(t),{\bf v}(t)))=\partial_{{\bf x}_k}\partial_{{\bf a}_j}S_\iota({\bf x}(t),\mathcal{A}({\bf x}(t),{\bf v}(t)))\partial_{{\bf x}_k}S_\iota({\bf x}(t),\mathcal{A}({\bf x}(t),{\bf v}(t)))\\ &=\frac{1}{2}\partial_{{\bf a}_j}\left(\mathcal{A}({\bf x}(t),{\bf v}(t))^2-\frac{q}{\abs{{\bf x}(t)}}\right)=\mathcal{A}_j({\bf x}(t),{\bf v}(t)), \end{aligned} \end{equation} and \eqref{it:lin-diag} is established. Finally, the asymptotic action property \eqref{it:asymp-act} follows from \eqref{ExpressionsX} and \eqref{ExpressionsV} after some more quantitative bounds on $\sigma$ below in Lemma \ref{EstimRho}. \end{proof} \subsubsection{The functions $\rho,\sigma$ and ${\bf x}\cdot{\bf v}$} As we saw in Section \ref{ssec:Kepler}, the function $\rho$ of \eqref{eq:def_rho2} plays an important role. It is naturally defined in terms of the mixed variables $({\bf x},{\bf a})$, but we can estimate it in terms of $(\vartheta,{\bf a})$ through an implicit relation. We introduce the functions \begin{equation}\label{eq:defGP} \begin{split} G(y)&:=\sqrt{y(y-1)}+\ln(\sqrt{y}+\sqrt{y-1}),\qquad P_\pm(y):=2y-1\pm2\sqrt{y(y-1)},\qquad y\geq 1, \end{split} \end{equation} and note that $G$ appears naturally in the study of the radial case in \cite{PW2020}, where $\rho_{rad}=a^2r/q=r/r_{min}$ plays an important role. 
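Since the implicit relation \eqref{RelTauOut} below is used repeatedly, we record a small numerical companion (a sanity check only, not used in the proofs): solving $\eta-\iota G(\rho)+\kappa^2P_{-\iota}(\rho)=0$ by bisection and confirming the monotonicity of $\eta\mapsto\rho(\eta,\kappa)$ on either side of $\eta=-\kappa^2$, together with the upper bound $\rho\le 1+\kappa^2+\eta$ on $\Omega_+$, as claimed in Lemma \ref{EstimRho}:

```python
import math

# Numerical companion to the relation (RelTauOut): sanity check only.
# Solve  eta - iota*G(rho) + kappa^2 * P_{-iota}(rho) = 0  by bisection and
# confirm the monotonicity of eta -> rho(eta, kappa) on either side of
# eta = -kappa^2, and the bound rho <= 1 + kappa^2 + eta on Omega_+.

def G(y):
    return math.sqrt(y*(y - 1)) + math.log(math.sqrt(y) + math.sqrt(y - 1))

def P(y, s):  # P_s(y), s = +1 or -1
    return 2*y - 1 + 2*s*math.sqrt(y*(y - 1))

def rho(eta, kappa, iota):
    F = lambda y: eta - iota*G(y) + kappa**2*P(y, -iota)
    lo, hi = 1.0, 2.0
    while iota*F(hi) > 0:      # F has the sign of iota at y = 1, flips at infinity
        hi *= 2.0
    for _ in range(100):       # bisection
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if iota*F(mid) > 0 else (lo, mid)
    return 0.5*(lo + hi)

kappa = 1.0
etas = [-0.5, 0.0, 1.0, 5.0]                       # Omega_+: eta > -kappa^2
rhos = [rho(e, kappa, +1) for e in etas]
assert all(r1 < r2 for r1, r2 in zip(rhos, rhos[1:]))          # increasing
assert all(r <= 1 + kappa**2 + e + 1e-9 for r, e in zip(rhos, etas))
rhos_m = [rho(e, kappa, -1) for e in [-10.0, -5.0, -2.0]]      # Omega_-
assert all(r1 > r2 for r1, r2 in zip(rhos_m, rhos_m[1:]))      # decreasing in eta
```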
\begin{lemma}\label{EqRho} The functions \begin{equation}\label{def:tau} \rho=\frac{a}{2q}({\bf x}\cdot{\bf a}+\vert{\bf x}\vert\vert{\bf a}\vert)\quad\textnormal{and}\quad \eta=\frac{a}{q}\vartheta\cdot{\bf a} \end{equation} are related by the equation \begin{equation}\label{RelTauOut} \eta-\iota G(\rho)+\kappa^2 P_{-\iota}(\rho)=0\quad\textnormal{ in }\Omega_{\iota},\qquad \kappa:=aL/q,\qquad\iota\in\{+,-\}. \end{equation} \end{lemma} All three quantities $\rho, \eta,\kappa$ are dimensionless, as can be seen from \eqref{eq:dims}. \begin{proof}[Proof of Lemma \ref{EqRho}] We start with an expression for $\rho$ which follows from its definition in \eqref{eq:def_rho2} via the first line in \eqref{AxAXperp} and \eqref{Formula|x|}: \begin{equation}\label{EqForRho1} \begin{split} \rho &=\frac{1}{2}+\frac{2a^2L^2}{4a^2L^2+q^2}\cdot \frac{a}{q}({\bf x}\cdot{\bf v})+\frac{2a^2L^2+q^2}{4a^2L^2+q^2}\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{4a^2L^2+q^2}{4q^2}}. \end{split} \end{equation} In order to simplify the computations, we introduce the notations $\beta$, $k$ such that \begin{equation*} \begin{split} \beta=\ln(2R/q),\qquad \sinh(-\beta+k)=\frac{2q}{2R}\frac{a}{q}({\bf x}\cdot{\bf v}), \end{split} \end{equation*} so that \begin{equation*} \cosh(\beta)=\frac{2a^2L^2+q^2}{q\sqrt{4a^2L^2+q^2}},\qquad\sinh(\beta)=\frac{2a^2L^2}{q\sqrt{4a^2L^2+q^2}}, \end{equation*} and \eqref{EqForRho1} gives that \begin{equation}\label{EqForRho2} \begin{split} \rho-1=\frac{\cosh(k)-1}{2}=\sinh^2(k/2). \end{split} \end{equation} Finally, we can rewrite the equation defining $k$ to get \begin{equation*} \begin{split} \frac{a}{q}({\bf x}\cdot{\bf v})&=\frac{1}{2}e^\beta\sinh(-\beta+k)=\frac{e^{2\beta}+1}{2}\sinh(k/2)\cosh(k/2)-\frac{e^{2\beta}-1}{4}(2\cosh^2(k/2)-1). 
\end{split} \end{equation*} Plugging in \eqref{EqForRho2} and using that $\cosh^2(k/2)=\rho$, we get that \begin{equation*} \begin{split} \frac{a}{q}({\bf x}\cdot{\bf v})&=\pm\frac{e^{2\beta}+1}{2}\sqrt{\rho(\rho-1)}-\frac{e^{2\beta}-1}{4}(2\rho-1). \end{split} \end{equation*} Furthermore, we recall from \eqref{eq:xv-vs-thetaa} that there holds \begin{equation*} \begin{split} \frac{a}{q}\left({\bf x}\cdot{\bf v}-\vartheta\cdot{\bf a}\right)=-\iota\ln(\sqrt{\rho}+\sqrt{\rho-1})\qquad\hbox{ in }\Omega_{\iota}, \end{split} \end{equation*} so that combining the two equations above, we finally find that for two choices of signs $\iota_1,\iota_2\in\{+,-\}$ (both of which can a priori change depending on the region $\Omega_{-}$, $\Omega_{+}$) \begin{equation*} \begin{split} \eta&=\iota_1\ln\left(\sqrt{\rho}+\sqrt{\rho-1}\right)+\iota_2\sqrt{\rho(\rho-1)}-\frac{e^{2\beta}-1}{4}\left(2\rho-1-\iota_22\sqrt{\rho(\rho-1)}\right). \end{split} \end{equation*} Now we observe that when $\beta=0$ we are in the radial case, where we know that $\iota_1=\iota_2$; since the signs depend continuously on the parameters, it follows that $\iota_1=\iota_2=\iota$ on $\Omega_\iota$. Using that $e^{2\beta}=(2R/q)^2=1+4\kappa^2$, so that $\frac{e^{2\beta}-1}{4}=\kappa^2$, and recognizing $2\rho-1-\iota 2\sqrt{\rho(\rho-1)}=P_{-\iota}(\rho)$, the equation above becomes \eqref{RelTauOut}. \end{proof} We can now verify that $\rho$ and $\sigma$ are well-defined functions on phase space. \begin{lemma}\label{EstimRho} With $\kappa=aL/q$, the relation \eqref{RelTauOut} defines a $C^1$ map $\rho(\eta,\kappa)$, with $\rho\ge \rho(-\kappa^2,\kappa)=1$. For fixed $\kappa\ge0$, $\eta\mapsto\rho(\eta,\kappa)$, $\mathbb{R}\to[1,\infty)$ is $2$-to-$1$, decreasing for $-\infty<\eta<-\kappa^2$ and increasing for $\eta\ge-\kappa^2$.
Moreover, we have the following estimates: On $\Omega_{+}=\{\eta>-\kappa^2\}$ there holds \begin{equation}\label{DerRhoOmegaOut} \frac{1}{4}(\kappa+\eta)\le\rho\le1+\kappa^2+\eta, \quad(\rho\vert\eta\vert\le \kappa^2,\; 1\le\rho\le 1+\kappa,\text{ when } -\kappa^2\le\eta\le 0) \end{equation} while on $\Omega_{-}=\{\eta<-\kappa^2\}$ we have \begin{equation}\label{DerRhoOmegaIn} \begin{split} \frac{1}{4}\frac{\vert\eta\vert}{1+\kappa^2}\le\rho\le\frac{1+\vert\eta\vert}{1+\kappa^2}. \end{split} \end{equation} As a consequence, the function $\sigma(\eta,\kappa)=-\iota\ln(\sqrt{\rho}+\sqrt{\rho-1})$ is well defined, and for fixed $\kappa\ge 0$, $\eta\mapsto \sigma(\eta,\kappa)$ is a bijection on $\mathbb{R}$, and we have the bounds \begin{equation}\label{BoundsOnSigma} \begin{split} \vert\sigma\vert\le \ln(1+2\sqrt{\vert\eta\vert}+2\kappa),\qquad(\rho+\kappa)\vert\partial_\eta\sigma\vert+(\kappa+\kappa^{-1})\vert\partial_\kappa\sigma\vert\lesssim 1,\\ (\rho+\kappa)^2\vert\partial_\eta^2\sigma\vert+\langle\kappa\rangle(\rho+\kappa)\vert\partial_\eta\partial_\kappa\sigma\vert^2+\langle\kappa\rangle^2\vert\partial_\kappa^2\sigma\vert\lesssim 1. \end{split} \end{equation} Besides, on $\Omega_+$, we obtain slightly improved bounds: \begin{equation}\label{ImprovedBoundsOnSigma} \begin{split} \vert\partial_\kappa\sigma\vert\lesssim \frac{\kappa}{\rho^2+\kappa^2},\qquad\vert\partial_\kappa^2\sigma\vert\lesssim\frac{1}{\rho^2+\kappa^2}. \end{split} \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{EstimRho}] With the functions $G,P_\iota$ of \eqref{eq:defGP} we define the implicit function $F_\iota:\mathbb{R}\times[0,\infty)\times[1,\infty)\to\mathbb{R}$ by \begin{equation*} \begin{split} F_{\iota}(\eta,\kappa,y):=\eta-\iota G(y)+\kappa^2 P_{-\iota}(y), \end{split} \end{equation*} so that per \eqref{RelTauOut}, $\rho$ is defined by $F_\iota(\eta,\kappa,\rho)=0$ on $\Omega_\iota$, $\iota\in\{-,+\}$.
We note that \begin{equation}\label{eq:derivGP} G(1)=0,\quad G^\prime(y)=\sqrt{\frac{y}{y-1}},\quad P_\iota(1)=1,\quad P_-(y)P_+(y)=1,\quad P'_{\iota}(y)=\iota\frac{P_\iota(y)}{\sqrt{y(y-1)}}, \end{equation} and we have the bounds (see \cite[Lemma 2.2]{PW2020}), \begin{equation}\label{EstimGP+P-} \begin{aligned} y-1\le G(y)&\le y+\ln(2\sqrt{y})\le 2y,\\ (4y)^{-1}\le P_-(y)\le y^{-1}\le P_-(1)&=1=P_+(1)\le y\le P_+(y)\le 4y. \end{aligned} \end{equation} Hence we see that when $F_\iota=0$, we have $\eta=-\kappa^2$ if and only if $y=1$. For $\eta<-\kappa^2$, we see that \begin{equation*} \begin{split} F_{-}(\eta,\kappa,1)=\eta+\kappa^2<0,\qquad\lim_{y\to+\infty}F_{-}(\eta,\kappa,y)=+\infty, \end{split} \end{equation*} and similarly with reversed signs for $F_{+}$ when $\eta>-\kappa^2$, so $F_{-}=0$ has at least one solution for $\eta<-\kappa^2$ and $F_{+}=0$ has at least one solution for $\eta>-\kappa^2$. We now compute the derivatives \begin{equation*} \begin{split} \partial_\eta F_{\iota}&=1,\qquad\partial_\kappa F_{\iota}=2\kappa P_{-\iota}(y), \end{split} \end{equation*} and by \eqref{eq:derivGP} \begin{equation*} \begin{split} \partial_y F_{\iota}&=-\iota\sqrt{\frac{y}{y-1}}\left[1+\kappa^2\frac{P_{-\iota}(y)}{y}\right], \end{split} \end{equation*} so that by \eqref{EstimGP+P-} \begin{equation*} \begin{split} -\partial_y F_{+}\ge\left[1-1/y\right]^{-\frac{1}{2}}\left[1+(\kappa/2y)^2\right],\qquad\partial_yF_{-}\ge\left[1-1/y\right]^{-\frac{1}{2}}\left[1+\kappa^2\right]. \end{split} \end{equation*} This shows that $F_{\iota}(\eta,\kappa,y)=0$ has a unique solution and gives the bounds on the first derivatives in \eqref{BoundsOnSigma}. 
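The bounds \eqref{EstimGP+P-} are imported from \cite[Lemma 2.2]{PW2020}; the following spot-check (numerical only, not a proof) confirms them on a sample of $y\ge1$, together with the exact relation $P_-P_+=1$:

```python
import math

# Spot-check (numerical only, not a proof) of the bounds (EstimGP+P-) cited
# from [PW2020, Lemma 2.2], together with the exact relation P_- P_+ = 1.

def G(y):
    return math.sqrt(y*(y - 1)) + math.log(math.sqrt(y) + math.sqrt(y - 1))

def P(y, s):
    return 2*y - 1 + 2*s*math.sqrt(y*(y - 1))

for k in range(40):
    y = 1.0 + 0.37*k
    assert y - 1 <= G(y) <= y + math.log(2*math.sqrt(y)) <= 2*y
    assert 1/(4*y) - 1e-12 <= P(y, -1) <= 1/y + 1e-12
    assert y <= P(y, +1) <= 4*y
    assert abs(P(y, -1)*P(y, +1) - 1) < 1e-9
```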
Explicitly we have that \begin{equation}\label{eq:deriv_rho} \begin{split} \partial_\eta\rho&=\frac{1}{\iota G'(\rho)-\kappa^2 P'_{-\iota}(\rho)}=\iota\frac{\sqrt{\rho(\rho-1)}}{\rho+\kappa^2 P_{-\iota}(\rho)},\qquad \partial_\kappa\rho=2\kappa P_{-\iota}(\rho)\partial_\eta\rho,\\ \partial_\eta\sigma&=-\frac{1}{2(\rho+\kappa^2 P_{-\iota}(\rho))},\qquad\partial_\kappa\sigma=-\frac{\kappa P_{-\iota}(\rho)}{\rho+\kappa^2 P_{-\iota}(\rho)}, \end{split} \end{equation} which shows that the gradient of $\rho$ vanishes at the ``curve of surgery'' $\Gamma=\{\rho=1\}$ and that the gradient of $\sigma$ is smooth there. The estimates for $\rho$ follow from the bounds \begin{equation*} \eta+\kappa^2 y+y-1\le F_{-}(\eta,\kappa,y)\le\eta+4\kappa^2 y+2y,\qquad \eta+\frac{\kappa^2}{4y}-2y\le F_{+}(\eta,\kappa,y)\le\eta+\frac{\kappa^2}{y}+1-y, \end{equation*} which in turn follow from the definitions of $F_{\iota}$ and the bounds \eqref{EstimGP+P-}. Finally, we compute the second order derivatives of $\sigma$ by differentiating \eqref{eq:deriv_rho} once more. This gives \begin{equation} \begin{aligned} \partial_{\eta}^2\sigma&=\frac{-\kappa^2 P_{-\iota}(\rho)+\iota\sqrt{\rho(\rho-1)}}{2(\rho+\kappa^2 P_{-\iota}(\rho))^3},\qquad \partial_{\eta}\partial_\kappa\sigma =\kappa \frac{1+P_{-\iota}(\rho)}{2(\rho+\kappa^2 P_{-\iota}(\rho))^3},\\ \partial_{\kappa}^2\sigma &=-\frac{P_{-\iota}(\rho)}{(\rho+\kappa^2 P_{-\iota}(\rho))^3}(\rho^2-\kappa^2(1+P_{-\iota}(\rho))-\kappa^4 P_{-\iota}^2(\rho)) \end{aligned} \end{equation} and direct computations give the bounds on the second derivatives in \eqref{BoundsOnSigma}. The bounds \eqref{ImprovedBoundsOnSigma} follow by direct inspection using that $\iota=+$. \end{proof} \begin{remark} It follows from \eqref{eq:deriv_rho} that the only smooth matching of action-angle variables at $\Gamma$ is the one that changes the sign of $\mathcal{S}_\pm$.
\end{remark} \subsection{Further coordinates}\label{ssec:furthercoords} In order to parametrize the trajectories of \eqref{ODE}, further choices of coordinates will be important. \begin{figure} \caption{A foliation of planar dynamics of \eqref{ODE} near a given trajectory (in black) with $\sqrt{H}=1$, asymptotic velocity ${\bf v}_\infty=(1,0,0)$ and $L=1$.} \label{fig:sample-trajectories} \end{figure} \subsubsection{Super-integrable coordinates}\label{ssec:SIC} The Kepler problem \eqref{ODE} is super integrable, and hence in an appropriate system of coordinates, only one scalar function evolves along a trajectory. When we consider derivatives, it will be crucial to use this simplification. Inspecting \eqref{LinearizationFlow}, we see that such a system can be obtained from our asymptotic action-angle coordinates by using $(\xi,\eta,\lambda,{\bf u},{\bf L})$ where \begin{equation}\label{SIC} \begin{split} \xi:=\frac{q}{a},\qquad\eta:=\frac{a}{q}\vartheta\cdot{\bf a},\qquad\lambda:=L,\qquad{\bf u}:=\frac{{\bf a}}{a}, \end{split} \end{equation} and ${\bf L}$ is the angular momentum, for the direction of which we write ${\bf l}=\frac{{\bf L}}{L}$. In particular, note that $\xi$, $\lambda$ and ${\bf L}$ have dimension\footnote{so that their Poisson brackets are dimensionless, e.g. $\dm{\{\xi,f\}}=\dm{f}$.} $\dm{{\bf x}}\dm{{\bf v}}$, while $\eta$ and ${\bf u}$ are dimensionless (compare \eqref{eq:dims}). Only $\eta$ evolves along a trajectory of \eqref{ODE}, namely as \begin{equation}\label{eq:eta_evol} \eta\mapsto\eta+t\frac{q^2}{\xi^3}, \end{equation} and using that $\vartheta\perp{\bf L}$ (see e.g.\ \eqref{ConservationMomentumPlane}) we can recover our angle-action variables from the super-integrable coordinates via \begin{equation}\label{TransitionFromSIC} \begin{split} \vartheta=\frac{\xi^2}{q}\eta{\bf u}-\frac{\xi}{q}{\bf L}\times{\bf u},\qquad{\bf a}=\frac{q}{\xi}{\bf u}. 
\end{split} \end{equation} \begin{remark} Clearly the collection $(\xi,\eta,\lambda,{\bf u},{\bf L})$ is not independent (since e.g.\ $\lambda=\abs{{\bf L}}$), and in a strict sense only gives coordinates modulo further conditions. However, the slight redundancy is convenient in that it provides relatively simple expressions for kinematic quantities, and satisfies favorable Poisson bracket properties -- see for example \eqref{PBSIC} below. \end{remark} Moreover, with \eqref{ConsLaws2dPlanar2} and \eqref{RelTauOut} we can write \begin{equation} \kappa=\frac{\lambda}{\xi},\qquad \frac{2{\bf R}}{q}={\bf u}-2\kappa{\bf l}\times{\bf u}. \end{equation} With the notations \begin{equation}\label{DefDN} \begin{aligned} D(\kappa,y)&:=\sqrt{y^2+\kappa^2+\frac{1}{4}}-y, \qquad N(\kappa,y):=\sqrt{y^2+\kappa^2+\frac{1}{4}}+\frac{1}{2}=D(\kappa,y)+y+\frac{1}{2}, \end{aligned} \end{equation} we can then use \eqref{ExpressionsX} resp.\ \eqref{ExpressionsX2} and \eqref{ExpressionsV} to see that with $\sigma=\sigma(\eta,\kappa)$ as in Lemma \ref{EstimRho} there holds \begin{equation}\label{XVinSIC} \begin{aligned} {\bf X}&=\frac{\xi^2}{q}\left(\eta+\sigma+\frac{1}{2}+\frac{D(\kappa,\eta+\sigma)}{1+4\kappa^2}\right){\bf u}-\frac{\xi}{q}\left(1+\frac{2D(\kappa,\eta+\sigma)}{1+4\kappa^2}\right){\bf L}\times{\bf u},\\ {\bf V}&=\frac{q}{\xi}\left(1-\frac{1}{2N(\kappa,\eta+\sigma)}-\frac{1}{1+4\kappa^2}\frac{D(\kappa,\eta+\sigma)}{N(\kappa,\eta+\sigma)}\right){\bf u}+\frac{q}{\xi^2}\frac{2}{1+4\kappa^2}\frac{D(\kappa,\eta+\sigma)}{N(\kappa,\eta+\sigma)}{\bf L}\times{\bf u}. \end{aligned} \end{equation} \subsubsection{Past asymptotic action}\label{ssec:past_AA} The angle-action variables of Section \ref{ssec:Kepler} are constructed such that the asymptotic action property \eqref{AsymptoticActions} holds for the ``future'' evolution, i.e.\ as $t\to+\infty$. 
However, we will also need to resolve the earlier ``past'' part of a trajectory, whose initial direction can differ markedly from its evolution in the long run (see e.g.\ Figure \ref{fig:cons-laws}). For this, we will need the \emph{past} asymptotic action-angle coordinates $(\vartheta^{(-)},{\bf a}^{(-)})$, with inverse $({\bf X}^{(-)},{\bf V}^{(-)})$ defined in a similar way, except that we require instead that \begin{equation*} \lim_{t\to-\infty}{\bf V}^{(-)}(\vartheta^{(-)}+t{\bf a}^{(-)},{\bf a}^{(-)})={\bf a}^{(-)}. \end{equation*} Using that the trajectory is symmetric under reflection across the plane spanned by ${\bf L}, {\bf R}$, we easily see that the past asymptotic velocity is given by \begin{equation*} {\bf a}^{(-)}=-\frac{2q\sqrt{H}}{4HL^2+q^2}{\bf R}+\frac{4H}{4HL^2+q^2}{\bf L}\times {\bf R}. \end{equation*} We can proceed as before using the solutions of the Hamilton--Jacobi equation $\mathcal{S}_\pm$. More precisely, we define the change of variable on $\Omega^{(-)}_-:=\{{\bf x}\cdot{\bf v}<L^2\sqrt{H}/q\}$ and $\Omega^{(-)}_+:=\{{\bf x}\cdot{\bf v}>L^2\sqrt{H}/q\}$ to be given by the generating functions $\mathcal{S}^{(-)}_{\iota}({\bf x},{\bf a}^{(-)})=-\mathcal{S}_{-\iota}({\bf x},-{\bf a}^{(-)})$, thus \begin{equation}\label{DefPAA} \begin{split} \mathcal{A}^{(-)}({\bf x},{\bf v})&:=-\frac{2q\sqrt{H}}{4HL^2+q^2}{\bf R}+\frac{4H}{4HL^2+q^2}{\bf L}\times {\bf R},\\ \Theta^{(-)}({\bf x},{\bf v})&:= \begin{cases} \nabla_{{\bf a}}\mathcal{S}_+({\bf x},-\mathcal{A}^{(-)}({\bf x},{\bf v}))&\quad\hbox{ for }\,({\bf x},{\bf v})\in\Omega^{(-)}_-,\\ \nabla_{{\bf a}}\mathcal{S}_-({\bf x},-\mathcal{A}^{(-)}({\bf x},{\bf v}))&\quad\hbox{ for }\,({\bf x},{\bf v})\in\Omega^{(-)}_+, \end{cases} \end{split} \end{equation} and similarly define $({\bf X}^{(-)},{\bf V}^{(-)})$. We will need to understand the properties of the transition map $(\vartheta,{\bf a})\mapsto(\vartheta^{(-)},{\bf a}^{(-)})$.
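The definition \eqref{DefPAA} can be sanity-checked numerically: the past asymptotic velocity should be the time reversal of the forward one, $\mathcal{A}^{(-)}({\bf x},{\bf v})=-\mathcal{A}({\bf x},-{\bf v})$, with $\vert\mathcal{A}^{(-)}\vert=\sqrt{H}$ and $\mathcal{A}\cdot\mathcal{A}^{(-)}=H\frac{4\kappa^2-1}{4\kappa^2+1}$, in agreement with the computation that follows. In the sketch below, ${\bf R}={\bf v}\times{\bf L}+\tfrac{q}{2}{\bf x}/\vert{\bf x}\vert$ is again a reconstruction (an assumption) of the Runge--Lenz vector consistent with the conservation laws of the text.

```python
import math

# Sanity check (sketch) for (DefPAA): the past asymptotic velocity should be
# the time reversal of the forward one, A^(-)(x,v) = -A(x,-v), with
# |A^(-)| = sqrt(H) and A.A^(-) = H (4k^2 - 1)/(4k^2 + 1), k^2 = a^2 L^2/q^2.
# R = v x L + (q/2) x/|x| is a reconstruction of the Runge-Lenz vector
# (an assumption consistent with the conservation laws in the text).

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(p*r for p, r in zip(a, b))

def actions(x, v, q):
    # (H, L, A, A^(-)) following (ExplicitA1) and (DefPAA)
    nx = math.sqrt(dot(x, x))
    H = dot(v, v) + q/nx
    L = cross(x, v)
    R = tuple(c + 0.5*q*xi/nx for c, xi in zip(cross(v, L), x))
    den = 4*H*dot(L, L) + q*q
    LxR = cross(L, R)
    A = tuple(2*q*math.sqrt(H)/den*r + 4*H/den*lr for r, lr in zip(R, LxR))
    Am = tuple(-2*q*math.sqrt(H)/den*r + 4*H/den*lr for r, lr in zip(R, LxR))
    return H, L, A, Am

q = 1.0
x, v = (2.0, 1.0, 0.5), (0.3, 0.8, -0.1)
H, L, A, Am = actions(x, v, q)
_, _, A_rev, _ = actions(x, tuple(-c for c in v), q)
assert all(abs(am + ar) < 1e-9 for am, ar in zip(Am, A_rev))   # A^(-) = -A(x,-v)
assert abs(dot(Am, Am) - H) < 1e-9                             # |A^(-)|^2 = H
kap2 = H*dot(L, L)/q**2                                        # kappa^2 = a^2 L^2/q^2
assert abs(dot(A, Am) - H*(4*kap2 - 1)/(4*kap2 + 1)) < 1e-9
```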
Using the conservation laws, we see that \begin{equation*} \begin{split} a^{(-)}=a,\qquad{\bf L}^{(-)}=\vartheta^{(-)}\times{\bf a}^{(-)}={\bf L},\qquad\kappa^{(-)}=\kappa,\qquad \frac{2{\bf R}}{q}=-\frac{{\bf a}^{(-)}}{a}-\frac{2}{q}{\bf L}\times{\bf a}^{(-)}, \end{split} \end{equation*} and taking the dot product of the last equality with ${\bf a}$ and ${\bf L}\times{\bf a}$, we find that \begin{equation*} \begin{split} \frac{{\bf a}^{(-)}\cdot{\bf a}}{a^2}=\frac{4\kappa^2-1}{4\kappa^2+1} \end{split} \end{equation*} and therefore \begin{equation*} \begin{split} \frac{{\bf a}^{(-)}}{a}&=\frac{4\kappa^2-1}{4\kappa^2+1}\frac{{\bf a}}{a}+\frac{4\kappa}{4\kappa^2+1}\frac{{\bf l}\times{\bf a}}{a}. \end{split} \end{equation*} Once again, looking at the action of scaling, we observe that, in $\Omega^{(-)}_\iota$, \begin{equation*} \begin{split} {\bf x}\cdot{\bf v}&=\vartheta^{(-)}\cdot{\bf a}^{(-)}-\iota\frac{q}{a}\ln(\sqrt{\rho^{(-)}}+\sqrt{\rho^{(-)}-1}),\\ \rho^{(-)}({\bf x},{\bf a}^{(-)})&=\rho({\bf x},-{\bf a}^{(-)})=\frac{a}{2q}(\vert{\bf x}\vert\vert{\bf a}^{(-)}\vert-{\bf a}^{(-)}\cdot{\bf x}). \end{split} \end{equation*} Since we will transition between the past and future asymptotic actions at periapsis, we compute that \begin{equation*} \begin{split} \frac{\bf a}{a}\cdot\frac{\bf x}{\vert{\bf x}\vert}&=-\frac{{\bf a}^{(-)}}{a}\cdot\frac{{\bf x}}{\vert{\bf x}\vert}=\cos\phi_{p}=\frac{1}{\sqrt{1+4\kappa^2}},\qquad\sin\phi_{p}=\frac{2\kappa}{\sqrt{1+4\kappa^2}},\\ \rho_p=\rho^{(-)}_p&=\frac{(s+1/s)^2}{4},\,\, s=(1+4\kappa^2)^\frac{1}{4},\qquad-\eta_p=\sigma_p=-\frac{1}{4}\ln(1+4\kappa^2)=\sigma^{(-)}_p=-\eta^{(-)}_p.
\end{split} \end{equation*} Together with their favorable Poisson bracket properties (see \eqref{eq:PB_SIC(-)} below), this motivates the following definition of the new coordinates as \begin{equation}\label{eq:def_SIC(-)} \begin{split} \xi^{(-)}&:=-\xi,\qquad\eta^{(-)}=-\eta+\frac{1}{2}\ln(1+4\kappa^2),\qquad\lambda^{(-)}=\lambda,\\ {\bf u}^{(-)}&:=\frac{1-4\kappa^2}{4\kappa^2+1}{\bf u}-\frac{1}{\xi}\frac{4}{4\kappa^2+1}{\bf L}\times{\bf u}=\frac{-{\bf a}^{(-)}}{a},\qquad{\bf l}^{(-)}={\bf l}, \end{split} \end{equation} and thus \begin{equation*} {\bf L}^{(-)}\times{\bf u}^{(-)}=\frac{1-4\kappa^2}{4\kappa^2+1}{\bf L}\times{\bf u}+\xi\frac{4\kappa^2}{4\kappa^2+1}{\bf u}. \end{equation*} Moreover, we see that the evolution of \eqref{ODE} in these coordinates reads \begin{equation*} \eta^{(-)}(t+s)=\eta^{(-)}(t)+sq^2/(\xi^{(-)})^3, \end{equation*} and using that \begin{equation*} \begin{split} \xi(\eta+\sigma)={\bf x}\cdot{\bf v}=\xi^{(-)}(\eta^{(-)}+\sigma^{(-)}) \end{split} \end{equation*} we also verify that for the formulas in \eqref{XVinSIC} we have \begin{equation}\label{XVinSIC(-)} \begin{split} {\bf X}(\xi,\eta,\lambda,{\bf u},{\bf L})&={\bf X}(\xi^{(-)},\eta^{(-)},\lambda^{(-)},{\bf u}^{(-)},{\bf L}^{(-)}),\\ {\bf V}(\xi,\eta,\lambda,{\bf u},{\bf L})&=-{\bf V}(\xi^{(-)},\eta^{(-)},\lambda^{(-)},{\bf u}^{(-)},{\bf L}^{(-)}). \end{split} \end{equation} \subsubsection{Far and close formulations}\label{ssec:farclose} For weights that are not conserved by the linear flow and for derivatives, we will have to also work with the alternative formulation of Section \ref{ssec:pinnedframe} in terms of $\nu'$. 
The associated nonlinear unknown, which replaces $\gamma$ as in \eqref{NewNLUnknown_Intro}, is then \begin{equation} \gamma^\prime:=\nu^\prime\circ\mathcal{T}^{-1}\circ\Phi_t^{-1},\quad \gamma^\prime(\vartheta,{\bf a},t)=\nu({\bf X}(\vartheta+t{\bf a},{\bf a}),{\bf V}(\vartheta+t{\bf a},{\bf a})+\mathcal{W}(t),t), \end{equation} with $\mathcal{T}$ defined in Proposition \ref{PropAA} and $\Phi_t$ defined in \eqref{DefPhiFreeStreaming}, which satisfies (compare \eqref{NewHamiltonian2}) \begin{equation}\label{NewNLEq'} \begin{split} \partial_t\gamma'+\{\mathbb{H}_4',\gamma'\}=0,\qquad \gamma'(t=0)=\nu_0',\qquad\mathbb{H}_4'=Q\psi(\widetilde{\bf X})-\dot{\mathcal{W}}\cdot \widetilde{\X}, \end{split} \end{equation} with, as in \eqref{NewVP}, \begin{equation} \begin{aligned} \psi({\bf y},t)&=-\frac{1}{4\pi}\iint \frac{1}{\vert {\bf y}-\widetilde{\bf X}(\theta,\alpha)\vert}(\gamma')^2(\theta,\alpha,t)\, d\theta d\alpha=-\frac{1}{4\pi}\iint \frac{1}{\vert {\bf y}-\widetilde{\bf X}(\theta,\alpha)\vert}\gamma^2(\theta,\alpha,t)\, d\theta d\alpha,\\ \dot{\mathcal{W}}&=\mathcal{Q}\nabla_x\psi(0,t),\qquad\mathcal{W}(t)\to_{t\to T^\ast}0. \end{aligned} \end{equation} With the notation \begin{equation} \Sigma_u:\mathcal{P}_{{\bf x},{\bf v}}\to\mathcal{P}_{{\bf x},{\bf v}},\;({\bf x},{\bf v})\mapsto({\bf x},{\bf v}+u),\qquad u\in\mathbb{R}^3, \end{equation} by construction in \eqref{NewNLEq'} we have that \begin{equation}\label{eq:defM_t} \gamma'=\gamma\circ\mathcal{M}_t,\qquad \mathcal{M}_t:=\Phi_t\circ\mathcal{T}\circ\Sigma_{\mathcal{W}(t)}\circ\mathcal{T}^{-1}\circ\Phi_t^{-1}, \end{equation} and $\mathcal{M}_t:\mathcal{P}_{\vartheta,{\bf a}}\to\mathcal{P}_{\vartheta,{\bf a}}$ is a canonical diffeomorphism. 
The distinction between $\gamma$ and $\gamma^\prime$ is relevant when $\widetilde{\X}={\bf X}\circ\Phi_t^{-1}$ is relatively small resp.\ large, i.e.\ in the ``close'' and ``far'' regions \begin{equation} \Omega_t^{cl}:=\{(\vartheta,{\bf a}):\vert\widetilde{\X}(\vartheta,{\bf a})\vert\leq 10\ip{t}\},\qquad \Omega_t^{far}:=\{(\vartheta,{\bf a}):\vert\widetilde{\X}(\vartheta,{\bf a})\vert\geq \ip{t}\}, \end{equation} which cover phase space $\mathcal{P}_{\vartheta,{\bf a}}=\Omega_t^{cl}\cup\Omega_t^{far}$. We note that \begin{equation}\label{eq:XMinv} \widetilde{\X}\circ\mathcal{M}_t=\widetilde{\X}, \end{equation} so $\Omega_t^\ast$, $\ast\in\{cl,far\}$, are invariant under $\mathcal{M}_t$, and in particular \begin{equation}\label{eq:E_clfar} \iint_{\Omega_t^\ast} F(\widetilde{\X}(\vartheta,{\bf a}))\gamma^2(\vartheta,{\bf a})d\vartheta d{\bf a}=\iint_{\Omega_t^\ast} F(\widetilde{\X}(\vartheta,{\bf a}))(\gamma')^2(\vartheta,{\bf a})d\vartheta d{\bf a}. \end{equation} If $\mathfrak{w}:\mathcal{P}_{\vartheta,{\bf a}}\to\mathbb{R}$ is a weight function, then we have that \begin{equation}\label{eq:weightprimes} \mathfrak{w}\gamma'=(\mathfrak{w}'\gamma)\circ\mathcal{M}_t,\qquad \mathfrak{w}':=\mathfrak{w}\circ\mathcal{M}_t^{-1}, \end{equation} and we have the following bounds between primed and unprimed weights (in the super-integrable coordinates \eqref{SIC} of Section \ref{ssec:SIC}): \begin{lemma}\label{lem:prime_bds} Let $(a',\xi',\lambda',\eta'):=(a,\xi,\lambda,\eta)\circ\mathcal{M}^{-1}_t$. Then \begin{equation}\label{prime_bds} \begin{aligned} &\abs{a-a'}+q\abs{\xi-\xi'}/(\xi\xi^\prime)\lesssim \abs{\mathcal{W}(t)}\lesssim 1, \end{aligned} \end{equation} and, on $\Omega_t^{cl}$, there holds that \begin{equation}\label{prime_bds2} \begin{aligned} &\abs{\lambda-\lambda'}\lesssim \ip{t}\abs{\mathcal{W}(t)},\\ &\abs{\eta-\eta'}\lesssim \ip{t}\abs{\mathcal{W}(t)}(a^2+(a')^2+a+a')+\ln(1+2\sqrt{(a+a')\ip{t}\abs{\mathcal{W}(t)}}).
\end{aligned} \end{equation} In particular, moments in $a$ on $\gamma$, $\gamma^\prime$ are comparable in the sense that \begin{equation}\label{ComparableMomentNorms} \begin{split} \Vert \langle a\rangle^p\gamma_1\Vert_{L^r}&\le2\Vert \langle a\rangle^p\gamma_2\Vert_{L^r}+C_p\vert\mathcal{W}(t)\vert^p\Vert\gamma_2\Vert_{L^r} \end{split} \end{equation} for $\gamma_1,\gamma_2\in\{\gamma,\gamma^\prime\}$, $p\ge 0$, $r\in\{2,\infty\}$. \end{lemma} \begin{proof} We have that \begin{equation}\label{eq:diffa2} a^2-(a')^2=\abs{{\bf V}}^2+\frac{q}{\aabs{{\bf X}}}-\abs{{\bf V}-\mathcal{W}(t)}^2-\frac{q}{\aabs{{\bf X}}}=2\mathcal{W}(t)\cdot{\bf V}-\abs{\mathcal{W}(t)}^2, \end{equation} and thus (e.g.\ by distinguishing whether $a\leq \abs{\mathcal{W}(t)}$ or $a> \abs{\mathcal{W}(t)}$) \begin{equation} \abs{a-a'}\lesssim \abs{\mathcal{W}(t)}. \end{equation} The bound for $\xi=\frac{q}{a}$ follows directly, whereas we compute that \begin{equation} \abs{\lambda-\lambda'}\lesssim\vert\widetilde{\X}\times\widetilde{\V}-\widetilde{\X}\times(\widetilde{\V}-\mathcal{W}(t))\vert\lesssim\ip{t}\abs{\mathcal{W}(t)}. \end{equation} Finally, we have by \eqref{def:sigma} that $\eta=\frac{a}{q}\vartheta\cdot{\bf a}=\frac{a}{q}({\bf X}\cdot{\bf V})-\sigma$, and thus \begin{equation} \begin{aligned} \eta+t\frac{a^3}{q}&=\eta\circ\Phi_t^{-1}=\frac{a}{q}(\widetilde{\X}\cdot\widetilde{\V})-\sigma\circ\Phi_t^{-1},\\ \eta'+t\frac{(a')^3}{q}&=\eta\circ\Phi_t^{-1}\circ\mathcal{M}^{-1}_t=\frac{a'}{q}(\widetilde{\X}\cdot(\widetilde{\V}-\mathcal{W}(t)))-\sigma\circ\Phi_t^{-1}\circ\mathcal{M}^{-1}_t, \end{aligned} \end{equation} so that \begin{equation}\label{eq:diffeta} \eta-\eta^\prime=t\frac{(a')^3-a^3}{q}+\frac{a-a'}{q}(\widetilde{\X}\cdot\widetilde{\V})+\frac{a'}{q}(\widetilde{\X}\cdot\mathcal{W}(t))+(\sigma\circ\Phi_t^{-1}\circ\mathcal{M}^{-1}_t-\sigma\circ\Phi_t^{-1}).
\end{equation} The first three terms are bounded directly, whereas for the last one we observe that (as can be seen directly from the definition \eqref{eq:def_rho2} of $\rho$ in terms of ${\bf x}$ and ${\bf a}$) there holds that \begin{equation} \abs{\rho\circ\Phi_t^{-1}\circ\mathcal{M}^{-1}_t-\rho\circ\Phi_t^{-1}}\lesssim \abs{a-a'}(a+a')\ip{t}, \end{equation} and thus from \eqref{def:sigma} we obtain \begin{equation} \begin{split} \abs{\sigma\circ\Phi_t^{-1}\circ\mathcal{M}^{-1}_t-\sigma\circ\Phi_t^{-1}} &=\left\vert\ln\left(\frac{\sqrt{\rho}+\sqrt{\rho-1}}{\sqrt{\rho^\prime}+\sqrt{\rho^\prime-1}}\right)\right\vert\lesssim\left\vert\ln\Big(1+\frac{2(\rho-\rho^\prime)}{(\sqrt{\rho}+\sqrt{\rho^\prime})(\sqrt{\rho-1}+\sqrt{\rho^\prime-1})}\Big)\right\vert\\ &\lesssim \ln(1+2\sqrt{\abs{a-a'}(a+a')\ip{t}}). \end{split} \end{equation} Now the equivalence of norms \eqref{ComparableMomentNorms} follows from the bounds in \eqref{prime_bds} using \eqref{eq:weightprimes} as well. \end{proof} Furthermore, we need to detail how the linear trajectories interact with the close/far regions. The following lemma shows that a trajectory of the linearized system \eqref{ODE} can enter or exit the close region $\Omega_t^{cl}$ at most once. \begin{lemma}\label{lem:traj_cl_far} Let $({\bf X}(t),{\bf V}(t))$ be a trajectory of \eqref{ODE}. Assume that for $t_1<t_2$ we have that ${\bf X}(t_1)\in\Omega_{t_1}^{cl}\setminus \Omega_{t_1}^{far}$ and ${\bf X}(t_2)\in \Omega_{t_2}^{far}\setminus\Omega_{t_2}^{cl}$. Then for all $t\geq t_2$ there holds that ${\bf X}(t)\in\Omega_t^{far}$.
\end{lemma} \begin{proof} We recall from Remark \ref{rem:virials} that \begin{equation}\label{SecondOrderDerXVX} \frac{d^2}{dt^2}\abs{{\bf X}(t)}^2=\abs{{\bf V}(t)}^2+H,\qquad\frac{d}{dt}\vert{\bf X}(t)\vert^2=2{\bf X}(t)\cdot{\bf V}(t),\qquad\frac{d}{dt}\vert{\bf V}(t)\vert^2=q\frac{{\bf X}(t)\cdot{\bf V}(t)}{\vert{\bf X}(t)\vert^3}, \end{equation} so that $t\mapsto\vert{\bf X}(t)\vert^2$ is convex; in particular it has at most one minimum, and $\vert{\bf X}\vert$ and $\vert{\bf V}\vert$ have the same monotonicity. To prove the claim, we may assume that $0\le t_1< t_2$ and \begin{equation} \abs{{\bf X}(t_1)}= \langle t_1\rangle,\qquad \abs{{\bf X}(t_2)}= 10\langle t_2\rangle. \end{equation} Since $\abs{{\bf X}(t_2)}>\abs{{\bf X}(t_1)}$ we have that the periapsis occurs before $t_2$, and thus $\abs{{\bf V}(t)}$ is monotone increasing on $[t_2,\infty)$. By convexity, we see that $\vert {\bf X}(s)\vert\le\vert{\bf X}(t_2)\vert$ for $t_1\le s\le t_2$, and by conservation of energy, this implies that $\vert{\bf V}(s)\vert\le\vert{\bf V}(t_2)\vert$. If $\vert{\bf V}(t_2)\vert\le 9$, we obtain that \begin{equation*} \vert {\bf X}(t_2)\vert\le (t_2-t_1)\vert{\bf V}(t_2)\vert+\vert{\bf X}(t_1)\vert\le 9(t_2-t_1)+\langle t_1\rangle< 10\langle t_2\rangle, \end{equation*} which is impossible; hence $\vert{\bf V}(t_2)\vert>9$. Now, since $\vert {\bf V}(t)\vert$ is increasing on $(t_2,\infty)$, integrating the first equation in \eqref{SecondOrderDerXVX} twice, we find that, for $t\ge t_2$, \begin{equation*} \abs{{\bf X}(t)}^2\geq \abs{{\bf X}(t_2)}^2+(\abs{{\bf V}(t_2)}^2+H)(t-t_2)^2/2\geq 9^2(1+t_2^2 +(t-t_2)^2/2)\geq 3(1+ t^2), \end{equation*} and thus ${\bf X}(t)\in\Omega_t^{far}$. \end{proof} \section{Some kinematics and Poisson brackets}\label{ssec:kinematics} We now develop quantitative estimates on some dynamically relevant quantities.
We recall from \eqref{XVinSIC} the expressions of ${\bf X}$ and ${\bf V}$ in terms of the super-integrable coordinates as \begin{equation}\label{eq:XVgen} {\bf X}=X_1(\xi,\eta,\lambda){\bf u}+X_3(\xi,\eta,\lambda){\bf L}\times{\bf u},\qquad {\bf V}=V_1(\xi,\eta,\lambda){\bf u}+V_3(\xi,\eta,\lambda){\bf L}\times{\bf u}, \end{equation} with \begin{equation}\label{eq:XX1X3} \begin{aligned} X_1(\xi,\eta,\lambda)&=\frac{\xi^2}{q}\left(\eta+\sigma+\frac{1}{2}+\frac{D}{1+4\kappa^2}\right),\quad X_3(\xi,\eta,\lambda)=-\frac{\xi}{q}\left(1+\frac{2D}{1+4\kappa^2}\right),\\%=\lambda^{-1}X_2,\\ V_1(\xi,\eta,\lambda)&=\frac{q}{\xi}\left(1-\frac{1}{2N}-\frac{1}{1+4\kappa^2}\frac{D}{N}\right),\quad V_3(\xi,\eta,\lambda)=\frac{q}{\xi^2}\frac{2}{1+4\kappa^2}\frac{D}{N}, \end{aligned} \end{equation} where we abbreviated $D=D(\kappa,\eta+\sigma)$ and $N=N(\kappa,\eta+\sigma)$. Since only $\eta$ evolves along a trajectory of \eqref{ODE}, with \eqref{eq:eta_evol} we directly obtain the corresponding expressions \begin{equation} \widetilde{\X}={\bf X}(\xi,\eta+tq^2\xi^{-3},\lambda,{\bf u},{\bf L}),\qquad \widetilde{\V}={\bf V}(\xi,\eta+tq^2\xi^{-3},\lambda,{\bf u},{\bf L}). 
\end{equation} In the following, it will be useful to recall some simple bounds on the functions involved in the formulas defined in \eqref{DefDN}: \begin{equation}\label{BoundsDN} \begin{split} 0\le D(\kappa,y)\le 2\sqrt{y^2+\kappa^2+1/4},\quad \mathfrak{1}_{\{y>0\}}D(\kappa,y)\le\frac{\kappa^2+1/4}{\sqrt{y^2+\kappa^2+1/4}},\\ \partial_yD(\kappa,y)=-\frac{D(\kappa,y)}{\sqrt{y^2+\kappa^2+1/4}},\qquad\partial_\kappa D(\kappa,y)=\frac{\kappa}{\sqrt{y^2+\kappa^2+1/4}}=\partial_\kappa N(\kappa,y),\\ \max\{1,\vert y\vert,\kappa\}\le N(\kappa,y)\le 1+\vert y\vert+\kappa,\qquad -1\le \partial_y N(\kappa,y),\partial_\kappa N(\kappa,y)\le 1, \end{split} \end{equation} and \begin{equation}\label{SecondOrderDerDNFormulas} \begin{split} \partial_y^2D(\kappa,y)=\partial_y^2N(\kappa,y)&=\frac{D(\kappa,y)}{y^2+\kappa^2+1/4}\frac{\sqrt{y^2+\kappa^2+1/4}+y}{\sqrt{y^2+\kappa^2+1/4}},\\ \partial_\kappa\partial_yD(\kappa,y)=\partial_\kappa\partial_yN(\kappa,y)&=-\frac{y\kappa}{(y^2+\kappa^2+1/4)^\frac{3}{2}},\\ \partial_\kappa^2D(\kappa,y)=\partial_\kappa^2N(\kappa,y)&=\frac{y^2+1/4}{(y^2+\kappa^2+1/4)^\frac{3}{2}}. \end{split} \end{equation} \subsubsection{Estimates on ${\bf X}$.} Looking at \eqref{Formula|x|} at periapsis (when ${\bf x}\cdot{\bf v}=0$), we find that \begin{equation}\label{CrudeBoundX} \begin{split} \vert{\bf X}\vert\ge\frac{\xi^2}{2q}\langle\kappa\rangle \end{split} \end{equation} but we need more precise bounds. \begin{corollary}\label{CorBdsOnX} We have the uniform bounds \begin{equation}\label{BoundXV} \begin{split} \frac{1}{10}\sqrt{\eta^2+\kappa^2+\frac{1}{4}}\le \sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\kappa^2+\frac{1}{4}}\le 10\sqrt{\eta^2+\kappa^2+\frac{1}{4}} \end{split} \end{equation} and \begin{equation}\label{BoundsX} \begin{split} \frac{ta}{100}-100\cdot\left[1+\frac{q}{a^2}\left(\abs{\eta}+\kappa\right)\right]\le \vert {\bf X}(\vartheta+t{\bf a},{\bf a})\vert\le 100 ta+100\cdot\left[1+\frac{q}{a^2}\left(\abs{\eta}+\kappa\right)\right]. 
\end{split} \end{equation} \end{corollary} \begin{proof}[Proof of Corollary \ref{CorBdsOnX}] The bounds \eqref{BoundXV} follow from \eqref{BoundsOnSigma} since ${\bf x}\cdot{\bf v}=\eta+\sigma$. The bound for ${\bf X}_{-1}$ follows from the estimates \eqref{BoundsDN}. The bound on $\vert{\bf X}\vert$ follows from the first formula in \eqref{ExpressionsX}. If ${\bf x}\cdot{\bf v}>0$, or if $4\kappa^2\notin [1/2,2]$, \begin{equation*} \begin{split} \vert{\bf X}\vert\ge \vert {\bf X}\cdot\frac{\bf a}{a}\vert\ge \frac{q}{a^2}\left[\frac{1}{2}+\frac{q^2}{(2R)^2}\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{(2R)^2}{4q^2}}+\frac{4a^2L^2}{(2R)^2}\frac{a}{q}({\bf x}\cdot{\bf v})\right]\geq\frac{q}{a^2}(1+\frac{a}{q}\abs{{\bf x}\cdot{\bf v}}), \end{split} \end{equation*} while if ${\bf x}\cdot{\bf v}<0$ and $1/2\le 4\kappa^2\le 2$, we use the orthogonal direction to get \begin{equation*} \begin{split} \vert{\bf X}\vert\ge \vert {\bf X}\cdot\frac{{\bf L}\times{\bf a}}{aL}\vert\ge\frac{2aL}{a^2}\left[\frac{1}{2}+\frac{q^2}{(2R)^2}\left(\sqrt{\frac{a^2}{q^2}({\bf x}\cdot{\bf v})^2+\frac{(2R)^2}{4q^2}}-\frac{a}{q}({\bf x}\cdot{\bf v})\right)\right]\ge \frac{q}{2a^2}\vert\eta\vert-\frac{L}{a}\vert\sigma\vert. \end{split} \end{equation*} \end{proof} \subsection{Poisson brackets}\label{sec:PB_bounds} We will make extensive use of the properties of the Poisson bracket \begin{equation}\label{PB2} \{f,g\}=\nabla_{\bf x} f\cdot \nabla_{\bf v} g-\nabla_{\bf v} f\cdot \nabla_{\bf x} g. \end{equation} Recalling that by construction the change of variables $\mathcal{T}:({\bf x},{\bf v})\mapsto(\mathcal{A},\Theta)$ in Proposition \ref{PropAA} is canonical, we see that $\mathcal{T}$ leaves the Poisson bracket invariant: for any functions $f,g$ we have that $\{f,g\}=\{f\circ \mathcal{T},g\circ\mathcal{T}\}=\{f\circ \mathcal{T}^{-1},g\circ\mathcal{T}^{-1}\}$. 
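For completeness, we recall the standard computation behind this invariance, sketched here in one degree of freedom: if $\Psi(x,v)=(Q,P)$ is canonical, i.e.\ $\{Q,P\}=1$, then by the chain rule

```latex
\begin{equation*}
\begin{split}
\{f\circ\Psi,g\circ\Psi\}
&=\big((\partial_Qf)\partial_xQ+(\partial_Pf)\partial_xP\big)\big((\partial_Qg)\partial_vQ+(\partial_Pg)\partial_vP\big)\\
&\quad-\big((\partial_Qf)\partial_vQ+(\partial_Pf)\partial_vP\big)\big((\partial_Qg)\partial_xQ+(\partial_Pg)\partial_xP\big)\\
&=\big(\partial_Qf\,\partial_Pg-\partial_Pf\,\partial_Qg\big)\circ\Psi\cdot\{Q,P\}=\{f,g\}\circ\Psi,
\end{split}
\end{equation*}
```

where the derivatives $\partial_Qf$, etc., are evaluated at $\Psi(x,v)$, and the terms proportional to $\partial_Qf\,\partial_Qg$ and $\partial_Pf\,\partial_Pg$ cancel.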
In other words, we can compute Poisson brackets in either system of coordinates, physical $({\bf x},{\bf v})$ or angle-action $(\vartheta,{\bf a})$, and will slightly abuse the notation by simply writing this as \begin{equation} \{f,g\}=\nabla_{\bf x} f\cdot \nabla_{\bf v} g-\nabla_{\bf v} f\cdot \nabla_{\bf x} g=\nabla_\vartheta f\cdot \nabla_{\bf a} g-\nabla_{\bf a} f\cdot \nabla_\vartheta g. \end{equation} Two further useful facts are the Leibniz rule and the Jacobi identity \begin{equation}\label{Jacobi} \begin{split} 0=\{f,\{g,h\}\}+\{g,\{h,f\}\}+\{h,\{f,g\}\}, \end{split} \end{equation} which can be verified by straightforward computations. Finally, the nonlinear analysis will exploit in important ways that the integral of a Poisson bracket vanishes, \begin{equation}\label{ZeroIntegralPB} \begin{split} \iint \{f,g\} d{\bf x} d{\bf v}=0, \end{split} \end{equation} provided the derivatives of the functions have appropriate decay. Moreover, the Poisson brackets define \emph{symplectic gradients} which we will use as vector fields to control regularity. \begin{remark} Not all vector fields are symplectic gradients (symplectic gradients are divergence-free), but the canonical basis consists of symplectic gradients.
In addition, we have a few distinguished vector fields: \begin{equation*} \begin{split} \{\cdot,a^2\}&=\Big\{\cdot,\vert{\bf v}\vert^2+\frac{q}{\vert{\bf x}\vert}\Big\}=2{\bf v}\cdot\nabla_{\bf x}+2q\frac{{\bf x}}{\vert{\bf x}\vert^3}\cdot\nabla_{\bf v},\\ \{\cdot,{\bf L}^j\}&=\in^{jkl}\vartheta^k\partial_{\vartheta^l}-\in^{jkl}{\bf a}^l\partial_{{\bf a}^k},\qquad \{\cdot,\vartheta\cdot{\bf a}\}=\vartheta\cdot\nabla_{\vartheta}-{\bf a}\cdot\nabla_{\bf a}, \end{split} \end{equation*} and we recognize that the first vector field above is nothing but the Hamiltonian vector field associated to the linearized dynamics, while the next two are the Noether vector fields associated to the invariance of the equations under rotations $({\bf x},{\bf v})\mapsto (\mathcal{R}{\bf x},\mathcal{R}^{-1}{\bf v})$, $\mathcal{R}\in SO(3)$ and to the invariance of the equation under rescaling $({\bf x},{\bf v})\mapsto (\lambda{\bf x},\lambda^{-1}{\bf v})$, $\lambda\in\mathbb{R}_+$. \end{remark} A crucial first Poisson bracket identity is the one showing that the Kepler problem is integrable, i.e.\ \begin{equation} \{a^2,{\bf L}\}=\{ H,{\bf L}\}=0. \end{equation} We have several nice properties in coordinates $({\bf x},{\bf v})$, in which the nonlinearity has a simple expression: \begin{equation}\label{PBXV} \begin{split} \{{\bf x},a\}&=\frac{1}{2a}\Big\{{\bf x},\vert{\bf v}\vert^2+\frac{q}{\vert{\bf x}\vert}\Big\}=a^{-1}{\bf v},\qquad \{{\bf x},L\}={\bf l}\times{\bf x},\\ \{{\bf v},a\}&=\frac{q}{2a}\frac{{\bf x}}{\vert{\bf x}\vert^3},\qquad\{{\bf v},L\}={\bf l}\times{\bf v}. \end{split} \end{equation} Both classical coordinates $({\bf x},{\bf v})$ and angle-action coordinates $(\vartheta,{\bf a})$ satisfy the canonical Poisson bracket relations \begin{equation*} \{\vartheta^j,\vartheta^k\}=\{{\bf a}^j,{\bf a}^k\}=0=\{{\bf x}^j,{\bf x}^k\}=\{{\bf v}^j,{\bf v}^k\},\qquad\{\vartheta^j,{\bf a}^k\}=\delta_{jk}=\{{\bf x}^j,{\bf v}^k\}. 
\end{equation*} While the super-integrable coordinates \eqref{SIC} are better adapted to the linear evolution, their Poisson bracket relations are not canonical anymore, but still relatively convenient: we have that \begin{equation}\label{PBSIC} \begin{split} \{\xi,\eta\}&=1,\qquad 0=\{\xi,\lambda\}=\{\xi,{\bf u}\}=\{ \xi,{\bf L}\}=\{\eta,\lambda\}=\{\eta,{\bf u}\}=\{\eta,{\bf L}\}=\{\lambda,{\bf L}\},\\ \{\lambda,{\bf u}\}&=-{\bf l}\times{\bf u},\quad\{{\bf u}^j,{\bf u}^k\}=0,\quad \{{\bf L}^j,{\bf u}^k\}=\in^{jka}{\bf u}^a,\quad \{{\bf L}^j,{\bf L}^k\}=\in^{jka}{\bf L}^a, \end{split} \end{equation} and $\{({\bf L}\times{\bf u})^a,{\bf L}^b\}={\bf L}^a{\bf u}^b-{\bf L}^b{\bf u}^a$. \begin{remark}\label{PBWithLambdaRmk} Note that Poisson brackets with $\lambda$ follow from those with ${\bf L}$, since for any scalar function $\zeta$ there holds that \begin{equation*}\label{PBWithLambda} \{\zeta,\lambda\}={\bf l}^j\{\zeta,{\bf L}^j\}. \end{equation*} However, for some computations it is useful to keep treating the Poisson bracket with $\lambda$ separately. \end{remark} \subsubsection{Estimates on derivatives of ${\bf X}$ and ${\bf V}$} In treating the nonlinear terms, Poisson brackets of the kinematic quantities $\widetilde{\X}$ and $\widetilde{\V}$ with the super-integrable coordinates arise. With \eqref{PBXV} we can explicitly compute some of the relevant Poisson brackets: we have that \begin{equation}\label{PBXV2} \begin{aligned} \{a,\widetilde{\X}\}&=-\frac{\widetilde{\V}}{a},\qquad \{\xi,\widetilde{\X}\}=\frac{\xi^2}{q}\frac{\widetilde{\V}}{a},\qquad \{\lambda,\widetilde{\X}\}=-{\bf l}\times\widetilde{\X},\\ \{a,\widetilde{\V}\}&=-\frac{\xi}{2}\frac{\widetilde{\X}}{\vert\widetilde{\X}\vert^3},\qquad \{\xi,\widetilde{\V}\}=\frac{\xi^3}{2q}\frac{\widetilde{\X}}{\vert\widetilde{\X}\vert^3}, \qquad \{\lambda,\widetilde{\V}\}=-{\bf l}\times\widetilde{\V}. 
\end{aligned} \end{equation} As a consequence, \begin{equation}\label{DoublePoissonBracketsWithXi} \begin{aligned} \{\{\xi,\widetilde{\X}^j\},\zeta\}&=\frac{\xi^2}{q^2}\left[3\widetilde{\V}^j\{\xi,\zeta\}+\xi\{\widetilde{\V}^j,\zeta\}\right],\\ \{\{\xi,\widetilde{\V}^j\},\zeta\}&=3\frac{\xi^2}{2q}\frac{\widetilde{\X}^j}{\vert\widetilde{\X}\vert^3}\{\xi,\zeta\}+\frac{\xi^3}{2q}\frac{1}{\vert\widetilde{\X}\vert^3}\left[\{\widetilde{\X}^j,\zeta\}-3\frac{\widetilde{\X}^j\widetilde{\X}^k}{\vert\widetilde{\X}\vert^2}\{\widetilde{\X}^k,\zeta\}\right]. \end{aligned} \end{equation} More generally, for a vector of the form $\bm{U}=U_1(\xi,\eta,\lambda){\bf u}+U_3(\xi,\eta,\lambda){\bf L}\times{\bf u}$ (such as ${\bf X}$ and ${\bf V}$ in \eqref{eq:XVgen} and \eqref{eq:XX1X3}), and $\alpha\in\{\xi,\eta,\lambda\}$, we slightly abuse notation by writing $\partial_\alpha\bm{U}=\partial_\alpha U_1{\bf u}+\partial_\alpha U_3{\bf L}\times{\bf u}$, and can then use the chain rule and the Poisson bracket relations \eqref{PBSIC} to resolve Poisson brackets of a scalar function $\zeta$ with $\bm{U}$ as \begin{equation}\label{eq:U_resol} \{\bm{U}^j,\zeta\}=\{\bm{U}^j,\eta\}\{\xi,\zeta\}-\{\bm{U}^j,\xi\}\{\eta,\zeta\}+\partial_\lambda\bm{U}^j\{\lambda,\zeta\}+U_1\{{\bf u}^j,\zeta\}+U_3\in^{jcd}{\bf L}^c\{{\bf u}^d,\zeta\}+U_3\in^{jcd}{\bf u}^d\{{\bf L}^c,\zeta\}, \end{equation} where we have used that $\partial_\xi\bm{U}=\{\bm{U},\eta\}$ and $\partial_\eta\bm{U}=-\{\bm{U},\xi\}$. We now collect bounds on the Poisson brackets. From \eqref{PBXV2} it directly follows that \begin{equation}\label{PBX1} \begin{split} \vert\{\xi,\widetilde{\bf X}\}\vert\le\xi^2/q,\qquad\vert\{\lambda,\widetilde{\bf X}\}\vert=\vert\widetilde{\bf X}\vert,\qquad \vert\{\xi,\widetilde{\bf V}\}\vert\le\frac{\xi^3}{2q}\frac{1}{\vert\widetilde{\bf X}\vert^2},\qquad\vert\{\lambda,\widetilde{\bf V}\}\vert=\vert\widetilde{\bf V}\vert. 
\end{split} \end{equation} Furthermore, we have: \begin{lemma}\label{lem:moreXVbds} We have the following bounds for first order Poisson brackets \begin{equation}\label{PBX1'} \begin{alignedat}{3} \vert\partial_\lambda\widetilde{\X}\vert&\lesssim \xi^{-1}\kappa\ip{\kappa}^{-2}\vert\widetilde{\X}\vert,\qquad &\vert\{\widetilde{\X},\eta\}\vert &\lesssim \xi^{-1}\vert\widetilde{\X}\vert+tq\xi^{-2},\\ \vert\partial_\lambda\widetilde{\V}\vert &\lesssim q\xi^{-2}\kappa\ip{\kappa}^{-3},\qquad &\vert\{\widetilde{\V},\eta\}\vert &\lesssim q\xi^{-2}(1+t\xi \vert\widetilde{\X}\vert^{-2}), \end{alignedat} \end{equation} and for second order Poisson brackets for $\widetilde{\X}$: \begin{equation}\label{PBX1primeprime} \begin{alignedat}{3} \vert\{\{\widetilde{\X},\xi\},\xi\}\vert&\lesssim (\xi^2/q)^3\cdot\vert\widetilde{\X}\vert^{-2},\qquad& \vert\{\{\widetilde{\X},\xi\},\eta\}\vert&\lesssim \xi/q\cdot (1+t\xi\vert\widetilde{\X}\vert^{-2}),\\ \vert\{\{\widetilde{\X},\eta\},\eta\}\vert &\lesssim \xi^{-2}\vert\widetilde{\X}\vert+tq\xi^{-3}+t^2q\xi^{-2}\vert\widetilde{\X}\vert^{-2},\qquad& \vert\partial_\lambda^2\widetilde{\X}\vert &\lesssim \xi^{-2}\ip{\kappa}^{-2}\vert\widetilde{\X}\vert,\\ \vert\partial_\lambda\{\xi,\widetilde{\X}\}\vert&\lesssim\xi/q\cdot\kappa\langle\kappa\rangle^{-3},\qquad &\vert\partial_\lambda\{\widetilde{\X},\eta\}\vert &\lesssim \xi^{-2}\ip{\kappa}^{-1}(\vert\widetilde{\X}\vert+tq\xi^{-1}),\\ \end{alignedat} \end{equation} and for second order Poisson brackets for $\widetilde{\V}$: \begin{equation}\label{PBV1primeprime} \begin{alignedat}{3} \vert\{\{\widetilde{\V},\xi\},\xi\}\vert&\lesssim \xi^5/q^2\cdot\vert\widetilde{\X}\vert^{-3},\qquad& \vert\{\{\widetilde{\V},\xi\},\eta\}\vert&\lesssim \xi^{2}/q\cdot \vert\widetilde{\X}\vert^{-2}(1+tq/(\xi\vert\widetilde{\X}\vert)),\\ \vert\{\{\widetilde{\V},\eta\},\eta\}\vert &\lesssim q\xi^{-3}+tq\xi^{-2}\vert\widetilde{\X}\vert^{-2}+t^2q^2\xi^{-3}\vert\widetilde{\X}\vert^{-3},\qquad& \vert\partial_\lambda^2 
\widetilde{\V}\vert&\lesssim q\ip{\kappa}^{-3}\xi^{-3},\\ \vert\partial_\lambda\{\xi,\widetilde{\V}\}\vert&\lesssim\xi^2/q\cdot\kappa\langle\kappa\rangle^{-2}\vert\widetilde{\X}\vert^{-2}, & \vert\partial_\lambda\{\widetilde{\V},\eta\}\vert &\lesssim q\xi^{-3}\langle\kappa\rangle^{-1} (\langle\kappa\rangle^{-1}+t\xi\vert\widetilde{\X}\vert^{-2}). \end{alignedat} \end{equation} \end{lemma} \begin{proof} We begin by noting that we can rewrite \eqref{XVinSIC} as \begin{equation*} \begin{split} {\bf X}&=\frac{\xi^2}{q}\left(\frac{N}{1+4\kappa^2}+\frac{4\kappa^2}{1+4\kappa^2}(\eta+\sigma+\frac{1}{2})\right){\bf u}-\frac{\xi^2}{q}\kappa\left(1+\frac{2D}{1+4\kappa^2}\right){\bf l}\times{\bf u} \end{split} \end{equation*} and therefore (e.g.\ by distinguishing whether $\kappa>1/4$ or $\kappa\leq 1/4$ and using the bound on $N$ in \eqref{BoundsDN}), \begin{equation}\label{eq:Xprelimbd} \frac{\xi^2}{q}\ip{\kappa}(1+\langle\kappa\rangle^{-2}D)\lesssim \abs{{\bf X}}, \end{equation} and in particular $\abs{X_3}\lesssim \xi^{-1}\ip{\kappa}^{-1}\abs{{\bf X}}$.
Moreover, since $\kappa=\lambda/\xi$ and $\sigma$ essentially only depends on $\rho(\eta,\kappa)$ (see \eqref{def:sigma}), we have that \begin{equation} -\xi\partial_\xi\kappa=\lambda\partial_\lambda\kappa=\kappa,\qquad -\xi\partial_\xi\sigma=\lambda\partial_\lambda\sigma=\kappa\partial_\kappa\sigma, \end{equation} and thus for $D=D(\kappa,\eta+\sigma)$ we have that $-\xi\partial_\xi D=\lambda\partial_\lambda D$ and $(\xi\partial_\xi)^2 D=(\lambda\partial_\lambda)^2 D$ and \begin{equation}\label{SecondDerDN} \begin{split} \xi\partial_\lambda D&=\partial_\kappa D+\partial_\kappa\sigma\partial_yD,\\ (\xi\partial_\lambda)^2D&=\partial_\kappa^2D+2\partial_\kappa\sigma\partial_\kappa\partial_yD+(\partial_\kappa\sigma)^2\partial_y^2D+\partial_\kappa^2\sigma\partial_yD,\\ \xi\partial_\lambda N&=\partial_\kappa D+\partial_\kappa\sigma\partial_yN,\\ (\xi\partial_\lambda)^2N&=\partial_\kappa^2D+2\partial_\kappa\sigma\partial_\kappa\partial_yD+(\partial_\kappa\sigma)^2\partial_y^2N+\partial_\kappa^2\sigma\partial_yN. \end{split} \end{equation} Using \eqref{BoundsOnSigma} and \eqref{BoundsDN}-\eqref{SecondOrderDerDNFormulas}, we obtain that \begin{equation}\label{eq:dlD} \begin{split} \xi\abs{\partial_\lambda D}&\lesssim \frac{\kappa}{N}(1+\ip{\kappa}^{-2}D),\qquad\langle\kappa\rangle \vert (\xi\partial_\lambda)^2D\vert\lesssim 1, \end{split} \end{equation} and similarly \begin{equation}\label{eq:dlN} \xi\abs{\partial_\lambda N}\lesssim \kappa (\ip{\kappa}^{-2}+\frac{1}{N})\lesssim\frac{\kappa}{\ip{\kappa}},\qquad\langle\kappa\rangle \vert (\xi\partial_\lambda)^2N\vert\lesssim 1. \end{equation} {\bf Step 1}: first order derivatives. We are now ready to prove \eqref{PBX1'}. 
From \eqref{eq:XX1X3} we have that \begin{equation}\label{eq:dlX} \begin{aligned} \partial_\lambda X_1&=\frac{\xi}{q}\left(\partial_\kappa\sigma+\frac{\xi\partial_\lambda D}{1+4\kappa^2}-\frac{D}{1+4\kappa^2}\frac{8\kappa}{1+4\kappa^2}\right),\\ \partial_\lambda X_3&=-\frac{2}{q}\left(\frac{\xi\partial_\lambda D}{1+4\kappa^2}-\frac{D}{1+4\kappa^2}\frac{8\kappa}{1+4\kappa^2}\right), \end{aligned} \end{equation} and thus \begin{equation} \abs{\partial_\lambda X_1}+\xi\abs{\partial_\lambda X_3} \lesssim \xi^{-1}\kappa\ip{\kappa}^{-3}\abs{{\bf X}}, \end{equation} so that \begin{equation} \abs{\partial_\lambda{\bf X}}\lesssim \xi^{-1}\kappa\ip{\kappa}^{-2}\abs{{\bf X}}. \end{equation} Since \begin{equation}\label{eq:xidxiX} \xi\partial_\xi X_1=2X_1-\lambda\partial_\lambda X_1,\qquad \xi\partial_\xi X_3=X_3-\lambda\partial_\lambda X_3, \end{equation} it directly follows that \begin{equation} \abs{\xi\partial_\xi{\bf X}}\lesssim\abs{{\bf X}}. \end{equation} Moreover, we have that \begin{equation} \{\widetilde{\X},f\}=\{{\bf X},f\circ\Phi_t\}\circ\Phi_t^{-1}, \end{equation} and thus in particular, using \eqref{PBXV2}, \begin{equation}\label{eq:PBXXeta} \{\widetilde{\X},\eta\}=\{{\bf X},\eta-tq^2\xi^{-3}\}\circ\Phi_t^{-1}=\partial_\xi{\bf X}\circ\Phi_t^{-1}-3t\xi^{-1}\widetilde{\V}, \end{equation} which gives the bound \begin{equation} \vert\{\widetilde{\X},\eta\}\vert\lesssim \xi^{-1}\vert\widetilde{\X}\vert+tq\xi^{-2}. 
\end{equation} For ${\bf V}$ we compute that \begin{equation}\label{eq:dlV} \begin{aligned} \partial_\lambda V_1&=\frac{q}{\xi^2}\left(\frac{1}{2N}\frac{\xi\partial_\lambda N}{N}-\frac{1}{1+4\kappa^2}\frac{D}{N}\left(\frac{\xi\partial_\lambda D}{D}-\frac{\xi\partial_\lambda N}{N}-\frac{8\kappa}{1+4\kappa^2}\right)\right),\\ \partial_\lambda V_3&=\frac{q}{\xi^3}\frac{2}{1+4\kappa^2}\frac{D}{N}\left(\frac{\xi\partial_\lambda D}{D}-\frac{\xi\partial_\lambda N}{N}-\frac{8\kappa}{1+4\kappa^2}\right), \end{aligned} \end{equation} and thus, using \eqref{eq:dlD} and \eqref{eq:dlN}, \begin{equation} \abs{\partial_\lambda{\bf V}}\lesssim q\xi^{-2}\kappa\ip{\kappa}^{-3}. \end{equation} With \begin{equation}\label{DVDXi} \xi\partial_\xi V_1=-V_1-\lambda\partial_\lambda V_1,\qquad \xi\partial_\xi V_3=-2V_3-\lambda\partial_\lambda V_3, \end{equation} we obtain the bound \begin{equation}\label{AddedDerXiV} \abs{\xi\partial_\xi{\bf V}}\lesssim q\xi^{-1}. \end{equation} Using \eqref{PBXV2}, \begin{equation}\label{eq:PBVVeta} \{\widetilde{\V},\eta\}=\{{\bf V},\eta-tq^2\xi^{-3}\}\circ\Phi_t^{-1}=\partial_\xi{\bf V}\circ\Phi_t^{-1}-\frac{3}{2}tq\xi^{-1}\frac{\widetilde{\X}}{\vert\widetilde{\X}\vert^3}, \end{equation} and we deduce that \begin{equation} \vert\{\widetilde{\V},\eta\}\vert\lesssim q\xi^{-2}(1+t\xi \vert\widetilde{\X}\vert^{-2}). \end{equation} {\bf Step 2}: second order derivatives. We now turn to \eqref{PBX1primeprime}. The double Poisson brackets involving $\xi$ follow from \eqref{DoublePoissonBracketsWithXi} and \eqref{PBX1}-\eqref{PBX1'}. 
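For instance, taking $\zeta=\xi$ in the first identity of \eqref{DoublePoissonBracketsWithXi} and using that $\{\xi,\xi\}=0$ together with the formula for $\{\xi,\widetilde{\V}\}$ in \eqref{PBXV2}, we find

```latex
\begin{equation*}
\{\{\xi,\widetilde{\X}^j\},\xi\}=\frac{\xi^3}{q^2}\{\widetilde{\V}^j,\xi\}
=-\frac{\xi^3}{q^2}\cdot\frac{\xi^3}{2q}\frac{\widetilde{\X}^j}{\vert\widetilde{\X}\vert^3},
\qquad\text{whence}\qquad
\vert\{\{\widetilde{\X},\xi\},\xi\}\vert\le\frac{1}{2}\,(\xi^2/q)^3\,\vert\widetilde{\X}\vert^{-2},
\end{equation*}
```

which is the first bound in \eqref{PBX1primeprime}; the remaining brackets involving $\xi$ are treated in the same way.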
From \eqref{eq:dlX} we have that \begin{equation}\label{eq:dl2X} \begin{aligned} \partial_\lambda^2 X_1&=\frac{1}{q}\left(\partial^2_\kappa\sigma+\frac{\xi^2\partial^2_\lambda D}{1+4\kappa^2}-2\frac{\xi\partial_\lambda D}{1+4\kappa^2}\frac{8\kappa}{1+4\kappa^2}-\frac{D}{1+4\kappa^2}\frac{8}{1+4\kappa^2}\frac{1-12\kappa^2}{1+4\kappa^2}\right),\\ \partial^2_\lambda X_3&=-\frac{2}{q\xi}\left(\frac{\xi^2\partial^2_\lambda D}{1+4\kappa^2}-2\frac{\xi\partial_\lambda D}{1+4\kappa^2}\frac{8\kappa}{1+4\kappa^2}-\frac{D}{1+4\kappa^2}\frac{8}{1+4\kappa^2}\frac{1-12\kappa^2}{1+4\kappa^2}\right), \end{aligned} \end{equation} and using \eqref{BoundsOnSigma} and \eqref{eq:dlD}-\eqref{eq:dlN}, then \eqref{eq:Xprelimbd}, we obtain \begin{equation} \abs{\partial_\lambda^2{\bf X}}\lesssim \xi^{-2}\ip{\kappa}^{-2}\abs{{\bf X}}, \end{equation} which, upon differentiating the identities in \eqref{eq:xidxiX} once more, also implies that \begin{equation} \ip{\kappa}\abs{\partial_\lambda\partial_\xi{\bf X}}+\abs{\partial_\xi^2{\bf X}}\lesssim \xi^{-2}\abs{{\bf X}}. \end{equation} By \eqref{eq:PBXXeta} we have that \begin{equation} \begin{aligned} \{\{\widetilde{\X},\eta\},\eta\}&=\{\partial_\xi{\bf X},\eta-tq^2\xi^{-3}\}\circ\Phi_t^{-1}-\{3t\xi^{-1}\widetilde{\V},\eta\}\\ &=\partial_\xi^2{\bf X}\circ\Phi_t^{-1}-3t\xi^{-2}(2\widetilde{\V}+\xi\partial_\xi{\bf V}\circ\Phi_t^{-1})-3t\xi^{-1}\{\widetilde{\V},\eta\}, \end{aligned} \end{equation} where we have used that $\{\partial_\xi {\bf X},\xi\}=\partial_\xi\{{\bf X},\xi\}=-\partial_\xi(\xi^3q^{-2}{\bf V})$. Hence by \eqref{PBX1'} there holds that \begin{equation} \vert\{\{\widetilde{\X},\eta\},\eta\}\vert\lesssim \xi^{-2}\vert\widetilde{\X}\vert+t\xi^{-3}q+t^2\xi^{-2}q\vert\widetilde{\X}\vert^{-2}.
\end{equation} From \eqref{eq:PBXXeta} it also follows that \begin{equation}\label{eq:dlPBetaX} \partial_\lambda\{\widetilde{\X},\eta\}=\partial_\lambda\partial_\xi{\bf X}\circ\Phi_t^{-1}-3t\xi^{-1}\partial_\lambda\widetilde{\V}, \end{equation} which gives the bound \begin{equation} \vert\partial_\lambda\{\widetilde{\X},\eta\}\vert\lesssim \xi^{-2}\ip{\kappa}^{-1}(\vert\widetilde{\X}\vert+tq\xi^{-1}). \end{equation} The bounds on $\widetilde{\V}$ in \eqref{PBV1primeprime} are obtained similarly. The Poisson brackets involving $\xi$ follow from \eqref{DoublePoissonBracketsWithXi} and \eqref{PBX1}-\eqref{PBX1'}. For the other bounds, we compute from \eqref{eq:dlV} that \begin{equation}\label{eq:dl2V} \begin{aligned} \partial_\lambda^2 V_1&=\frac{q}{\xi^3}\Bigg[\frac{1}{2N}\frac{\xi^2\partial_\lambda^2 N}{N}-\frac{1}{N}\left(\frac{\xi\partial_\lambda N}{N}\right)^2-\frac{1}{1+4\kappa^2}\frac{D}{N}\left(\frac{\xi\partial_\lambda D}{D}-\frac{\xi\partial_\lambda N}{N}-\frac{8\kappa}{1+4\kappa^2}\right)^2\\ &\qquad -\frac{1}{1+4\kappa^2}\frac{D}{N}\left(-\left(\frac{\xi\partial_\lambda D}{D}\right)^2+\frac{\xi^2\partial^2_\lambda D}{D}+\left(\frac{\xi\partial_\lambda N}{N}\right)^2-\frac{\xi^2\partial^2_\lambda N}{N}+\frac{8}{1+4\kappa^2}\frac{1-4\kappa^2}{1+4\kappa^2}\right) \Bigg],\\ \partial_\lambda^2 V_3&=\frac{q}{\xi^4}\frac{2}{1+4\kappa^2}\frac{D}{N}\Bigg[\left(\frac{\xi\partial_\lambda D}{D}-\frac{\xi\partial_\lambda N}{N}-\frac{8\kappa}{1+4\kappa^2}\right)^2\\ &\qquad\qquad\qquad -\left(\frac{\xi\partial_\lambda D}{D}\right)^2+\frac{\xi^2\partial_\lambda^2 D}{D}+\left(\frac{\xi\partial_\lambda N}{N}\right)^2-\frac{\xi^2\partial^2_\lambda N}{N}+\frac{8}{1+4\kappa^2}\frac{1-4\kappa^2}{1+4\kappa^2}\Bigg], \end{aligned} \end{equation} and thus, observing that the terms involving $(\xi\partial_\lambda D/D)^2$ cancel in each term, and using \eqref{eq:dlD}-\eqref{eq:dlN}, \begin{equation} \abs{\partial_\lambda^2 {\bf V}}\lesssim q\ip{\kappa}^{-3}\xi^{-3}, \end{equation}
which, upon differentiating \eqref{DVDXi}, also implies that \begin{equation} \ip{\kappa}^2\vert\partial_\lambda\partial_\xi {\bf V}\vert+\vert\partial^2_\xi {\bf V}\vert\lesssim q\xi^{-3}. \end{equation} By \eqref{eq:PBVVeta} we have that \begin{equation} \begin{aligned} \{\{\widetilde{\V},\eta\},\eta\}&=\{\partial_\xi{\bf V},\eta-tq^2\xi^{-3}\}\circ\Phi_t^{-1}-\frac{3}{2}tq\Big\{\xi^{-1}\frac{\widetilde{\X}}{\vert\widetilde{\X}\vert^3},\eta\Big\}\\ &=\partial_\xi^2{\bf V}\circ\Phi_t^{-1}-\frac{3}{2}tq\xi^{-2}\left(2\frac{\widetilde{\X}}{\vert\widetilde{\X}\vert^3}+\xi\partial_\xi\Big(\frac{{\bf X}}{\vert{\bf X}\vert^3}\Big)\circ\Phi_t^{-1}\right)-\frac{3}{2}tq\xi^{-1}\Big\{\frac{\widetilde{\X}}{\vert\widetilde{\X}\vert^3},\eta\Big\}, \end{aligned} \end{equation} and hence \begin{equation} \vert\{\{\widetilde{\V},\eta\},\eta\}\vert\lesssim q\xi^{-3}+tq\xi^{-2}\vert\widetilde{\X}\vert^{-2}+t^2q^2\xi^{-3}\vert\widetilde{\X}\vert^{-3}. \end{equation} Note that \eqref{eq:PBVVeta} also gives \begin{equation}\label{eq:dlPBetaV} \partial_\lambda\{\widetilde{\V},\eta\}=\partial_\lambda\partial_\xi{\bf V}\circ\Phi_t^{-1}-\frac{3}{2}tq\xi^{-1}\partial_\lambda\Big(\frac{\widetilde{\X}}{\vert\widetilde{\X}\vert^3}\Big), \end{equation} and thus \begin{equation} \vert\partial_\lambda\{\widetilde{\V},\eta\}\vert\lesssim q\ip{\kappa}^{-2}\xi^{-3} +tq\xi^{-2}\ip{\kappa}^{-1}\vert\widetilde{\X}\vert^{-2}. \end{equation} \end{proof} \subsection{Improvements in the outgoing direction} In addition, we can obtain an improvement in the ``bulk region'': \begin{equation}\label{eq:bulk} \mathcal{B}:=\{(\vartheta,{\bf a})\in\mathcal{P}_{\vartheta,{\bf a}}:\; a+\xi \le (q/\langle q\rangle) t^\frac{1}{4}/10,\quad \xi\abs{\eta}+\lambda\le 10^{-3}ta^2\}, \end{equation} which will be important later on.
\begin{lemma}\label{ImprovedBoundsOnXBulk}
In the bulk region, $\widetilde{\bf Z}:=\widetilde{\bf X}-t\widetilde{\bf V}$ and $\widetilde{\bf V}$ satisfy the improved bounds
\begin{equation}\label{PrecisedAsymptotBoundsZV}
\begin{split}
\vert\widetilde{\bf Z}\vert&\lesssim\frac{\xi^2}{q}\ln\langle t\rangle+\frac{\xi^2}{q}(1+\vert\eta\vert+\kappa),\qquad\vert\widetilde{\bf V}-{\bf a}\vert\lesssim\frac{\xi^2}{q}\frac{1}{\langle t\rangle},
\end{split}
\end{equation}
and in particular
\begin{equation}\label{ApproxXBulk}
\begin{split}
\vert\widetilde{\bf X}-t{\bf a}\vert&\lesssim \frac{\xi^2}{q}\ln\langle t\rangle+\frac{\xi^2}{q}(1+\vert\eta\vert+\kappa).
\end{split}
\end{equation}
\end{lemma}

\begin{proof}[Proof of Lemma \ref{ImprovedBoundsOnXBulk}]
Using \eqref{XVinSIC}, we directly express
\begin{equation*}
\begin{split}
\widetilde{\bf Z}&=\frac{\xi^2}{q}\left(\eta+\widetilde{\sigma}+\frac{1}{2}+\frac{ta^3/q}{2N}\right){\bf u}-\frac{\xi^2}{q}\kappa({\bf l}\times{\bf u})+ \frac{\xi^2}{q}\frac{D}{1+4\kappa^2}\cdot\left(1+\frac{ta^3/q}{N}\right)\cdot \frac{2{\bf R}}{q},\\
\widetilde{\bf V}-{\bf a}&=-\frac{q}{2N}\left({\bf u}+\frac{D}{1+4\kappa^2}\frac{2{\bf R}}{q}\right)=-\frac{q}{2N}\left(\left(1+\frac{D}{1+4\kappa^2}\right){\bf u}-2\frac{\kappa D}{1+4\kappa^2}{\bf l}\times{\bf u}\right),
\end{split}
\end{equation*}
and using Corollary \ref{CorBdsOnX}, we find that, in the bulk,
\begin{equation*}
\begin{split}
\vert\widetilde{\bf X}\vert\lesssim ta,\qquad \vert\widetilde{\sigma}\vert\lesssim\ln\langle t\rangle,\qquad N\gtrsim tq^2/\xi^3,\qquad D\lesssim 1.
\end{split}
\end{equation*}
Hence \eqref{PrecisedAsymptotBoundsZV} follows by inspection, and it directly implies \eqref{ApproxXBulk}.
\end{proof}

\begin{remark}\label{rem:XVexpansion}
In fact, using the equations above, one can push the asymptotic expansion further:
\begin{equation*}
\begin{split}
\widetilde{\bf Z}&=\frac{\xi^2}{q}(-\frac{1}{2}\ln(ta^3/q)+\eta-\ln 2+1-\frac{1}{4}\frac{\ln(ta^3/q)}{ta^3/q}){\bf u}-\frac{\xi\lambda}{q}({\bf l}\times{\bf u})+O(\frac{\xi^2}{q}\frac{\kappa^2+\vert\eta\vert+1}{ta^3/q}),
\end{split}
\end{equation*}
and in particular, we have the asymptotics of trajectories:
\begin{equation*}
\begin{split}
\widetilde{\bf X}&=\left(at-\frac{1}{2}\frac{q}{a^2}\ln(ta^3/q)\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)+O(1),\\
\widetilde{\bf V}&=a\left(1-\frac{q}{2ta^3}\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)+O(t^{-2}).
\end{split}
\end{equation*}
\end{remark}

Along the ``future'' part of a trajectory, i.e.\ after periapsis (roughly speaking), some important improvements for the $\lambda$ derivatives in these bounds are possible:

\begin{lemma}\label{lem:betterdl}
In the region $\{{\bf X}\cdot{\bf V}> -\xi\ip{\kappa}\}$ we have the bounds
\begin{equation}\label{eq:betterdl}
\begin{aligned}
\abs{X_3}+\abs{\partial_\lambda {\bf X}}&\lesssim \kappa\frac{\xi}{q}\frac{\xi^2}{q\abs{{\bf X}}}\lesssim\frac{\xi}{q}\frac{\kappa}{\ip{\kappa}},\qquad& \aabs{\partial_\lambda^2{\bf X}}&\lesssim\frac{1}{q}\frac{\xi^2}{q\abs{{\bf X}}},\\
\abs{V_3}+\abs{\partial_\lambda {\bf V}}&\lesssim \frac{\xi^2}{q}\frac{1}{\abs{{\bf X}}^2}\frac{\kappa}{\ip{\kappa}},\qquad&\aabs{\partial_\lambda^2\widetilde{\V}}&\lesssim \frac{\xi}{q}\langle\kappa\rangle^{-1}\frac{1}{\vert\widetilde{\X}\vert^2},\\
\aabs{\partial_\lambda\{{\bf X},\xi\}}&\lesssim \frac{\xi^5}{q^3}\frac{\kappa}{\langle\kappa\rangle}\vert{\bf X}\vert^{-2},\qquad&\abs{\partial_\lambda\{{\bf V},\xi\}}&\lesssim\frac{\xi^4}{q^2}\frac{\kappa}{\langle\kappa\rangle}\vert{\bf X}\vert^{-3}.
\end{aligned} \end{equation} Moreover, for $t\geq 0$ we also have that, in the region $\{\widetilde{\X}\cdot\widetilde{\V}> -\xi\ip{\kappa}\}$, \begin{equation}\label{eq:betterdlPBetaX} \begin{aligned} \aabs{\partial_\lambda\{\widetilde{\X},\eta\}}&\lesssim \frac{\xi}{q\vert\widetilde{\X}\vert}\left(\frac{\xi\langle\kappa\rangle}{q}+\frac{t}{\aabs{\widetilde{\X}}}\right),\qquad \aabs{\partial_\lambda\{\widetilde{\V},\eta\}}\lesssim \frac{1}{\aabs{\widetilde{\X}}^2}\left(\frac{\xi}{q}+\frac{t}{\aabs{\widetilde{\X}}}\right). \end{aligned} \end{equation} \end{lemma} \begin{remark} Along the ``past'' part of a trajectory, i.e.\ in the region $\{{\bf X}\cdot{\bf V}< -\xi\ip{\kappa}\}$, we can use the past asymptotic actions of Section \ref{ssec:past_AA} to obtain the same, improved bounds: this follows from the fact that the expressions of ${\bf X}$ and ${\bf V}$ are identical (see \eqref{XVinSIC(-)}), and satisfy the same Poisson bracket relations (see \eqref{eq:PB_SIC(-)}). \end{remark} \begin{proof}[Proof of Lemma \ref{lem:betterdl}] We treat separately the regions $\{\abs{{\bf X}\cdot{\bf V}}\leq \ip{\kappa}\xi\}$ and $\{{\bf X}\cdot{\bf V}> \ip{\kappa}\xi\}$. \begin{enumerate}[wide] \item The region $\{\abs{{\bf X}\cdot{\bf V}}\leq \ip{\kappa}\xi\}$. Since by \eqref{eq:virials} along a trajectory of \eqref{ODE} there holds \begin{equation} \frac{d}{dt}{\bf X}\cdot{\bf V}\geq \frac{1}{2}a^2, \end{equation} a trajectory passes through the region $\{\abs{{\bf X}\cdot{\bf V}}\leq \ip{\kappa}\xi\}$ in time at most $2\xi\ip{\kappa}\cdot 2a^{-2}=4q^{-2} \xi^3\ip{\kappa}$. Letting $t_p$ denote the time of periapsis (where by \eqref{Formula|x|} there holds $\abs{{\bf X}(t_p)}\geq \frac{\xi^2}{q}\ip{\kappa}$), we thus have by \eqref{eq:virials} that in this region \begin{equation} \frac{\xi^4}{q^2}\ip{\kappa}^2\leq \abs{{\bf X}}^2\leq \abs{{\bf X}(t_p)}^2+\int_{t_p}^t\vert\frac{d}{dt}\abs{{\bf X}(s)}^2\vert ds\lesssim\frac{\xi^4}{q^2}\ip{\kappa}^2. 
\end{equation} The claimed bounds then follow from those in Lemma \ref{lem:moreXVbds}. \item The region $\{{\bf X}\cdot{\bf V}> \ip{\kappa}\xi\}$. Here the claim follows from the bounds on $D,N$ and their derivatives in \eqref{BoundsDN}. (Here again, we write $D=D(\kappa,\eta+\sigma)$ and $N=N(\kappa,\eta+\sigma)$ if not specified otherwise). To begin, we observe that since $\eta+\sigma=\xi^{-1}{\bf X}\cdot{\bf V}>\ip{\kappa}>0$ we can use the bound from \eqref{BoundsDN}, \begin{equation} \mathfrak{1}_{y\geq 0}D(\kappa,y)=\mathfrak{1}_{y\geq 0}\frac{\kappa^2+1/4}{\sqrt{y^2+\kappa^2+1/4}+y}\leq \frac{\kappa^2+1/4}{\sqrt{y^2+\kappa^2+1/4}}, \end{equation} to deduce from \eqref{eq:XX1X3} the bounds \begin{equation}\label{eq:X12_bds} \frac{1}{2}\frac{\xi^2}{q}N\leq \abs{X_1}\leq 2\frac{\xi^2}{q}N,\qquad \frac{\xi}{q}\leq \abs{X_3}\leq 2\frac{\xi}{q}. \end{equation} Since moreover \begin{equation} D\lesssim\frac{\ip{\kappa}^2}{N}\lesssim\ip{\kappa}, \end{equation} we can strengthen \eqref{eq:dlD} and \eqref{eq:dlN} to \begin{equation}\label{eq:X12_bds3} \xi\abs{\partial_\lambda D}\lesssim \frac{\kappa}{N},\qquad \xi\abs{\partial_\lambda N}\lesssim \frac{\kappa}{\ip{\kappa}}. \end{equation} Furthermore, we note from \eqref{def:tau} that, after periapsis, $\rho\gtrsim q\xi^{-2}\abs{{\bf X}}$ and using \eqref{ImprovedBoundsOnSigma}, we see that \begin{equation} \abs{\partial_\kappa\sigma}\lesssim\kappa \rho^{-1}\lesssim \kappa\frac{\xi^2}{q\abs{{\bf X}}}. \end{equation} From \eqref{eq:dlX} and \eqref{eq:X12_bds}-\eqref{eq:X12_bds3} it thus follows that \begin{equation}\label{eq:evenbetterdlX} \abs{\partial_\lambda X_1}\lesssim \frac{\xi}{q}\kappa\frac{\xi^2}{q\abs{{\bf X}}},\qquad \abs{\partial_\lambda X_3}\lesssim \frac{1}{q}\frac{\kappa}{\ip{\kappa}^2}\frac{1}{N}\lesssim \frac{1}{q}\frac{\kappa}{\ip{\kappa}^2}\frac{\xi^2}{q\abs{{\bf X}}}, \end{equation} and this gives the first and second bound in \eqref{eq:betterdl}. 
From \eqref{eq:XX1X3} we have that
\begin{equation}
\abs{V_1}\lesssim\frac{q}{\xi},\qquad \abs{V_3}\lesssim \frac{q}{\xi^2}\frac{1}{N^2},
\end{equation}
and by \eqref{eq:dlV} we obtain
\begin{equation}\label{ImprovedVDerProof}
\vert\partial_\lambda{\bf V}\vert\leq\abs{\partial_\lambda V_1}+\lambda\abs{\partial_\lambda V_3}\lesssim\frac{q}{\xi^2}\frac{1}{N^2} \frac{\kappa}{\ip{\kappa}}\lesssim \frac{\xi^2}{q}\frac{1}{\abs{{\bf X}}^2}\frac{\kappa}{\ip{\kappa}},
\end{equation}
where we used that $\abs{{\bf X}}\lesssim \frac{\xi^2}{q}N$ by \eqref{eq:X12_bds}.
Inspecting \eqref{SecondDerDN}, \eqref{BoundsDN}-\eqref{SecondOrderDerDNFormulas} with the improved bound in \eqref{ImprovedBoundsOnSigma}, we obtain that $\abs{(\xi\partial_\lambda)^2 D}\lesssim N^{-1}$, and then from \eqref{eq:dl2X} that
\begin{equation}
\abs{\partial_\lambda^2 X_1}\lesssim \langle\kappa\rangle^{-1}\frac{1}{q}\frac{\xi^2}{q\abs{{\bf X}}},\qquad \abs{\partial_\lambda^2 X_3}\lesssim \frac{1}{q\xi}\frac{1}{\ip{\kappa}^2}\frac{1}{N}\lesssim \frac{1}{q\xi}\frac{1}{\ip{\kappa}^2}\frac{\xi^2}{q\abs{{\bf X}}},
\end{equation}
which gives $\abs{\partial_\lambda^2{\bf X}}\lesssim\frac{1}{q}\langle\kappa\rangle^{-1}\frac{\xi^2}{q\abs{{\bf X}}}$.
Similarly, differentiating \eqref{eq:xidxiX}, we obtain that
\begin{equation*}
\aabs{\partial_\lambda\partial_\xi{\bf X}}\lesssim \frac{\langle\kappa\rangle}{q}\frac{\xi^2}{q\vert{\bf X}\vert},
\end{equation*}
and using \eqref{eq:dlPBetaX} and \eqref{ImprovedVDerProof}, we obtain the first bound in \eqref{eq:betterdlPBetaX}.
Similarly, starting from \eqref{eq:dl2V}, we obtain the improved bound
\begin{equation*}
\begin{split}
\vert\partial_\lambda^2{\bf V}\vert&\lesssim\frac{q}{\xi^3}\langle\kappa\rangle^{-1}\frac{1}{N^2}\lesssim\frac{\xi}{q}\langle\kappa\rangle^{-1}\frac{1}{\vert{\bf X}\vert^2},
\end{split}
\end{equation*}
and, differentiating \eqref{DVDXi} and using \eqref{ImprovedVDerProof}, we find that
\begin{equation}\label{eq:betterdlxiV}
\abs{\partial_\xi\partial_\lambda {\bf V}}\lesssim \frac{\xi}{q}\frac{1}{\abs{{\bf X}}^2}\frac{\kappa}{\ip{\kappa}},
\end{equation}
and thus the second bound in \eqref{eq:betterdlPBetaX} follows with \eqref{eq:dlPBetaV}.
\end{enumerate}
\end{proof}

We can now record some bounds that will be important later, beginning with some simple Poisson brackets.

\begin{corollary}\label{CorPBa}
We have the general bounds
\begin{equation*}
\begin{split}
\vert\{\widetilde{\bf X},{\bf a}\}\vert&\lesssim\frac{q}{\xi^2}\vert\widetilde{\bf X}\vert,\qquad\vert\{\widetilde{\bf V},{\bf a}\}\vert\lesssim\frac{q^2}{\xi^3\langle\kappa\rangle^2},
\end{split}
\end{equation*}
while in the bulk, we have the stronger bounds
\begin{equation*}
\begin{split}
\mathfrak{1}_{\mathcal{B}}\vert\{\widetilde{\bf X},{\bf a}\}\vert&\lesssim1,\qquad\mathfrak{1}_{\mathcal{B}}\vert\{\widetilde{\bf V},{\bf a}\}\vert\lesssim\frac{\xi^3}{q^2}\langle t\rangle^{-2},\qquad\mathfrak{1}_{\mathcal{B}}\vert\{\widetilde{\bf V},\eta\}\vert\lesssim\frac{q}{\xi^2},
\end{split}
\end{equation*}
and the more precise bound
\begin{equation}\label{DerXiX}
\begin{split}
\mathfrak{1}_{\mathcal{B}}\vert\xi\partial_\xi\widetilde{\bf X}-2\widetilde{\bf X}\vert&\le \frac{\xi^2}{q}\kappa\le\vert\widetilde{\bf X}\vert.
\end{split} \end{equation} \end{corollary} \begin{proof}[Proof of Corollary \ref{CorPBa}] Using \eqref{PBXV2} and \eqref{PBSIC}, we compute that \begin{equation*} \begin{split} \{\widetilde{\bf X}^j,{\bf a}^k\}&=\{\widetilde{\bf X}^j,a\}{\bf u}^k+a\widetilde{X}_3\in^{jmn}\{{\bf L}^m,{\bf u}^k\}{\bf u}^n=a^{-1}\widetilde{\bf V}^j{\bf u}^k-a\widetilde{X}_3(\delta^{jk}-{\bf u}^j{\bf u}^k),\\ \{\widetilde{\bf V}^j,{\bf a}^k\}&=\{\widetilde{\bf V}^j,a\}{\bf u}^k+a\widetilde{V}_3\in^{jmn}\{{\bf L}^m,{\bf u}^k\}{\bf u}^n=\frac{\xi}{2}\frac{\widetilde{\bf X}^j}{\vert\widetilde{\bf X}\vert^3}{\bf u}^k-a\widetilde{V}_3(\delta^{jk}-{\bf u}^j{\bf u}^k). \end{split} \end{equation*} The first four bounds follow by inspection using \eqref{eq:XX1X3} and \eqref{eq:Xprelimbd}. The bound on $\{\widetilde{\bf V},\eta\}$ follows using \eqref{AddedDerXiV} and \eqref{eq:PBVVeta}. For \eqref{DerXiX}, we observe that by \eqref{eq:xidxiX} we have $\xi\partial_\xi\widetilde{\X}=2\widetilde{\X}-\lambda\partial_\lambda\widetilde{X}_1{\bf u}-(\widetilde{X}_3+\lambda\partial_\lambda\widetilde{X}_3){\bf L}\times{\bf u}$, which we can estimate in the bulk $\mathcal{B}$ with \eqref{eq:X12_bds} and \eqref{eq:evenbetterdlX} from the proof of Lemma \ref{lem:betterdl}. \end{proof} We will also need some derivative bounds. 
\begin{lemma}\label{ZPBFormula}
We have the general derivative bounds
\begin{equation}\label{GeneralBoundZBracket}
\begin{split}
\vert\{\widetilde{\bf X},\gamma\}\vert+\vert\{\widetilde{\bf Z},\gamma\}\vert&\lesssim_q \left[t(\xi^{-1}+\xi^{-3})+1+\xi^4+\vert\eta\vert^2+\lambda^2\right]\cdot\sum_{f\in\textnormal{SIC}}\mathfrak{w}_f\vert\{f,\gamma\}\vert,\\
\vert\{{\bf a},\gamma\}\vert+\vert\{\widetilde{\V},\gamma\}\vert&\lesssim_q\left[1+\xi^{-3}+(t+\xi^{-1})/\vert\widetilde{\X}\vert\right]\cdot\sum_{f\in\textnormal{SIC}}\mathfrak{w}_f\vert\{f,\gamma\}\vert,
\end{split}
\end{equation}
and in the bulk we have the more precise bounds
\begin{equation}\label{PrecisedBoundZBracketBulk}
\begin{split}
\mathfrak{1}_{\mathcal{B}}\vert \{\widetilde{\bf Z},\gamma\}\vert&\lesssim \ip{\xi}^2\ln\ip{t}\cdot\sum_{f\in\textnormal{SIC}}\mathfrak{w}_f\vert\{f,\gamma\}\vert,\\
\mathfrak{1}_{\mathcal{B}}\vert \{\widetilde{\bf V}-{\bf a},\gamma\}\vert&\lesssim\frac{\xi+\xi^{3}}{t}\cdot\sum_{f\in\textnormal{SIC}}\mathfrak{w}_f\vert\{f,\gamma\}\vert.
\end{split}
\end{equation}
\end{lemma}

\begin{proof}[Proof of Lemma \ref{ZPBFormula}]
For control of $\{\widetilde{\bf X},\gamma\}$, we use \eqref{eq:U_resol} to get
\begin{equation*}
\begin{split}
\{\widetilde{\X}^j,\gamma\}&=\frac{\xi^2\{\widetilde{\X}^j,\eta\}}{1+\xi}\mathfrak{w}_\xi\{\xi,\gamma\}-\frac{(1+\xi)\{\widetilde{\X}^j,\xi\}}{\xi^2}\mathfrak{w}_\eta\{\eta,\gamma\}+(1+\xi^{-1})\partial_\lambda\widetilde{\X}^j{\bf l}^k\cdot\mathfrak{w}_{\bf L}\{{\bf L}^k,\gamma\}+\widetilde{X}_1\mathfrak{w}_{\bf u}\{{\bf u}^j,\gamma\}\\
&\quad+\widetilde{X}_3\in^{jcd}({\bf L}^c\cdot\mathfrak{w}_{\bf u}\{{\bf u}^d,\gamma\}+{\bf u}^d(1+\xi^{-1})\cdot\mathfrak{w}_{{\bf L}}\{{\bf L}^c,\gamma\})
\end{split}
\end{equation*}
and using \eqref{PBX1} and \eqref{PBX1'} with \eqref{BoundsX}, this gives the bound on $\{\widetilde{\X},\gamma\}$ in \eqref{GeneralBoundZBracket}.
The general bounds for $\{\widetilde{\bf V},\gamma\}$ follow similarly, while the improved bounds follow from Corollary \ref{CorPBa}. In addition,
\begin{equation}\label{eq:veca_PB}
\begin{split}
\{{\bf a},\gamma\}&=-\frac{q}{\xi^2}\{\xi,\gamma\}{\bf u}+\frac{q}{\xi}\{{\bf u},\gamma\}.
\end{split}
\end{equation}
The estimates of $\{\widetilde{\bf Z},\gamma\}$ follow along similar lines. We start from
\begin{equation*}
\begin{split}
\widetilde{\bf Z}&=\widetilde{Z}_1{\bf u}+\widetilde{Z}_3{\bf L}\times{\bf u},\\
\widetilde{Z}_1&=\frac{\xi^2}{q}\left(\eta+\widetilde{\sigma}+\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right)\right),\qquad \widetilde{Z}_3=-\frac{2\xi}{q}\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right),
\end{split}
\end{equation*}
where $\widetilde{D}=D(\kappa,\eta+tq^2/\xi^3)$, and similarly for $\widetilde{\sigma}$ and $\widetilde{N}$. We then compute that
\begin{equation*}
\begin{split}
\{\widetilde{Z}_1{\bf u},\gamma\}&=\frac{\{\xi,\gamma\}}{\xi}\cdot{\bf u}\cdot\Big[2\widetilde{Z}_1-\frac{\xi^2}{q}\Big(\kappa\partial_\kappa\widetilde{\sigma}+\frac{3tq^2}{\xi^3}\partial_\eta\widetilde{\sigma}+\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\frac{tq^2}{\xi^3\widetilde{N}}(3-\frac{\kappa\partial_\kappa\widetilde{N}}{\widetilde{N}}-\frac{3tq^2}{\xi^3}\frac{\partial_\eta\widetilde{N}}{\widetilde{N}})\\
&\quad+\frac{1}{1+4\kappa^2}\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right)\left(\kappa\partial_\kappa\widetilde{D}+\frac{3tq^2}{\xi^3}\partial_\eta\widetilde{D}-\frac{8\kappa^2}{1+4\kappa^2}\widetilde{D}\right)\Big)\Big]\\
&\quad+\{\eta,\gamma\}{\bf u}\frac{\xi^2}{q}\left(1+\partial_\eta\widetilde{\sigma}-\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\partial_\eta\widetilde{N}+\frac{1}{1+4\kappa^2}\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right)\partial_\eta\widetilde{D}\right)\\
&\quad+\frac{\{\lambda,\gamma\}}{\xi}{\bf u}\frac{\xi^2}{q}\left(\partial_\kappa\widetilde{\sigma}-\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\frac{tq^2}{\xi^3\widetilde{N}}\frac{\partial_\kappa\widetilde{N}}{\widetilde{N}}+\frac{1}{1+4\kappa^2}\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right)(\partial_\kappa\widetilde{D}-\frac{8\kappa}{1+4\kappa^2}\widetilde{D})\right)\\
&\quad+\widetilde{Z}_1\{{\bf u},\gamma\},
\end{split}
\end{equation*}
and similarly
\begin{equation*}
\begin{split}
\{\widetilde{Z}_3,\gamma\}&=\frac{\{\xi,\gamma\}}{\xi}\cdot\Big[-\widetilde{Z}_3+\frac{2\xi}{q}\frac{1}{1+4\kappa^2}\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right)\left(\kappa\partial_\kappa\widetilde{D}-\frac{8\kappa^2}{1+4\kappa^2}\widetilde{D}+\frac{3tq^2}{\xi^3}\partial_\eta\widetilde{D}\right)\\
&\quad-\frac{2\xi}{q}\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\frac{tq^2}{\xi^3\widetilde{N}}(-3+\frac{3tq^2}{\xi^3}\frac{\partial_\eta\widetilde{N}}{\widetilde{N}}+\frac{\kappa\partial_\kappa\widetilde{N}}{\widetilde{N}})\Big]\\
&\quad+\{\eta,\gamma\}\cdot\left(-\frac{2\xi}{q}\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right)\partial_\eta\widetilde{D}+\frac{2\xi}{q}\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\frac{tq^2}{\xi^3\widetilde{N}}\frac{\partial_\eta\widetilde{N}}{\widetilde{N}}\right)\\
&\quad+\frac{\{\lambda,\gamma\}}{\xi}\cdot\frac{2\xi}{q}\Big(\frac{1}{1+4\kappa^2}\left(1+\frac{tq^2}{\xi^3\widetilde{N}}\right)\left(-\kappa\partial_\kappa\widetilde{D}+\frac{8\kappa}{1+4\kappa^2}\widetilde{D}\right)+\left(\frac{1}{2}+\frac{\widetilde{D}}{1+4\kappa^2}\right)\frac{tq^2}{\xi^3\widetilde{N}}\frac{\partial_\kappa\widetilde{N}}{\widetilde{N}}\Big).
\end{split} \end{equation*} In general, we have that \begin{equation}\label{GeneralBounds} \begin{split} \vert\widetilde{Z}_1\vert\lesssim\frac{\xi^2}{q}(1+\vert\eta\vert)+\frac{tq}{\xi},\qquad\vert\widetilde{Z}_3\vert&\lesssim\frac{\xi}{q}(1+\vert\eta\vert)+\frac{tq}{\xi^2},\qquad\vert\kappa\partial_\kappa\widetilde{\sigma}\vert\lesssim 1,\qquad\vert\partial_\eta\widetilde{\sigma}\vert\lesssim\langle\kappa\rangle^{-1}, \end{split} \end{equation} and we obtain the bound \begin{equation*} \begin{split} \vert\{\widetilde{\bf Z},\gamma\}\vert&\lesssim \frac{1}{1+\xi}\left(\frac{\xi^3}{q}(\vert\eta\vert+1)+tq\right)\cdot\mathfrak{w}_\xi\vert\{\xi,\gamma\}\vert+\frac{1+\xi}{q}\left(1+\frac{\vert\eta\vert+tq^2\xi^{-3}}{1+\kappa^2}\right)\cdot\mathfrak{w}_\eta\vert\{\eta,\gamma\}\vert\\ &\quad+\left(t\frac{q}{\xi}+\frac{\xi^2}{q}(\vert\eta\vert+\kappa)\right)\cdot\mathfrak{w}_{\bf u}\vert\{{\bf u},\gamma\}\vert+\frac{1+\xi}{q}\left(1+\frac{\vert\eta\vert+tq^2\xi^{-3}}{1+\kappa^2}\right)\cdot\mathfrak{w}_{\bf L}\vert\{{\bf L},\gamma\}\vert, \end{split} \end{equation*} from which we deduce \eqref{GeneralBoundZBracket}. 
In the bulk $\mathcal{B}$, we have that
\begin{equation*}
\begin{split}
\vert\widetilde{Z}_1\vert\lesssim\frac{\xi^2}{q}\ln (2+ t),\qquad\vert\widetilde{Z}_3\vert\lesssim \xi/q,\qquad \vert\partial_\kappa\widetilde{\sigma}\vert\lesssim\frac{\xi^6\kappa}{t^2q^4},\qquad \vert\partial_\eta\widetilde{\sigma}\vert\lesssim\frac{\xi^3}{tq^2},\\
\frac{tq^2}{\xi^3\widetilde{N}}\lesssim 1,\qquad\widetilde{D}\lesssim\frac{\xi^3(1+\kappa^2)}{tq^2},\qquad\vert\partial_\eta\widetilde{D}\vert\lesssim\frac{\xi^6(1+\kappa^2)}{t^2q^4},\qquad\vert\partial_\kappa\widetilde{N}\vert=\vert \partial_\kappa\widetilde{D}\vert\lesssim\frac{\xi^3\kappa}{tq^2},\qquad\vert\partial_\eta\widetilde{N}\vert\lesssim 1,
\end{split}
\end{equation*}
and this gives that
\begin{equation*}
\begin{split}
\mathfrak{1}_{\mathcal{B}}\vert \{\widetilde{\bf Z},\gamma\}\vert&\lesssim \frac{\xi^3}{q(1+\xi)}\ln\ip{t}\cdot\mathfrak{w}_\xi\vert\{\xi,\gamma\}\vert+\frac{1+\xi}{q}\cdot\mathfrak{w}_\eta\vert\{\eta,\gamma\}\vert+\frac{\xi^2}{q}\ln\langle t\rangle\cdot\mathfrak{w}_{\bf u}\vert\{{\bf u},\gamma\}\vert+\mathfrak{w}_{\bf L}\vert\{{\bf L},\gamma\}\vert,
\end{split}
\end{equation*}
from which \eqref{PrecisedBoundZBracketBulk} follows.
Similarly, we compute that
\begin{equation*}
\begin{split}
\{\widetilde{\V}^j-{\bf a}^j,\gamma\}&=-\frac{\widetilde{\V}^j-{\bf a}^j}{\widetilde{N}}\{\widetilde{N},\gamma\}+(\widetilde{V}_1-1)\{{\bf u}^j,\gamma\}+\widetilde{V}_3\in^{jkl}\{{\bf L}^k,\gamma\}{\bf u}^l+\widetilde{V}_3\in^{jkl}{\bf L}^k\{{\bf u}^l,\gamma\}\\
&\quad-\frac{q}{2\widetilde{N}}\frac{1}{1+4\kappa^2}\left[\{\widetilde{D},\gamma\}-\frac{8\kappa \widetilde{D}}{1+4\kappa^2}\{\kappa,\gamma\}+\frac{2\widetilde{D}}{\xi^2}\{\xi,\gamma\}\right]({\bf u}^j-2\kappa({\bf l}\times{\bf u})^j).
\end{split}
\end{equation*}
Since in the bulk we also have
\begin{equation*}
\begin{split}
\vert\widetilde{\V}-{\bf a}\vert+\vert\widetilde{V}_1\vert\lesssim\frac{\xi^3}{tq},\qquad\vert\widetilde{V}_3\vert\lesssim \frac{\xi^6}{t^2q^3}\kappa,
\end{split}
\end{equation*}
we deduce that
\begin{equation*}
\begin{split}
\vert\{\widetilde{\V}-{\bf a},\gamma\}\vert&\lesssim_q \frac{\xi^3}{t}\mathfrak{w}_\xi\vert\{\xi,\gamma\}\vert+\frac{\xi^4\langle\xi\rangle}{t^2}\mathfrak{w}_\eta\vert\{\eta,\gamma\}\vert+\frac{\xi^3}{t}\mathfrak{w}_{\bf u}\vert\{{\bf u},\gamma\}\vert+\frac{\langle\xi\rangle\xi}{t}\mathfrak{w}_{\bf L}\vert\{{\bf L},\gamma\}\vert.
\end{split}
\end{equation*}
\end{proof}

\begin{remark}
In view of the terms arising in the nonlinearity, for the sake of completeness we also compute that
\begin{equation}
\{\xi,V_3\}=\lambda^{-1}\{\xi,{\bf V}\}\cdot({\bf l}\times{\bf u})=\lambda^{-1}\frac{\xi^3}{2q}\frac{{\bf X}\cdot({\bf l}\times{\bf u})}{\abs{{\bf X}}^3}=\frac{\xi^3}{2q}\frac{X_3}{\abs{{\bf X}}^3}.
\end{equation}
In the region $\{{\bf X}\cdot{\bf V}>-\ip{\kappa}\xi\}$, as in the above lemma we obtain with \eqref{eq:X12_bds} that
\begin{equation}
\aabs{\{\xi,\widetilde{V}_3\}}\lesssim \frac{\xi^2}{q}\frac{1}{\aabs{\widetilde{\X}}^2}\cdot\frac{\xi^2}{q\aabs{\widetilde{\X}}},\qquad \abs{\{\eta,V_3\}}\lesssim \frac{\xi}{q\abs{{\bf X}}^{2}},
\end{equation}
and thus also
\begin{equation}\label{eq:PBxiV3}
\aabs{\{\eta,\widetilde{V}_3\}}\lesssim\aabs{\{\eta,V_3\} \circ\Phi_t^{-1}}+\aabs{3t\xi^{-4}q^2\{\xi,\widetilde{V}_3\}}\lesssim \frac{\xi}{q\aabs{\widetilde{\X}}^{2}}+\frac{t}{\aabs{\widetilde{\X}}^3}.
\end{equation}
Similarly,
\begin{equation}\label{eq:PBxiX3}
\aabs{\{\xi,X_3\}}=\aabs{\frac{\xi^3}{q^2}V_3}\lesssim \frac{\xi}{q}\frac{1}{N^2}\lesssim \frac{\xi^5}{q^3}\frac{1}{\abs{{\bf X}}^2},\qquad\vert\{\eta,X_3\}\vert\lesssim\frac{1}{q},\qquad \vert\{\eta,\widetilde{X}_3\}\vert\lesssim\frac{1}{q}(1+\frac{t\xi}{\abs{{\bf X}}^2}).
\end{equation} \end{remark} \subsection{Transition maps}\label{ssec:transitions} Since we propagate derivative control through Poisson brackets with the super-integrable coordinates, we will need to understand the relation between Poisson brackets in past versus future asymptotic actions, and compare the close and far formulations. \subsubsection*{Transition from past to future asymptotic actions} By construction, the past asymptotic actions are anchored at the periapsis in \eqref{eq:def_SIC(-)} in such a way that the Poisson bracket relations \eqref{PBSIC} remain almost unchanged: Using that \begin{equation*} \begin{split} \{\eta,\kappa\}=\frac{\kappa}{\xi},\qquad\{\kappa,{\bf u}\}=-\frac{1}{\xi}({\bf l}\times{\bf u}),\qquad\{\kappa,{\bf l}\times{\bf u}\}=\frac{1}{\xi}{\bf u}, \end{split} \end{equation*} we can verify with \eqref{eq:def_SIC(-)} that \begin{equation}\label{eq:PB_SIC(-)} \begin{split} \{\xi^{(-)},\eta^{(-)}\}=1,\qquad 0&=\{\xi^{(-)},\lambda^{(-)}\}=\{\eta^{(-)},\lambda^{(-)}\},\\ \{\xi^{(-)},{\bf u}^{(-)}\}=\{\eta^{(-)},{\bf u}^{(-)}\}=0&=\{\xi^{(-)},{\bf L}^{(-)}\}=\{\eta^{(-)},{\bf L}^{(-)}\}=\{\lambda^{(-)},{\bf L}^{(-)}\},\\ \{\lambda^{(-)},{\bf u}^{(-)}\}&={\bf l}^{(-)}\times{\bf u}^{(-)},\quad\{{\bf u}^{(-),j},{\bf u}^{(-),k}\}=0,\\ \{{\bf L}^{(-),j},{\bf u}^{(-),k}\}&=\in^{jka}{\bf u}^{(-),a},\quad \{{\bf L}^{(-),j},{\bf L}^{(-),k}\}=\in^{jka}{\bf L}^{(-),a}. 
\end{split} \end{equation} The relation between Poisson brackets in past and future asymptotic actions is now easily established: for a scalar function $\zeta$ we have \begin{equation}\label{eq:inout_PBtrans} \begin{split} \{\xi^{(-)},\zeta\}&=-\{\xi,\zeta\},\qquad\{\eta^{(-)},\zeta\}=-\frac{1}{\xi}\frac{4\kappa^2}{4\kappa^2+1}\{\xi,\zeta\}-\{\eta,\zeta\}+\frac{1}{\xi}\frac{4\kappa}{4\kappa^2+1}\{\lambda,\zeta\},\\ \{\lambda^{(-)},\zeta\}&=\{\lambda,\zeta\},\qquad \{{\bf L}^{(-)},\zeta\}=\{{\bf L},\zeta\},\\ \{{\bf u}^{(-)},\zeta\}&=\frac{1}{\xi}\left[-\frac{16\kappa^2}{(4\kappa^2+1)^2}{\bf u}+\frac{4\kappa(1-4\kappa^2)}{(4\kappa^2+1)^2}{\bf l}\times{\bf u}\right]\{\xi,\zeta\}+\frac{1}{\xi}\left[\frac{16\kappa^2}{(4\kappa^2+1)^2}{\bf u}+\frac{32\kappa^2}{(4\kappa^2+1)^2}{\bf l}\times{\bf u}\right]\{\lambda,\zeta\}\\ &\quad+\frac{1-4\kappa^2}{4\kappa^2+1}\{{\bf u},\zeta\}-\frac{1}{\xi}\frac{4}{4\kappa^2+1}\{{\bf L}\times{\bf u},\zeta\}. \end{split} \end{equation} In particular, we highlight that the transition from past to future asymptotic actions (at e.g.\ the periapsis) can be carried out along a given trajectory. \subsubsection*{Transition between far and close formulations} With the notation of Section \ref{ssec:farclose}, we recall that the transition between the close and far formulations is given by the canonical diffeomorphism $\mathcal{M}_t$ of \eqref{eq:defM_t}. In particular, for Poisson brackets with a scalar function $\mathfrak{w}$ on phase space we have \begin{equation}\label{eq:PBrel'} \{\mathfrak{w},\gamma'\}=\{\mathfrak{w},\gamma\circ\mathcal{M}_t\}=\{\mathfrak{w}',\gamma\}\circ\mathcal{M}_t,\qquad \mathfrak{w}':=\mathfrak{w}\circ\mathcal{M}_t^{-1}. 
\end{equation} The derivative control of Section \ref{sec:derivs_prop} will be given in terms of a collection of Poisson brackets of the unknown $\gamma$ with the functions $f\in\{\xi,\eta,{\bf L},{\bf u}\}$, weighted by $\mathfrak{w}_f\in\mathbb{R}$, collected as \begin{equation} \mathcal{D}(\vartheta,{\bf a};t):=\sum_{f\in\{\xi,\eta,{\bf L},{\bf u}\}}\mathfrak{w}_f\aabs{\{f,\gamma\}}. \end{equation} The corresponding transition maps are collected below in Lemma \ref{lem:trans_short} resp.\ Lemma \ref{lem:trans_full}. \section{Bounds on the electric field}\label{sec:efield} From here on, we stop tracking the homogeneity of the quantities since the magnitude of electric quantities will be compared to powers of $\sqrt{t^2+\vert{\bf y}\vert^2}$ which is not homogeneous. \subsection{Electric field, localized electric field and effective field} Given a density $\gamma$, we define the electric field and its derivative as \begin{equation}\label{DefEF} \begin{split} \mathcal{E}_j({\bf y},t):=\frac{1}{4\pi}\iint {\bf U}_j({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a} ))\cdot \gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a} ,\\ \mathcal{F}_{jk}({\bf y},t):=\frac{1}{4\pi}\iint\mathcal{M}_{jk}({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a} ))\cdot\gamma^2(\vartheta,{\bf a} ,t)d\vartheta d{\bf a} , \end{split} \end{equation} where \begin{equation*} \begin{split} {\bf U}_j({\bf y}):=\partial_{{\bf y}^j}\frac{-1}{\vert {\bf y}\vert}=\frac{{\bf y}^j}{\vert {\bf y}\vert^3},\qquad \mathcal{M}_{jk}({\bf y}):=\partial_{y^j}\partial_{y^k}\frac{-1}{\vert {\bf y}\vert}=\frac{1}{\vert {\bf y}\vert^3}\left(\delta_{jk}-3\frac{{\bf y}^j{\bf y}^k}{\vert{\bf y}\vert^2}\right). \end{split} \end{equation*} Here we show how to bound the electric field and its derivative using moments on the unknown $\gamma$. 
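For later reference, we record the elementary identity behind the dyadic decomposition of the Coulomb kernel used below: if $\varphi$ is radial and normalized so that $\int_{\mathbb{R}^3}\varphi({\bf x})\vert{\bf x}\vert^{-2}d{\bf x}=1$, then the substitution $s=\vert{\bf x}\vert/R$ gives
\begin{equation*}
\int_{R=0}^\infty\varphi(R^{-1}{\bf x})\frac{dR}{R^2}=\frac{1}{\vert{\bf x}\vert}\int_{s=0}^\infty\varphi(s\hat{\bf x})ds=\frac{1}{4\pi\vert{\bf x}\vert}\int_{\mathbb{R}^3}\frac{\varphi({\bf y})}{\vert{\bf y}\vert^2}d{\bf y}=\frac{1}{4\pi\vert{\bf x}\vert},
\end{equation*}
where $\hat{\bf x}={\bf x}/\vert{\bf x}\vert$ and the second equality follows by writing the normalization integral in polar coordinates.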
Towards this, we define the \emph{bulk zone} as in \eqref{eq:bulk} \begin{equation*} \mathcal{B}:=\{(\vartheta,{\bf a})\in\mathcal{P}_{\vartheta,{\bf a}}:\; a+\xi \le (q/\langle q\rangle) t^\frac{1}{4}/10,\quad \xi\abs{\eta}+\lambda\le 10^{-3}ta^2\}, \end{equation*} and we notice from Corollary \ref{CorBdsOnX} resp.\ Lemma \ref{ImprovedBoundsOnXBulk} that if $(\vartheta,{\bf a})\in\mathcal{B}$, then $(\vartheta+t{\bf a},{\bf a})\in\Omega_+$ for $t>0$, and we have the simple bounds \begin{equation}\label{eq:bulk-bds} 2ta^3/q\geq\eta(\vartheta+t{\bf a},{\bf a})\ge \frac{1}{2}ta^3/q,\qquad \rho(\vartheta+t{\bf a},{\bf a})\ge \frac{1}{8}ta^3/q,\qquad 10^{-3}ta\le \vert\widetilde{\bf X}\vert\le 10^3ta. \end{equation} In the complement $\mathcal{B}^c$ we can trade moments for decay in the sense that for all $k\geq 0$ \begin{equation}\label{eq:nonbulk-bds} \mathfrak{1}_{\mathcal{B}^c}\lesssim_k t^{-k}\left[\xi^2\abs{\eta}^2+\lambda^2+a^4+\xi^4\right]^k\mathfrak{1}_{\mathcal{B}^c}. \end{equation} In order to obtain refined bounds on the electric field, we decompose it onto scales. Let $\varphi\in C^\infty_c(\mathbb{R}^3)$ be a standard, radial cutoff function with $\text{supp}(\varphi)\subset\{1/2\le \vert {\bf x}\vert\le 2\}$ and $\int_{\mathbb{R}^3}\varphi({\bf x})/\vert{\bf x}\vert^2d{\bf x}=1$. Note that since \begin{equation}\label{IntroducingVarphi} \frac{1}{4\pi\abs{{\bf x}}}=\int_{R=0}^\infty\varphi_R({\bf x})\frac{dR}{R^2},\qquad\varphi_R({\bf x})=\varphi(R^{-1}{\bf x}), \end{equation} we can decompose \begin{equation}\label{eq:defER} \mathcal{E}_j({\bf y},t)=\int_{R=0}^\infty \mathcal{E}_R({\bf y},t)\frac{dR}{R},\qquad \mathcal{E}_{j,R}({\bf y},t)=-\frac{1}{R^2}\iint\partial_j\varphi_R({\bf y}-\widetilde{\X}(\vartheta,{\bf a}))\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}. 
\end{equation}
We also introduce the effective electric field
\begin{equation*}
\begin{split}
\mathcal{E}^{eff}_j({\bf y},t)=\int_{R=0}^\infty\mathcal{E}^{eff}_{j,R}({\bf y},t)\frac{dR}{R},\qquad \mathcal{E}^{eff}_{j,R}({\bf y},t)&=-R^{-2}\iint \partial_j\varphi_R({\bf y}-t{\bf a})\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}.
\end{split}
\end{equation*}
We proceed similarly with the derivative of the electric field:
\begin{equation*}
\begin{split}
\mathcal{F}_{jk}({\bf y},t)&=\int_{R=0}^\infty \mathcal{F}_{jk,R}({\bf y},t)\frac{dR}{R},\qquad \mathcal{F}_{jk,R}({\bf y},t):=-R^{-3}\iint\partial_j\partial_k\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))\cdot\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a},\\
\mathcal{F}_{jk}^{eff}({\bf y},t)&=\int_{R=0}^\infty \mathcal{F}^{eff}_{jk,R}({\bf y},t)\frac{dR}{R},\qquad \mathcal{F}^{eff}_{jk,R}({\bf y},t):=-R^{-3}\iint\partial_j\partial_k\varphi_R({\bf y}-t{\bf a})\cdot\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}.
\end{split}
\end{equation*}
When $R$ is too small, volume bounds are not enough to overcome the singularity at $R=0$, and we rewrite, using \eqref{ZeroIntegralPB} and the constant of motion ${\bf z}$ of the asymptotic equation:
\begin{equation}\label{FjkAndPB2}
\begin{split}
\mathcal{F}_{jk,R}({\bf y},t)&=R^{-2}\iint \{\partial_j\varphi_R({\bf y}-{\bf x}),{\bf v}^k\}\cdot\mu^2({\bf x},{\bf v},t)d{\bf x} d{\bf v}\\
&=-R^{-2}t^{-1}\iint \{\partial_j\varphi_R({\bf y}-{\bf x}),{\bf x}^k-t{\bf v}^k\}\cdot\mu^2({\bf x},{\bf v},t)d{\bf x} d{\bf v}\\
&=-R^{-2}t^{-1}\iint\{\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a})),\widetilde{\bf Z}^k\}\cdot\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a} \\
&=-2R^{-2}t^{-1}\iint\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))\{\widetilde{\bf Z}^k,\gamma\}\cdot\gamma(\vartheta,{\bf a},t)d\vartheta d{\bf a},
\end{split}
\end{equation}
which has the same structure as $\mathcal{E}_j$, except that we have replaced one copy of
$\gamma$ with a derivative. \subsection{Approximating the electric field by the effective electric field} Assuming only bounds on the moments, we can obtain good bounds on the electric field, and assuming control on Poisson brackets leads to control on the derivatives of the electric field. \begin{proposition}\label{PropControlEF} The electric field is well approximated by the effective field: \begin{equation}\label{ControlEMom} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]\cdot\vert \mathcal{E}({\bf y},t)-\mathcal{E}^{eff}({\bf y},t)\vert&\lesssim t^{-\frac{1}{5}}N_1,\\ \left[t^2+\vert{\bf y}\vert^2\right]^\frac{3}{2}\cdot\vert \mathcal{F}({\bf y},t)-\mathcal{F}^{eff}({\bf y},t)\vert&\lesssim t^{-\frac{1}{5}}N_2, \end{split} \end{equation} with \begin{equation*} \begin{split} N_1&:=\norm{\left[\xi^2+a^3+\vert\eta\vert^2+\lambda\right]\gamma}_{L^2_{\vartheta,{\bf a}}}^2+\norm{\left[\xi^4+a^{10}+\vert\eta\vert^{10}+\lambda^5\right]\gamma}_{L^\infty_{\vartheta,{\bf a}}}^2,\\ N_2&:=N_1+\norm{\left[\xi^2+a^2+\vert\eta\vert^{2}+\lambda^2\right]^{12}\gamma}_{L^\infty_{\vartheta,{\bf a}}}^2+\sum_{f\in \textnormal{SIC}}\Vert \mathfrak{w}_f\{f,\gamma\}\Vert_{L^\infty}^2. \end{split} \end{equation*} \end{proposition} \begin{proof} We first observe that $\mathcal{E}_R$ satisfies the simple bound \begin{equation}\label{eq:ERsimplebd} R^2 \abs{\mathcal{E}_R({\bf y},t)}+R^2 \aabs{\mathcal{E}^{eff}_R({\bf y},t)}+R^3\vert\mathcal{F}_R({\bf y},t)\vert+R^3\vert\mathcal{F}^{eff}_R({\bf y},t)\vert\lesssim \norm{\gamma}_{L^2_{\vartheta,{\bf a}}}^2. \end{equation} If $t\lesssim_q1$, the bounds follow by simple estimates on the convolution kernel. In what follows, we assume that $t\gg_q1$. {\bf A}) The electric field. We prove the first bound in \eqref{ControlEMom}. {\bf (A1) Large scales}: $R\ge (t^2+\vert{\bf y}\vert^2)^\frac{3}{8}/100$. 
We first compare at each scale \begin{equation*} \begin{split} \mathcal{E}_{j,R}({\bf y},t)-\mathcal{E}^{eff}_{j,R}({\bf y},t)&= R^{-2}\iint_{\mathcal{B}} \left[\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))-\partial_j\varphi_R({\bf y}-t{\bf a})\right]\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\\ &\quad+R^{-2}\iint_{\mathcal{B}^c} \left[\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))-\partial_j\varphi_R({\bf y}-t{\bf a})\right]\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}=I_{\mathcal{B}}+I_{\mathcal{B}^c}. \end{split} \end{equation*} In the bulk, we can use \eqref{ApproxXBulk} to get \begin{equation*} \begin{split} \vert I_{\mathcal{B}}\vert&\lesssim R^{-3}\iint_{\mathcal{B}} \vert\widetilde{\bf X}(\vartheta,{\bf a})-t{\bf a}\vert \gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\lesssim R^{-3}\langle \ln t\rangle\iint \left[1+\xi^4+\eta^4+\lambda^2\right]\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}. \end{split} \end{equation*} Outside the bulk, we estimate each term separately $\vert I_{\mathcal{B}^c}\vert\le \vert I^1_{\mathcal{B}^c}\vert+\vert I^2_{\mathcal{B}^c}\vert$. If $\vert {\bf y}\vert\le 10t$, we can use \eqref{eq:nonbulk-bds} to deduce that \begin{equation} \begin{aligned} \vert I^1_{\mathcal{B}^c}\vert+ \vert I^2_{\mathcal{B}^c}\vert&\lesssim R^{-2}\iint t^{-k}\left[\xi^{2k}\abs{\eta}^{2k}+\lambda^{2k}+a^{4k}+\xi^{4k}\right]\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\\ &\lesssim (t^2+\vert{\bf y}\vert^2)^{-k/2}R^{-2}\Vert \left[\xi^{2k}+a^{2k}+\vert\eta\vert^{2k}+\lambda^k\right]\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2. 
\end{aligned} \end{equation} If $\vert{\bf y}\vert\ge 10t$ and $R\ge \vert{\bf y}\vert/4$, the same bound gives \begin{equation} \begin{aligned} \vert I^1_{\mathcal{B}^c}\vert+ \vert I^2_{\mathcal{B}^c}\vert&\lesssim R^{-2}\langle t\rangle^{-k}\mathfrak{1}_{\{R\ge (t^2+\vert{\bf y}\vert^2)^\frac{1}{2}/10\}}\cdot\Vert \left[\xi^{2k}+a^{2k}+\vert\eta\vert^{2k}+\lambda^k\right]\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2. \end{aligned} \end{equation} Otherwise, we see that, on the support of $\partial_j\varphi_R({\bf y}-t{\bf a})$, we must have $a\ge \vert {\bf y}\vert/(2t)$, and we can modify the bound above to bound the effective field \begin{equation*} \begin{aligned} \vert I_{\mathcal{B}^c}^2\vert&:= \vert \iint_{\mathcal{B}^c}\partial_j\varphi_R({\bf y}-t{\bf a})\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\vert\\ &\lesssim \vert{\bf y}\vert^{-k}R^{-2}\iint \xi^{-k}\left[\xi^{2k}\abs{\eta}^{2k}+\lambda^{2k}+a^{4k}+\xi^{4k}\right]\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\\ &\lesssim \left(t^2+\vert y\vert^2\right)^{-k/2}R^{-2}\Vert \left[\xi^{2k}+a^{3k}+\vert\eta\vert^{2k}+\lambda^{2k}\right]\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2.
\end{aligned} \end{equation*} On the support of $\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))$, we have that $\vert\widetilde{\bf X}(\vartheta,{\bf a})\vert\ge\vert{\bf y}\vert/2$ and therefore \begin{equation}\label{UpperBoundytEF} \begin{split} t^2+\vert{\bf y}\vert^2&\le4(t^2+\vert\widetilde{\bf X}\vert^2)\lesssim t^2(1+a^2)+\vert\eta\vert^4+\frac{\xi^8}{q^4}+\frac{\lambda^4}{q^2}, \end{split} \end{equation} and we can use a variation of the previous argument: \begin{equation*} \begin{aligned} \vert I_{\mathcal{B}^c}^1\vert&:= \vert \iint_{\mathcal{B}^c}\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\vert\\ &\lesssim \left(t^2+\vert y\vert^2\right)^{-k/2}R^{-2}\Vert \left[\xi^{2k}+a^{2k}+\vert\eta\vert^{2k}+\lambda^{k}\right]\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2. \end{aligned} \end{equation*} Taking $k=1$ and integrating over $R\ge(t^2+\vert{\bf y}\vert^2)^\frac{3}{8}$, we obtain an acceptable contribution to the first line in \eqref{ControlEMom} since \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]\int_{R=(t^2+\vert{\bf y}\vert^2)^\frac{3}{8}}^\infty\left[\left[t^2+\vert{\bf y}\vert^2\right]^{-\frac{1}{2}}+\langle t\rangle^{-1}\mathfrak{1}_{\{R\ge\sqrt{t^2+\vert{\bf y}\vert^2}\}}+\frac{\langle\ln t\rangle}{R}\right]\frac{dR}{R^3}\lesssim \langle t\rangle^{-\frac{1}{5}}. \end{split} \end{equation*} {\bf (A2) Small scales}: $R\le (t^2+\vert {\bf y}\vert^2)^\frac{3}{8}/100$, contributions of the electric field. In this case, we again observe that \eqref{UpperBoundytEF} continues to hold on the support of $\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))$ (this is clear if $\vert{\bf y}\vert\le 10t$, while if $\vert {\bf y}\vert\ge 10t$, then the bound on $R$ forces $\widetilde{\bf X}$ to have a similar size).
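Let us make the parenthetical claim quantitative: if $\vert{\bf y}\vert\ge 10t$ then, since $\varphi_R$ is localized at scale $R$, on the support of $\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))$ we have
\begin{equation*}
\vert{\bf y}-\widetilde{\bf X}\vert\lesssim R\le \frac{1}{100}(t^2+\vert{\bf y}\vert^2)^{\frac{3}{8}}\lesssim\vert{\bf y}\vert^{\frac{3}{4}}\le\frac{1}{2}\vert{\bf y}\vert,
\end{equation*}
using $t\gg_q1$ in the last step, so that $\vert\widetilde{\bf X}\vert\ge\vert{\bf y}\vert/2$ and \eqref{UpperBoundytEF} applies as in {\bf (A1)}.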
Using also that $\vert\widetilde{\bf V}\vert\le a$, we can bound the contribution outside the bulk \begin{equation}\label{MainVolumeGainBC} \begin{split} &(t^2+\vert{\bf y}\vert^2)^{\frac{3}{2}}\vert\mathcal{E}_{R,\mathcal{B}^c}\vert\\ \lesssim& R^{-2}\iint \left\vert\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))\right\vert\cdot\langle a\rangle^3\left(\eta^{12}+\lambda^6+a^{12}+\xi^{12}\right)\cdot \frac{a^4}{\langle\widetilde{\bf V}\rangle^4}\cdot\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\\ \lesssim& R^{-2}N_1\cdot\iint \left\vert\partial_j\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))\right\vert\cdot \frac{d\vartheta d{\bf a}}{\langle\widetilde{\bf V}\rangle^4}\\ \lesssim& R^{-2}N_1\cdot\iint \left\vert\partial_j\varphi_R({\bf y}-{\bf x})\right\vert\cdot \frac{d{\bf x} d{\bf v}}{\langle{\bf v}\rangle^4}\lesssim N_1R, \end{split} \end{equation} where in the last line, we have used the fact that $(\vartheta,{\bf a})\mapsto(\widetilde{\X},\widetilde{\V})$ is canonical. 
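For the reader's convenience, we spell out the last step of \eqref{MainVolumeGainBC}. Since the map $(\vartheta,{\bf a})\mapsto(\widetilde{\X},\widetilde{\V})$ preserves the Liouville measure, we may substitute $d\vartheta d{\bf a}=d{\bf x}\, d{\bf v}$; with the normalization of $\varphi_R$ implicit in \eqref{eq:ERsimplebd} (so that $\partial_j\varphi_R$ is uniformly bounded and supported at scale $R$),
\begin{equation*}
\iint\left\vert\partial_j\varphi_R({\bf y}-{\bf x})\right\vert\frac{d{\bf x}\, d{\bf v}}{\langle{\bf v}\rangle^4}\lesssim\Big(\int_{\vert{\bf y}-{\bf x}\vert\lesssim R}d{\bf x}\Big)\cdot\Big(\int_{\mathbb{R}^3}\frac{d{\bf v}}{\langle{\bf v}\rangle^4}\Big)\lesssim R^3,
\end{equation*}
and multiplying by the prefactor $R^{-2}$ yields the factor $R$.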
In the bulk, since $ta\ge t^\frac{3}{4}/10$, and using \eqref{ApproxXBulk}, we have that \begin{equation}\label{EstimInBulk333} \begin{split} \vert {\bf y}-ta\vert\le \vert{\bf y}\vert/2,\qquad t^2+\vert{\bf y}\vert^2\lesssim_q\min\{\vert{\bf y}\vert^2\langle\xi\rangle^2,t^2\langle a\rangle^2\}, \end{split} \end{equation} and we can use Lemma \ref{LemVolume} to get \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]^\frac{3}{2}\vert\mathcal{E}_{R,\mathcal{B}}\vert&\lesssim (\vert{\bf y}\vert^3/R^2)\iint_{\mathcal{B}} \left|\partial_j\varphi_R({\bf y}-\widetilde{\X}(\vartheta,{\bf a}))\right|\cdot\langle \xi\rangle^3\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\\ &\lesssim R\norm{[\langle\xi\rangle^{6}+\ip{\eta}^3+\ip{\lambda}^3]\gamma}_{L^\infty_{\vartheta,{\bf a}}}^2, \end{split} \end{equation*} and we deduce control over the small scales: \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]^\frac{3}{2}\int_{R=0}^{(t^2+\vert{\bf y}\vert^2)^\frac{3}{8}}\vert\mathcal{E}_{R,\mathcal{B}}\vert\frac{dR}{R}\lesssim (t^2+\vert{\bf y}\vert^2)^\frac{3}{8}\cdot \norm{[\langle\xi\rangle^{6}+\ip{\eta}^3+\ip{\lambda}^3]\gamma}_{L^\infty_{\vartheta,{\bf a}}}^2. \end{split} \end{equation*} {\bf (A3) Small scales}: $R\le (t^2+\vert {\bf y}\vert^2)^\frac{3}{8}/100$, contributions of the effective field. We use that, on the support of integration, \begin{equation*} \begin{split} \langle\vartheta\rangle&\lesssim \frac{\xi^2}{q}\vert\eta\vert+\frac{\xi\lambda}{q},\qquad \vert{\bf y}\vert/t\lesssim\langle a\rangle, \end{split} \end{equation*} which follows from \eqref{TransitionFromSIC} for the first inequality and direct inspection (separating the cases $\vert {\bf y}\vert\le t$ and $\vert{\bf y}\vert\ge t$) for the second.
A simple rescaling gives \begin{equation}\label{ControlEEffSmallR} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]^\frac{3}{2}\vert\mathcal{E}^{eff}_R\vert&\lesssim R^{-2} t^3\iint \vert\partial_j\varphi_R({\bf y}-t{\bf a})\vert \langle a\rangle^3 \gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\lesssim R\Vert\langle\vartheta\rangle^2\langle a\rangle^{\frac{3}{2}}\gamma\Vert_{L^\infty_{\vartheta,{\bf a}}}^2\\ &\lesssim R \Vert \left[a^3+\xi^3+\vert\eta\vert^4+\lambda^4\right]\gamma\Vert_{L^\infty_{\vartheta,{\bf a}}}^2. \end{split} \end{equation} We conclude that the small scales give an acceptable contribution to the first line of \eqref{ControlEMom} since \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]\cdot \int_{R=0}^{(t^2+\vert{\bf y}\vert^2)^\frac{3}{8}} \left[t^2+\vert {\bf y}\vert^2\right]^{-\frac{3}{2}} dR\lesssim \left[t^2+\vert {\bf y}\vert^2\right]^{-\frac{1}{10}}. \end{split} \end{equation*} {\bf B}) Derivatives of the electric field. The bound on $\mathcal{F}$ follows similar lines, with a variation on small scales, where we make use of \eqref{FjkAndPB2} to improve the summability as $R\to0$. 
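Let us explain the role of \eqref{FjkAndPB2} at small scales: estimating $\mathcal{F}_{jk,R}$ directly costs a second derivative of $\varphi_R$, i.e.\ an extra factor $R^{-1}$ compared to $\mathcal{E}_{j,R}$, which is not summable against $dR/R$ as $R\to0$. The identity \eqref{FjkAndPB2} trades this derivative for a factor $t^{-1}$ and a Poisson bracket of $\gamma$, so that each scale contributes $O(R/t)$ as in \eqref{SmallScaleFOutsidBulk} below, and for instance
\begin{equation*}
\left[t^2+\vert{\bf y}\vert^2\right]^{\frac{3}{2}}\int_{R=0}^{(t^2+\vert{\bf y}\vert^2)^{\frac{3}{8}}}\frac{R}{t}\cdot\left[t^2+\vert{\bf y}\vert^2\right]^{-2}\,\frac{dR}{R}\lesssim\langle t\rangle^{-1}\left[t^2+\vert{\bf y}\vert^2\right]^{-\frac{1}{8}}\lesssim \langle t\rangle^{-\frac{1}{5}},
\end{equation*}
which is acceptable for the second line of \eqref{ControlEMom}.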
For large scales, we compare similarly \begin{equation*} \begin{split} \mathcal{F}_{jk,R}({\bf y},t)-\mathcal{F}^{eff}_{jk,R}({\bf y},t)&= R^{-3}\iint_{\mathcal{B}} \left[\partial_j\partial_k\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))-\partial_j\partial_k\varphi_R({\bf y}-t{\bf a})\right]\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\\ &\quad+R^{-3}\iint_{\mathcal{B}^c} \left[\partial_j\partial_k\varphi_R({\bf y}-\widetilde{\bf X}(\vartheta,{\bf a}))-\partial_j\partial_k\varphi_R({\bf y}-t{\bf a})\right]\gamma^2(\vartheta,{\bf a},t)d\vartheta d{\bf a}\\ &=II_{\mathcal{B}}+II_{\mathcal{B}^c}, \end{split} \end{equation*} and again \begin{equation*} \begin{split} \vert II_{\mathcal{B}}\vert&\lesssim R^{-4}\langle \ln t\rangle\Vert \left[1+\xi^4+\eta^4+\lambda^2\right]\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2,\\ \vert II^1_{\mathcal{B}^c}\vert+\vert II^2_{\mathcal{B}^c}\vert&\lesssim R^{-3}\left(\left[t^2+\vert{\bf y}\vert^2\right]^{-1}+\mathfrak{1}_{\{R\ge\sqrt{t^2+\vert{\bf y}\vert^2}\}}t^{-1}\right)\Vert\left[\xi^{2}+a^{2}+\vert\eta\vert^{2}+\lambda\right]\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2, \end{split} \end{equation*} and we conclude similarly. For small scales, we make use of \eqref{FjkAndPB2} and adapt the above computations, computing the contribution of each term separately. 
Outside the bulk, we use \eqref{UpperBoundytEF} and \eqref{eq:nonbulk-bds} together with the bounds \eqref{GeneralBoundZBracket} to get \begin{equation}\label{SmallScaleFOutsidBulk} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]^2\vert\mathcal{F}_{R,\mathcal{B}^c}\vert &\lesssim t^{-1}R^{-2}\iint_{\mathcal{B}^c} \left|\partial_j\varphi_R({\bf y}-\widetilde{\X}(\vartheta,{\bf a}))\right|\cdot\left[\xi^{16}+ a^{10}+\vert\eta\vert^{8}+\lambda^{8}\right]\vert \gamma \{\widetilde{\bf Z},\gamma\}\vert \frac{\langle a\rangle^4}{\langle\widetilde{\bf V}\rangle^4}d\vartheta d{\bf a}\\ &\lesssim t^{-1}R^{-2}\cdot\iint \left|\partial_j\varphi_R({\bf y}-\widetilde{\X}(\vartheta,{\bf a}))\right|\cdot\frac{d\vartheta d{\bf a}}{\langle\widetilde{\bf V}\rangle^4}\cdot N_2^2\lesssim (R/t)\cdot N_2^2. \end{split} \end{equation} In the bulk, using \eqref{FjkAndPB2} and \eqref{EstimInBulk333}, we can estimate \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]^2\vert\mathcal{F}_{R,\mathcal{B}}\vert&\lesssim R(\vert{\bf y}\vert/R)^3\iint_{\mathcal{B}} \left|\partial_j\varphi_R({\bf y}-\widetilde{\X}(\vartheta,{\bf a}))\right|\cdot\langle a\rangle\langle\xi\rangle^3\vert \gamma \{\widetilde{\bf Z},\gamma\}\vert d\vartheta d{\bf a} \end{split} \end{equation*} and using \eqref{PrecisedBoundZBracketBulk} with Lemma \ref{LemVolume}, we obtain the bound \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]^2\vert\mathcal{F}_{R,\mathcal{B}}\vert&\lesssim R\langle\ln t\rangle N_2^2, \end{split} \end{equation*} and this leads to an acceptable contribution.
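For completeness, let us verify that the bulk contribution is indeed acceptable: summing over the small scales,
\begin{equation*}
\left[t^2+\vert{\bf y}\vert^2\right]^{\frac{3}{2}}\int_{R=0}^{(t^2+\vert{\bf y}\vert^2)^{\frac{3}{8}}}\vert\mathcal{F}_{R,\mathcal{B}}\vert\,\frac{dR}{R}\lesssim\langle\ln t\rangle N_2^2\cdot\left[t^2+\vert{\bf y}\vert^2\right]^{\frac{3}{8}-\frac{1}{2}}\lesssim\langle t\rangle^{-\frac{1}{5}}N_2^2,
\end{equation*}
where we used $\left[t^2+\vert{\bf y}\vert^2\right]^{-\frac{1}{8}}\le\langle t\rangle^{-\frac{1}{4}}$ and $\langle\ln t\rangle\langle t\rangle^{-\frac{1}{4}}\lesssim\langle t\rangle^{-\frac{1}{5}}$, consistent with the second line of \eqref{ControlEMom}.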
Finally, the bound on the effective field is treated similarly using \eqref{TransitionFromSIC} to get bounds on the Poisson bracket with $\vartheta$: \begin{equation*} \begin{split} \mathcal{F}^{eff}_{jk,R}({\bf y},t)&=R^{-2}t^{-1}\iint\{\vartheta^k,\partial_j\varphi_R({\bf y}- t{\bf a} )\}\cdot\gamma^2(\vartheta,{\bf a} ,t)d\vartheta d{\bf a}\\ &=-2R^{-2}t^{-1}\iint \partial_j\varphi_R({\bf y}- t{\bf a} )\cdot \{\vartheta^k,\gamma\}\cdot\gamma(\vartheta,{\bf a} ,t)d\vartheta d{\bf a} . \end{split} \end{equation*} Therefore, in the bulk, using Lemma \ref{LemVolume}, \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]^2\vert \mathcal{F}^{eff}_{jk,R}({\bf y},t)\vert&\lesssim R^{-2}t^3\iint \vert\partial_j\varphi_R({\bf y}- t{\bf a} )\vert\cdot \langle a\rangle^4\langle\vartheta\rangle^4\vert\{\vartheta^k,\gamma\}\vert\cdot\gamma(\vartheta,{\bf a} ,t)\frac{d\vartheta d{\bf a}}{\langle\vartheta\rangle^4}\lesssim R N_2^2, \end{split} \end{equation*} and we can proceed similarly outside the bulk using \eqref{SmallScaleFOutsidBulk} instead. \end{proof} \begin{lemma}\label{LemVolume} Let $\tilde \varphi\in C^\infty_c(\mathbb{R})$ be a radial cutoff function, $\text{supp}(\tilde\varphi)\subset [0,3]$, and $\tilde\varphi> 0$ on $[0,2]$. Then on $\mathcal{B}$ we have the following bound: \begin{equation*} \begin{split} \iint_{\mathcal{B}}\tilde\varphi(R^{-1}\vert\widetilde{\bf X}-{\bf y}\vert)f(\vartheta,{\bf a},t)\, d\vartheta d{\bf a} &\lesssim \norm{\langle\xi\rangle\langle\eta\rangle^2\langle\lambda\rangle^2f}_{L^\infty_{\vartheta,{\bf a}}}(R/\vert {\bf y}\vert)^3. 
\end{split} \end{equation*} \end{lemma} \begin{proof} For any unit vector ${\bf e}$ with $\widetilde{\bf X}\cdot{\bf e}=0$, we observe that \begin{equation*} \begin{split} \vert\widetilde{\bf X}-{\bf y}\vert^2=({\bf e}\cdot{\bf y})^2+\vert \widetilde{\bf X}-{\bf y}_\perp\vert^2,\qquad{\bf y_\perp}=-{\bf e}\times({\bf e}\times{\bf y}), \end{split} \end{equation*} so that \begin{equation*} \begin{split} \tilde\varphi(R^{-1}\vert\widetilde{\bf X}-{\bf y}\vert)&\lesssim \tilde\varphi(R^{-1}|{\bf e}\cdot{\bf y}|)\tilde\varphi(R^{-1}\vert \widetilde{\bf X}-{\bf y}_\perp\vert). \end{split} \end{equation*} We can consider the {\it symplectic disintegration} of the Liouville measure associated to the mapping $AMom:(\vartheta,{\bf a})\mapsto {\bf l}={\bf L}/L\in\mathbb{S}^2$ and we obtain accordingly \begin{equation*} \begin{split} \iint d{\vartheta}d{\bf a}=\int_{{\bf l}\in\mathbb{S}^2}d\nu({\bf l})\iint_{\mathcal{P}_{\bf l}} d\bar{\vartheta} d\bar{{\bf a}},\qquad\mathcal{P}_{{\bf l}}=\{\vartheta\cdot{\bf l}=0={\bf a}\cdot{\bf l}\}. \end{split} \end{equation*} Indeed, both sides are invariant under joint rotations $(\vartheta,{\bf a})\mapsto (R\vartheta,R{\bf a})$ and their restrictions to a plane agree\footnote{Alternatively, one may observe that \begin{equation*} \omega_{\mathcal{P}_{\vartheta,{\bf a}}}=AMom^\ast\omega_{\mathbb{S}^2}+\omega_{\mathcal{P}_{\bf l}}, \end{equation*} where $\omega_Y$ stands for the natural symplectic form on $Y$. We thank P.\ G\'erard for this observation.}. Thus it suffices to consider the planar case. When ${\bf l}=(0,0,1)$, we choose coordinates $\xi,\eta,\lambda,\varphi$ such that \begin{equation*} {\bf a}=\frac{q}{\xi}(\cos\varphi,\sin\varphi,0),\qquad\varphi=\arctan({\bf a}^2/{\bf a}^1) \end{equation*} which satisfy the canonical relations: \begin{equation*} \begin{split} \{\xi,\eta\}=1=\{\varphi,\lambda\},\qquad\{\xi,\lambda\}=\{\eta,\lambda\}=0=\{\xi,\varphi\}=\{\eta,\varphi\}.
\end{split} \end{equation*} On $\mathcal{P}_{\bf l}\cap\mathcal{B}$, the mapping ${\bf Y}:\,(\xi,\varphi)\mapsto\widetilde{\bf X}(\xi,\eta,\lambda,\varphi)$ is an embedding\footnote{This can be seen since changing $\varphi$ amounts to a rotation about $0$, while $\partial_\xi\vert\widetilde{\bf X}\vert^2>0$.} $I\times\mathbb{S}^1\to\mathbb{R}^2\setminus\{0\}$ for some interval $I(\eta,\lambda,t)$, and, using \eqref{PBXV} and \eqref{DerXiX}, we see that, on the support of integration, we have the bound on the Jacobian \begin{equation*} \xi\vert\det\frac{\partial{\bf Y}}{\partial(\xi,\varphi)}\vert\simeq \vert \widetilde{\bf X}\vert^2\simeq \vert{\bf y}\vert^2, \end{equation*} and we deduce that \begin{equation*} \begin{split} \iint_{\mathcal{B}}\tilde\varphi(R^{-1}\vert\widetilde{\bf X}-{\bf y}\vert)f(\vartheta,{\bf a},t)\, d\vartheta d{\bf a} &\lesssim \int_{\mathbb{S}^2}\tilde\varphi(R^{-1}|{\bf l}\cdot{\bf y}|)\left(\iint_{\mathcal{P}_{\bf l}\cap\mathcal{B}}\tilde\varphi(R^{-1}\vert \widetilde{\bf X}-{\bf y}_\perp\vert)f\, d\bar{\vartheta} d\bar{{\bf a}}\right)d\nu({\bf l})\\ &\lesssim \norm{\langle\xi\rangle\ip{\eta}^2\ip{\lambda}^2f}_{L^\infty_{\vartheta,{\bf a}}}(R/\vert {\bf y}\vert)^3. \end{split} \end{equation*} \end{proof} \subsection{The effective fields} In this section, we complement Proposition \ref{PropControlEF} by obtaining bounds on the effective fields for particle distributions following the evolution equation \eqref{NewNLEq}. These follow from variations on the continuity equation. To control various terms, we introduce an appropriate weighted envelope function. For $\phi\in C^\infty_c(\mathbb{R}^3)$, define \begin{equation*} \begin{split} M_R({\bf y},t;\phi)&:=\iint \phi(R^{-1}({\bf y}-{\bf a}))\gamma^2(\vartheta,{\bf a},t)\, d\vartheta d{\bf a}.
\end{split} \end{equation*} Bounds on the effective electric potential, field, and derivatives will be related to the convergence properties of $M_R$ in various norms (see \eqref{BesovBoundMF1} and \eqref{BesovBoundMF2}). In particular, they allow us to obtain global bounds on the initial data. For conciseness, we introduce the near identity \begin{equation}\label{NearID} I_k({\bf y},R):=1+\left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\cdot\mathfrak{1}_{\{\vert{\bf y}\vert\le 10R\}}, \end{equation} and we can state our bounds in terms of $I_{k}({\bf y},R)$. \begin{lemma}\label{BoundIEFLem} We have bounds for the initial data \begin{equation*} \begin{split} \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\cdot M_R({\bf y},0;\phi)&\lesssim I_k({\bf y},R)\cdot\min\{1,R^3\}\cdot\left[\Vert \langle a\rangle^{k/2}\gamma_0\Vert_{L^2_{\vartheta,{\bf a}}}^2+\Vert \langle\vartheta\rangle^2\langle a\rangle^{k/2}\gamma_0\Vert_{L^\infty_{\vartheta,{\bf a}}}^2\right], \end{split} \end{equation*} and in particular \begin{equation*} \begin{split} \left[1+\vert{\bf y}\vert^2\right]\cdot\vert\mathcal{E}^{eff}({\bf y},0)\vert&\lesssim \Vert \langle a\rangle\gamma_0\Vert_{L^2_{\vartheta,{\bf a}}}^2+\Vert \langle a\rangle\langle\vartheta\rangle^2\gamma_0\Vert_{L^\infty_{\vartheta,{\bf a}}}^2,\\ \left[1+\vert{\bf y}\vert^2\right]^\frac{3}{2}\cdot\vert\mathcal{F}^{eff}({\bf y},0)\vert&\lesssim \Vert \langle a\rangle^\frac{3}{2}\gamma_0\Vert_{L^2_{\vartheta,{\bf a}}}^2+\Vert \langle a\rangle^\frac{3}{2}\langle\vartheta\rangle^2\gamma_0\Vert_{L^\infty_{\vartheta,{\bf a}}}^2.
\end{split} \end{equation*} \end{lemma} \begin{proof} Indeed, \begin{equation*} \begin{split} M_R({\bf y},0;\phi)&\lesssim\Vert \gamma_0\Vert_{L^2_{\vartheta,{\bf a}}}^2,\qquad \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\cdot M_R({\bf y},0;\phi)\mathfrak{1}_{\{\vert{\bf y}\vert\ge 10R\}}\lesssim \Vert \langle a\rangle^{k/2}\gamma_0\Vert_{L^2_{\vartheta,{\bf a}}}^2, \end{split} \end{equation*} and when $R\lesssim 1$, \begin{equation*} \begin{split} M_R({\bf y},0;\phi)&\lesssim R^3\Vert \langle\vartheta\rangle^2\gamma_0\Vert_{L^\infty_{\vartheta,{\bf a}}}^2,\qquad \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\cdot M_R({\bf y},0;\phi)\mathfrak{1}_{\{\vert{\bf y}\vert\ge 10R\}}\lesssim R^3\Vert \langle\vartheta\rangle^2\langle a\rangle^{k/2}\gamma_0\Vert_{L^\infty_{\vartheta,{\bf a}}}^2. \end{split} \end{equation*} The bounds on $\mathcal{E}^{eff}$ and $\mathcal{F}^{eff}$ follow by direct integration, decomposing into $\{\vert{\bf y}\vert\le 10R\}$ and $\{\vert{\bf y}\vert\ge 10R\}$. \end{proof} Assuming only moment bounds, we can obtain almost sharp decay for the effective electric field, and assuming moments and Poisson brackets, we can obtain sharp decay for the effective electric field and almost sharp decay for its derivatives. \begin{proposition}\label{PropEeff} $(i)$ Assume that $\gamma$ satisfies \eqref{NewNLEq} and the bounded moment bootstrap assumptions \eqref{eq:mom_btstrap_assump}. For any fixed $\phi\in C^\infty_c(\mathbb{R}^3)$ and any $0\le s\le t\le T$, there holds that, uniformly in $R,{\bf y}$ and $0\le k\le 3$, \begin{equation}\label{BesovBoundMF1} \begin{split} \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\vert M_R({\bf y},t;\phi)-M_R({\bf y},s;\phi)\vert\lesssim_\phi\varepsilon_1^4I_k({\bf y},R)\cdot\min\{R^2,R^{-1}\}\int_s^t\langle u\rangle^{-\frac{3}{2}}du. \end{split} \end{equation} In particular, $M_R({\bf y},t;\phi)$ converges uniformly to a limit $M_\infty({\bf y};\phi)$.
This implies almost optimal decay on the effective electric field \begin{equation}\label{BoundEff1} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]\cdot\vert\mathcal{E}^{eff}({\bf y},t)\vert&\lesssim \varepsilon_1^2 \ln\ip{t}. \end{split} \end{equation} $(ii)$ Assume in addition that $\gamma$ satisfies the stronger bootstrap assumption \eqref{eq:deriv_bstrap_assump2}, then we can strengthen \eqref{BesovBoundMF1} to \begin{equation}\label{BesovBoundMF2} \begin{split} \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\vert M_R({\bf y},t;\phi)-M_R({\bf y},s;\phi)\vert\lesssim_\phi\varepsilon_1^4I_k({\bf y},R)\cdot\min\{R^3,R^{-1}\}\int_s^t\langle u\rangle^{-\frac{6}{5}}du, \end{split} \end{equation} so that \begin{equation}\label{BoundEeff2} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]\cdot\vert\mathcal{E}^{eff}({\bf y},t)\vert&\lesssim \varepsilon_1^2,\\ \left[t^2+\vert{\bf y}\vert^2\right]^\frac{3}{2}\cdot\vert\mathcal{F}^{eff}({\bf y},t)\vert&\lesssim \varepsilon_1^2\ln\ip{t}. \end{split} \end{equation} $(iii)$ In addition, under the hypothesis of $(ii)$, there exists $\Psi^\infty$ and $\mathcal{E}_j^\infty=\partial_j\Psi^\infty$ such that \begin{equation}\label{AsymptoticEFCCL} \begin{split} \Vert \langle{\bf a}\rangle(t\Psi^{eff}(t{\bf a},t)-\Psi^\infty({\bf a}))\Vert_{L^\infty}+\Vert \langle {\bf a}\rangle^2(t^2\mathcal{E}^{eff}_j(t{\bf a},t)-\mathcal{E}^\infty_j({\bf a}))\Vert_{L^\infty}&\lesssim\varepsilon_1^2\langle t\rangle^{-\frac{1}{10}}, \end{split} \end{equation} and consequently \begin{equation}\label{AsymptoticEFCCL2} \begin{split} \Vert t\Psi(\widetilde{\bf X}(\vartheta,{\bf a}),t)-\Psi^\infty({\bf a})\Vert_{L^\infty(\mathcal{B})}+\Vert t^2\mathcal{E}_j(\widetilde{\bf X}(\vartheta,{\bf a}),t)-\mathcal{E}_j^\infty({\bf a})\Vert_{L^\infty(\mathcal{B})}\lesssim \varepsilon_1^2 t^{-\frac{1}{10}}. \end{split} \end{equation} \end{proposition} \begin{proof}[Proof of Proposition \ref{PropEeff}] Let $\phi_R({\bf x}):=\phi(R^{-1}{\bf x})$. 
Using \eqref{NewNLEq}, we compute that \begin{equation*} \begin{split} \partial_tM_R({\bf y},t;\phi)&=\iint \phi_R({\bf y}-{\bf a})\{\mathbb{H}_4,\gamma^2\}\, d\vartheta d{\bf a}=-\iint \gamma^2\{\mathbb{H}_4,\phi_R({\bf y}-{\bf a})\}\, d\vartheta d{\bf a}\\ &=R^{-1}\iint_{\mathcal{B}} \gamma^2\left[\mathcal{E}_j\{\widetilde{\bf X}^j,{\bf a}^l\}+\mathcal{W}_j\{\widetilde{\bf V}^j,{\bf a}^l\}\right](\partial_l\phi_R)({\bf y}-{\bf a})\, d\vartheta d{\bf a}\\ &\quad+R^{-1}\iint_{\mathcal{B}^c} \gamma^2\left[\mathcal{E}_j\{\widetilde{\bf X}^j,{\bf a}^l\}+\mathcal{W}_j\{\widetilde{\bf V}^j,{\bf a}^l\}\right](\partial_l\phi_R)({\bf y}-{\bf a})\, d\vartheta d{\bf a}, \end{split} \end{equation*} and, using Corollary \ref{CorPBa}, \eqref{TransitionFromSIC} and a crude estimate, we can get a good bound in the bulk: \begin{equation*} \begin{split} \vert\partial_tM_{R,\mathcal{B}}({\bf y},t)\vert&\lesssim R^{-1}\left\vert \iint_{\mathcal{B}} \gamma^2\left[\mathcal{E}_j\{\widetilde{\bf X}^j,{\bf a}^l\}+\mathcal{W}_j\{\widetilde{\bf V}^j,{\bf a}^l\}\right](\partial_l\phi_R)({\bf y}-{\bf a})\, d\vartheta d{\bf a}\right\vert\\ &\lesssim \left[\Vert\mathcal{E}\Vert_{L^\infty}+\langle t\rangle^{-1}\vert\mathcal{W}\vert\right]\cdot\min\{R^2,R^{-1}\}\cdot\left[\Vert\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2+\Vert \langle\vartheta\rangle^2\gamma\Vert_{L^\infty_{\vartheta,{\bf a}}}^2\right], \end{split} \end{equation*} and assuming that $\vert{\bf y}\vert\ge 10R$, \begin{equation*} \begin{split} \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\vert\partial_tM_{R,\mathcal{B}}({\bf y},t)\vert \mathfrak{1}_{\{\vert{\bf y}\vert\ge 10R\}}&\lesssim R^{-1}\langle a\rangle^\frac{k}{2}\left\vert \iint_{\mathcal{B}} \gamma^2\left[\mathcal{E}_j\{\widetilde{\bf X}^j,{\bf a}^l\}+\mathcal{W}_j\{\widetilde{\bf V}^j,{\bf a}^l\}\right](\partial_l\phi_R)({\bf y}-{\bf a})\, d\vartheta d{\bf a}\right\vert\\ &\lesssim \left[\Vert\mathcal{E}\Vert_{L^\infty}+\langle 
t\rangle^{-1}\vert\mathcal{W}\vert\right]\cdot\min\{R^2,R^{-1}\}\cdot\left[\Vert\langle a\rangle^\frac{k}{2}\gamma\Vert_{L^2_{\vartheta,{\bf a}}}^2+\Vert\langle a\rangle^\frac{k}{2} \langle\vartheta\rangle^2\gamma\Vert_{L^\infty_{\vartheta,{\bf a}}}^2\right]. \end{split} \end{equation*} Outside the bulk we also use \eqref{eq:nonbulk-bds} to get \begin{equation*} \begin{split} \vert\partial_tM_{R,\mathcal{B}^c}({\bf y},t)\vert&\lesssim R^{-1}\left\vert \iint_{\mathcal{B}^c} \gamma^2\left[\mathcal{E}_j\{\widetilde{\bf X}^j,{\bf a}^l\}+\mathcal{W}_j\{\widetilde{\bf V}^j,{\bf a}^l\}\right](\partial_l\phi_R)({\bf y}-{\bf a})\, d\vartheta d{\bf a}\right\vert\\ &\lesssim R^{-1}\left[\Vert\vert{\bf x}\vert\mathcal{E}\Vert_{L^\infty}+\vert\mathcal{W}\vert\right]\cdot\langle t\rangle^{-1}\iint_{\mathcal{B}^c}\frac{q}{\xi^2}\langle a\rangle\left[a^4+\xi^4+\eta^4+\lambda^2\right]\gamma^2\cdot\vert (\partial_l\phi_R)({\bf y}-{\bf a})\vert d\vartheta d{\bf a}\\ &\lesssim \varepsilon_1^2\langle t\rangle^{-\frac{7}{4}}\min\{R^2,R^{-1}\}N_1^2, \end{split} \end{equation*} and similarly \begin{equation*} \begin{split} &\left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\vert\partial_tM_{R,\mathcal{B}^c}({\bf y},t)\vert\mathfrak{1}_{\{\vert{\bf y}\vert\ge 10R\}}\\ \lesssim &\,R^{-1}\left\vert \iint_{\mathcal{B}^c} \gamma^2\left[\mathcal{E}_j\{\widetilde{\bf X}^j,{\bf a}^l\}+\mathcal{W}_j\{\widetilde{\bf V}^j,{\bf a}^l\}\right](\partial_l\phi_R)({\bf y}-{\bf a})\, d\vartheta d{\bf a}\right\vert\\ \lesssim &\,R^{-1}\left[\Vert\vert{\bf x}\vert\mathcal{E}\Vert_{L^\infty}+\vert\mathcal{W}\vert\right]\cdot\langle t\rangle^{-1}\iint_{\mathcal{B}^c}\frac{q}{\xi^2}\langle a\rangle^{k+1}\left[a^4+\xi^4+\eta^4+\lambda^2\right]\gamma^2\cdot\vert (\partial_l\phi_R)({\bf y}-{\bf a})\vert d\vartheta d{\bf a}\\ \lesssim &\,\varepsilon_1^2\langle t\rangle^{-\frac{7}{4}}\min\{R^2,R^{-1}\}N_1^2. 
\end{split} \end{equation*} Adding the two lines above, we get that, for $k\le 3$, \begin{equation}\label{BounddMdt1} \begin{split} \vert \partial_tM_R({\bf y},t;\phi)\vert&\lesssim \varepsilon_1^4\langle t\rangle^{-\frac{7}{4}}N_1^2\cdot\min\{R^2,R^{-1}\},\\ \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\cdot \vert \partial_tM_R({\bf y},t;\phi)\vert\mathfrak{1}_{\{\vert{\bf y}\vert\ge10R\}}&\lesssim \varepsilon_1^4\langle t\rangle^{-\frac{7}{4}}N_1^2\cdot\min\{R^2,R^{-1}\}, \end{split} \end{equation} which gives \eqref{BesovBoundMF1}. We can now fix $\phi\ge 0$ such that $\phi\varphi=\varphi$ (with $\varphi$ from \eqref{IntroducingVarphi}) and let $M_R^{(2)}({\bf y},t):=M_R({\bf y},t;\phi)$. Letting $R_1=\min\{t,(t^2+\vert{\bf y}\vert^2)^\frac{3}{8}\}$, we see that \begin{equation*} \begin{split} \left[t^2+\vert{\bf y}\vert^2\right]\cdot\mathcal{E}^{eff}_{j}({\bf y},t)&=-\left[t^2+\vert{\bf y}\vert^2\right]\cdot\int_{R=0}^{\infty}\iint\partial_j\varphi_R({\bf y}-t{\bf a})\gamma^2(\vartheta,{\bf a},t)\,d\vartheta d{\bf a}\, \frac{dR}{R^3}\\ &=-\left[1+\vert{\bf y}/t\vert^2\right]\cdot \int_{r=0}^{\infty} \frac{1}{r^2}\iint(\partial_j\varphi)(r^{-1}({\bf y}/t-{\bf a}))\gamma^2(\vartheta,{\bf a},t)\,d\vartheta d{\bf a}\, \frac{dr}{r}, \end{split} \end{equation*} where $r=R/t$.
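For the reader's convenience: the second identity follows from the substitution $R=tr$, for which $dR/R^3=t^{-2}\,dr/r^3$, together with $({\bf y}-t{\bf a})/R=r^{-1}({\bf y}/t-{\bf a})$ and $t^2+\vert{\bf y}\vert^2=t^2\left[1+\vert{\bf y}/t\vert^2\right]$, the factors of $t$ cancelling exactly; here we use the normalization $\partial_j\varphi_R({\bf z})=(\partial_j\varphi)({\bf z}/R)$, consistent with \eqref{eq:ERsimplebd}.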
To go further, we observe the simple bounds, as in Lemma \ref{BoundIEFLem}, \begin{equation}\label{SimpleBoundsMR} \begin{split} \langle{\bf y}\rangle^k\cdot M_R({\bf y},t;\phi)&\lesssim I_k({\bf y},R)\cdot\min\{1,R^3\}\cdot\left[\Vert \langle a\rangle^{k/2}\gamma(t)\Vert_{L^2_{\vartheta,{\bf a}}}^2+\Vert \langle\vartheta\rangle^2\langle a\rangle^{k/2}\gamma(t)\Vert_{L^\infty_{\vartheta,{\bf a}}}^2\right], \end{split} \end{equation} and therefore we can estimate the contribution of the low scales $0\le r\le r_1:=R_1/t\le 1$: \begin{equation*} \begin{split} I_{low}&:=\left[1+\vert{\bf y}/t\vert^2\right]\cdot \left\vert \int_{r=0}^{r_1} \frac{1}{r^2}\iint(\partial_j\varphi)(r^{-1}({\bf y}/t-{\bf a}))\gamma^2(\vartheta,{\bf a},t)\,d\vartheta d{\bf a}\, \frac{dr}{r}\right\vert\\ &\lesssim\langle {\bf y}/t\rangle^2 \int_{r=0}^{r_1} M^{(2)}_r({\bf y}/t,t)\, \frac{dr}{r^3}\lesssim\varepsilon_1^2 N_1^2r_1, \end{split} \end{equation*} while for the higher scales, we integrate in time using \eqref{SimpleBoundsMR} at time $t=0$ and \eqref{BounddMdt1}: \begin{equation*} \begin{split} I_{high}&:=\left[1+\vert{\bf y}/t\vert^2\right]\cdot \left\vert \int_{r=r_1}^{\infty} \frac{1}{r^2}\iint(\partial_j\varphi)(r^{-1}({\bf y}/t-{\bf a}))\gamma^2(\vartheta,{\bf a},t)\,d\vartheta d{\bf a}\, \frac{dr}{r}\right\vert\\ &\lesssim\langle {\bf y}/t\rangle^2 \int_{r=r_1}^\infty\vert M^{(2)}_r({\bf y}/t,t)-M^{(2)}_r({\bf y}/t,0)\vert \, \frac{dr}{r^3}+\langle {\bf y}/t\rangle^2 \int_{r=r_1}^\infty M^{(2)}_r({\bf y}/t,0)\, \frac{dr}{r^3}\\ &\lesssim \varepsilon_0^2 N_1^2+\varepsilon_1^4N_1^2\int_{r=r_1}^\infty\frac{dr}{r\langle r\rangle^\frac{1}{2}}\lesssim\varepsilon_0^2+\varepsilon_1^4\langle \ln r_1\rangle, \end{split} \end{equation*} which gives \eqref{BoundEff1}.
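Let us also record how the logarithm arises: since $(t^2+\vert{\bf y}\vert^2)^{\frac{3}{8}}\ge t^{\frac{3}{4}}$, we have
\begin{equation*}
\min\{1,t^{-\frac{1}{4}}\}\le r_1\le 1,\qquad\text{hence}\qquad \langle\ln r_1\rangle\lesssim\langle\ln t\rangle,
\end{equation*}
and combining the bounds for $I_{low}$ and $I_{high}$ (together with the bootstrap control of $N_1$) gives \eqref{BoundEff1}.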
If we also have control of some derivatives, we can alter the first step to get that, for $k\le 3$, \begin{equation}\label{BounddMdt1New} \begin{split} \left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\cdot R^{-3}\vert \partial_tM_R({\bf y},t;\phi)\vert&\lesssim \varepsilon_1^4I_k({\bf y},R)\cdot\langle t\rangle^{-\frac{5}{4}}N_2^2. \end{split} \end{equation} For $R\ge t^{-\frac{1}{2}}$, we can still use \eqref{BounddMdt1}. If $R\le t^{-\frac{1}{2}}$, we compute that \begin{equation*} \begin{split} \partial_tM_R({\bf y},t;\phi)&=\iint \phi_R({\bf y}-{\bf a})\{\mathbb{H}_4,\gamma^2\}\, d\vartheta d{\bf a}=2\iint \phi_R({\bf y}-{\bf a})\left[\mathcal{E}_j\{\widetilde{\bf X},\gamma\}+\mathcal{W}_j\{\widetilde{\bf V},\gamma\}\right]\gamma\, d\vartheta d{\bf a}, \end{split} \end{equation*} and therefore, using Lemma \ref{ZPBFormula} and Lemma \ref{LemVolume} inside the bulk, and Lemma \ref{ZPBFormula}, \eqref{eq:nonbulk-bds} and \eqref{MainVolumeGainBC} outside the bulk, we obtain that \begin{equation*} \begin{split} R^{-3}\left[1+\vert{\bf y}\vert^2\right]^\frac{k}{2}\vert \partial_tM_R\vert&\lesssim I_k({\bf y},R)\cdot\left[t\Vert \mathcal{E}\Vert_{L^\infty}+t\vert\mathcal{W}\vert\right]\cdot N_2^2, \end{split} \end{equation*} which is enough to get \eqref{BesovBoundMF2}. Proceeding as above, but using \eqref{BesovBoundMF2} instead of \eqref{BesovBoundMF1}, we also get \eqref{BoundEeff2}. This also implies \eqref{AsymptoticEFCCL} since \begin{equation*} \begin{split} \langle a\rangle^2\vert t^2\mathcal{E}^{eff}_j(t{\bf a},t)-s^2\mathcal{E}^{eff}_j(s{\bf a},s)\vert&\lesssim \langle a\rangle^2\int_{r=0}^{\infty}\left\vert\iint\partial_j\varphi_{r}({\bf a}-\alpha)\left[\gamma^2(\theta,\alpha,t)-\gamma^2(\theta,\alpha,s)\right]\, d\theta d\alpha\right\vert \frac{dr}{r^3}\\ &\lesssim \int_{r=0}^\infty \langle a\rangle^2\vert M^{(2)}_r({\bf a},t)-M^{(2)}_r({\bf a},s)\vert \frac{dr}{r^3}, \end{split} \end{equation*} and we conclude using \eqref{BesovBoundMF2}.
Finally \eqref{AsymptoticEFCCL2} follows from \eqref{BoundEeff2}, \eqref{AsymptoticEFCCL}, Proposition \ref{PropControlEF} and \eqref{ApproxXBulk} once we observe that, for $(\vartheta,{\bf a})\in\mathcal{B}$, \begin{equation*} \begin{split} \vert \mathcal{E}_j(\widetilde{\bf X}(\vartheta,{\bf a}),t)-\mathcal{E}_j(t{\bf a},t)\vert&\lesssim \Vert\mathcal{F}\Vert_{L^\infty}\cdot\vert\widetilde{\bf X}(\vartheta,{\bf a})-t{\bf a}\vert\lesssim\varepsilon_1^2\langle t\rangle^{-5/2}. \end{split} \end{equation*} \end{proof} \section{Nonlinear analysis: bootstrap propagation}\label{SecBootstrap} In this section we will establish moment and derivative control in the nonlinear dynamics. In our terminology, ``linear'' will henceforth refer to features of the linearized equations. In particular, the linear characteristics are the solutions of the linearized equations, i.e.\ of the Kepler problem \eqref{ODE}, and thus by no means straight lines. \subsection{Nonlinear unknowns}\label{Sec:NLUnknowns} We now switch to a new unknown adapted to the study of nonlinear asymptotic dynamics. We fix the (forward) asymptotic action map $\mathcal{T}:({\bf x},{\bf v})\mapsto (\vartheta,{\bf a})$ of Proposition \ref{PropAA} and we define \begin{equation} \gamma:=\nu\circ\mathcal{T}^{-1}\circ\Phi_t^{-1}, \end{equation} where $\Phi_t$ is the flow of the linear characteristics of $\nu$, i.e.\ of the Kepler problem \eqref{ODE}. 
More explicitly, we have \begin{equation}\label{NewNLUnknown} \begin{split} \gamma(\vartheta,{\bf a},t)&=\nu({\bf X}(\vartheta+t{\bf a},{\bf a}),{\bf V}(\vartheta+t{\bf a},{\bf a}),t),\\ \nu(q,p,t)&=\gamma(\Theta(q,p)-t\mathcal{A}(q,p),\mathcal{A}(q,p),t), \end{split} \end{equation} and since $\Phi_t^{-1}\circ\mathcal{T}^{-1}$ is a canonical transformation (Proposition \ref{PropAA}) that filters out the linear flow, we observe that $\gamma$ evolves under the purely nonlinear Hamiltonian $\mathbb{H}_4\circ\Phi_t^{-1}\circ\mathcal{T}^{-1}$ of \eqref{Hamiltonians}: with a slight abuse of notation we have \begin{equation}\label{NewNLEq} \begin{split} \partial_t\gamma+\{\mathbb{H}_4,\gamma\}=0,\qquad \gamma(t=0)=\nu_0,\qquad\mathbb{H}_4=Q\psi(\widetilde{\bf X})+\mathcal{W}\cdot \widetilde{\bf V},\\ \psi({\bf y},t)=-\frac{1}{4\pi}\iint \frac{1}{\vert {\bf y}-\widetilde{\bf X}(\theta,\alpha)\vert}\gamma^2(\theta,\alpha,t)\, d\theta d\alpha,\qquad\dot{\mathcal{W}}=\mathcal{Q}\nabla_x\psi(0,t),\qquad\mathcal{W}(t)\to_{t\to T^\ast}0. \end{split} \end{equation} This is the equation we will focus on. \subsection{Moment propagation}\label{sec:moments_prop} In this section we will show that control of moments and (almost sharp) decay of the electric field can be obtained independently of derivative bounds. \begin{theorem}\label{thm:global_moments} Let $m\ge 30$, and assume that the initial density $\nu_0$ satisfies \begin{equation}\label{eq:mom_id} \begin{split} \Vert \langle a\rangle^{2m}\nu_0\Vert_{L^r}+\Vert \langle \xi\rangle^{2m}\nu_0\Vert_{L^r}+\Vert \langle \lambda\rangle^{2m}\nu_0\Vert_{L^r}+\Vert \langle\eta\rangle^m\nu_0\Vert_{L^\infty}&\le \varepsilon_0,\qquad r\in\{2,\infty\}. 
\end{split} \end{equation} Then there exists a global solution $\gamma$ to \eqref{NewNLEq} that satisfies the bounds for $r\in\{2,\infty\}$ \begin{equation}\label{eq:gl_mom_bds} \begin{split} \Vert \langle a\rangle^{2m}\gamma(t)\Vert_{L^r}+\Vert \langle \xi\rangle^{2m}\gamma(t)\Vert_{L^r}&\le 2\varepsilon_0,\\ \Vert \langle \lambda\rangle^{2m}\gamma(t)\Vert_{L^r}&\le 2\varepsilon_0\ln^{2m}\ip{t},\\ \Vert \langle\eta\rangle^m\gamma(t)\Vert_{L^\infty}&\le 2\varepsilon_0 \ln^{2m}\ip{t}. \end{split} \end{equation} In particular, $T^\ast=\infty$, $\mathcal{W}(t)\to 0$ as $t\to\infty$, and the electric field decays as \begin{equation*} \begin{split} \Vert \mathcal{E}(t)\Vert_{L^\infty_x}\lesssim \varepsilon_0^2\ln\ip{t}/\langle t\rangle^2. \end{split} \end{equation*} \end{theorem} By standard local existence theory, in order to establish Theorem \ref{thm:global_moments}, it suffices to prove the following result concerning the propagation of moments: \begin{proposition}\label{prop:moments_prop} Let $m\ge 30$, and assume that the initial density $\nu_0$ satisfies \eqref{eq:mom_id}. 
If $(\gamma,\mathcal{W})$ is a solution of \eqref{NewNLEq} on $[0,T^\ast]$ satisfying \begin{equation}\label{eq:mom_btstrap_assump} \begin{split} \Vert \langle a\rangle^{2m}\gamma(t)\Vert_{L^r}+\Vert \langle \xi\rangle^{2m}\gamma(t)\Vert_{L^r}+\Vert \langle \lambda\rangle^{2m}\gamma(t)\Vert_{L^r}+\Vert \langle\eta\rangle^m\gamma(t)\Vert_{L^\infty}&\leq \varepsilon_1\langle \ln(2+t)\rangle^{4m},\quad r\in\{2,\infty\},\\ \sqrt{1+t^2+\vert{\bf y}\vert^2}\vert\mathcal{E}({\bf y},t)\vert&\leq\varepsilon_1^2\langle 2+t\rangle^{-\frac{1}{2}}, \end{split} \end{equation} then we have the almost optimal bound for $0\le t\le T^\ast$: \begin{equation}\label{AlmostSharpDecayEF} \left[1+t^2+\vert{\bf y}\vert^2\right]\vert \mathcal{E}({\bf y},t)\vert\lesssim \varepsilon_1^2\ip{\ln(2+t)}, \end{equation} and in fact we have the improved moment bounds for $r\in\{2,\infty\}$: there exists $C>0$ such that, for $0\le t\le T^\ast$: \begin{align} \Vert \ip{a}^{2m}\gamma(t)\Vert_{L^r}+\Vert \ip{\xi}^{2m}\gamma(t)\Vert_{L^r}&\le \varepsilon_0 +C\varepsilon_1^3,\label{eq:mom_btstrap_claim1}\\ \norm{\lambda^n\gamma(t)}_{L^r}&\le \varepsilon_0+C\varepsilon_1^3\langle \ln(2+t)\rangle^{2n},\qquad 1\leq n\leq 2m,\label{eq:mom_btstrap_claim2}\\ \norm{\eta^k\gamma(t)}_{L^\infty}&\le \varepsilon_0+C\varepsilon_1^3\langle \ln(2+t)\rangle^{2k},\qquad 1\leq k\leq m.\label{eq:mom_btstrap_claim3} \end{align} \end{proposition} \begin{remark}\label{RemarkPropositionMoments} \begin{enumerate} \item Moments in $a$ are relevant when $a$ is large, and thus when the trajectories of the linearized problem closely approach the point charge. In this setting, it is more favorable to work with the close formulation of Section \ref{ssec:pinnedframe}, where the microscopic velocities are centered around that of the point charge. 
\item Moments in $a$ and $\xi$ can be propagated by themselves, and we only make use of one moment in $L$ resp.\ $\eta$ to obtain a uniform-in-time (rather than logarithmically growing) bound for $\xi$ moments. \item Moments in $L$ and $\eta$ lead to a slow logarithmic growth. Here $\eta$ is not conserved under the linear flow, and we can only propagate half as many of the associated moments (and only in $L^\infty$) -- see \eqref{eq:mom_btstrap_claim3}. \end{enumerate} \end{remark} The proof of this proposition relies on the observation that for any weight function $\mathfrak{w}:\mathcal{P}_{{\bf x},{\bf v}}\to\mathbb{R}$ or $\mathfrak{w}:\mathcal{P}_{\vartheta,{\bf a}}\to\mathbb{R}$ there holds that \begin{equation}\label{eq:mom-prop} \begin{split} \partial_t(\mathfrak{w}\nu)-\{\mathbb{H},(\mathfrak{w}\nu)\}=-\nu\{\mathbb{H},\mathfrak{w}\},\qquad \partial_t(\mathfrak{w}\gamma)+\{\mathbb{H}_4,(\mathfrak{w}\gamma)\}=\gamma\{\mathbb{H}_4,\mathfrak{w}\}. \end{split} \end{equation} \begin{proof}[Proof of Proposition \ref{prop:moments_prop}] Since $m\ge 30$, we get from \eqref{eq:mom_btstrap_assump} that \begin{equation*} N_1\lesssim \varepsilon_1\langle\ln(2+t)\rangle^{100}. \end{equation*} The electric field decay \eqref{AlmostSharpDecayEF} follows by combining Proposition \ref{PropControlEF} and Proposition \ref{PropEeff}. From this and \eqref{NewNLEq} it directly follows that \begin{equation}\label{eq:Wdecay} \abs{\mathcal{W}(t)}\lesssim \int_t^{T^\ast}\abs{\mathcal{E}(s)}ds\lesssim \varepsilon_1^2 \ip{\ln(2+t)}\ip{t}^{-1}. \end{equation} \paragraph{\textbf{Proof of \eqref{eq:mom_btstrap_claim1}}} \paragraph{Moments in $a$} As explained in Remark \ref{RemarkPropositionMoments}, we use the close formulation. Using \eqref{ComparableMomentNorms}, it suffices to prove the bound for $\gamma^\prime$, for which we can use \eqref{NewNLEq'}. 
Using \eqref{eq:mom-prop}, we obtain that \begin{equation*} \begin{split} \partial_t(\langle a\rangle^n\gamma^\prime)+\{\mathbb{H}_4^\prime,\langle a\rangle^n\gamma^\prime\}&=\gamma^\prime\{\mathbb{H}_4^\prime,\langle a\rangle^n\}=\gamma^\prime\cdot na\langle a\rangle^{n-2}\left[Q\mathcal{E}_j(\widetilde{\bf X})-\dot{\mathcal{W}}_j\right]\{\widetilde{\bf X}^j,a\}, \end{split} \end{equation*} and using \eqref{PBX1}, \eqref{AlmostSharpDecayEF} and \eqref{eq:Wdecay}, we deduce that \begin{equation} \frac{d}{dt}\norm{\ip{a}^n\gamma'}_{L^r}\lesssim \varepsilon_1^2 \ip{t}^{-2}\ip{\ln(2+t)}\cdot \norm{\ip{a}^{n-1}\gamma'}_{L^r}, \end{equation} which gives the uniform bounds for moments in $a$ in \eqref{eq:mom_btstrap_claim1}. \paragraph{Moments in $\xi$} Letting $1\leq n\leq 2m$ we have that \begin{equation}\label{PBMomXi1} \{\mathbb{H}_4,\ip{\xi}^{n}\}=n\ip{\xi}^{n-2}\xi\left[\mathcal{E}_j(\widetilde{\bf X},t)\cdot\{\widetilde{\X}^j,\xi\}+\mathcal{W}_j\{\widetilde{\V}^j,\xi\}\right], \end{equation} and using \eqref{PBX1} followed by \eqref{CrudeBoundX} then \eqref{AlmostSharpDecayEF}, we get that \begin{equation}\label{PBMomXi2} \begin{split} \langle\xi\rangle^{-2}\xi\vert\mathcal{E}_j(\widetilde{\bf X},t)\cdot\{\widetilde{\X}^j,\xi\}\vert\lesssim \xi\vert\mathcal{E}(\widetilde{\bf X},t)\vert\lesssim \vert\widetilde{\bf X}\vert^\frac{1}{2}\vert\mathcal{E}(\widetilde{\bf X},t)\vert\lesssim \varepsilon_1^2\langle t\rangle^{-9/8}. \end{split} \end{equation} For the Poisson bracket with $\widetilde{\V}$ we split into bulk and non-bulk regions. 
In the bulk, using \eqref{eq:bulk-bds}, we get \begin{equation*} \vert\{\widetilde{\V},\xi\}\vert\mathfrak{1}_{\mathcal{B}}\lesssim \frac{\xi^3}{q\vert\widetilde{\X}\vert^2}\mathfrak{1}_{\mathcal{B}}\lesssim \frac{\xi^5}{t^2q^3}\mathfrak{1}_{\mathcal{B}}\lesssim\ip{t}^{-1/2}, \end{equation*} while outside the bulk, we use \eqref{CrudeBoundX} to get \begin{equation*} \vert\{\widetilde{\V},\xi\}\vert\mathfrak{1}_{\mathcal{B}^c}\lesssim \frac{\xi^3}{q\vert\widetilde{\X}\vert^2}\mathfrak{1}_{\mathcal{B}^c}\lesssim \frac{q\xi}{\xi^2+\lambda^2}\cdot \ip{t}^{-1/4n}(\eta^4+\lambda^2+a^4+\xi^4 )^{1/4n}, \end{equation*} and thus \begin{equation}\label{PBMomXi3} \langle\xi\rangle^{n-2}\xi\vert\mathcal{W}\cdot\{\widetilde{\V},\xi\}\vert\lesssim \abs{\mathcal{W}(t)}\left[\ip{t}^{-1/2}\langle\xi\rangle^{n-1}+\ip{t}^{-1/4n}(\ip{\xi}^{n-1}+\eta+\lambda)\right]. \end{equation} Using \eqref{eq:mom-prop}, \eqref{PBMomXi1}, \eqref{PBMomXi2} and \eqref{PBMomXi3}, we deduce that \begin{equation*} \begin{split} \frac{d}{dt}\Vert\langle\xi\rangle^n\gamma\Vert_{L^r}&\lesssim\varepsilon_1^2\langle t\rangle^{-9/8}\Vert\langle\xi\rangle^n\gamma\Vert_{L^r}+\langle t\rangle^{-1/4n}\vert\mathcal{W}(t)\vert\cdot\left[\Vert\langle\xi\rangle^{n-1}\gamma\Vert_{L^r}+\Vert \eta\gamma\Vert_{L^r}+\Vert \lambda\gamma\Vert_{L^r}\right], \end{split} \end{equation*} and using \eqref{eq:Wdecay} and the bootstrap hypothesis \eqref{eq:mom_btstrap_assump}, we obtain a uniform bound. 
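For the reader's convenience, let us record the Gr\"onwall step behind this statement (a sketch with untracked absolute constants $C,C'$, assuming, as is standard in such bootstrap arguments, that $\varepsilon_0\le\varepsilon_1\ll1$): by the bootstrap hypothesis \eqref{eq:mom_btstrap_assump} and the decay \eqref{eq:Wdecay}, the differential inequality above takes the form
\begin{equation*}
\frac{d}{dt}\Vert\langle\xi\rangle^n\gamma\Vert_{L^r}\lesssim\varepsilon_1^2\langle t\rangle^{-9/8}\Vert\langle\xi\rangle^n\gamma\Vert_{L^r}+\varepsilon_1^3\langle t\rangle^{-1-\frac{1}{4n}}\langle\ln(2+t)\rangle^{4m+1},
\end{equation*}
where both time factors are integrable on $[0,\infty)$, so that Gr\"onwall's inequality yields
\begin{equation*}
\Vert\langle\xi\rangle^n\gamma(t)\Vert_{L^r}\le e^{C\varepsilon_1^2}\Big(\varepsilon_0+C\varepsilon_1^3\int_0^\infty\langle s\rangle^{-1-\frac{1}{4n}}\langle\ln(2+s)\rangle^{4m+1}\,ds\Big)\le\varepsilon_0+C'\varepsilon_1^3,
\end{equation*}
consistent with \eqref{eq:mom_btstrap_claim1}.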
\paragraph{\textbf{Proof of \eqref{eq:mom_btstrap_claim2}}} For $1\leq n\leq 2 m$ we compute with \eqref{PBXV} that \begin{equation} \{\mathbb{H}_4,\lambda^{n}\}=n\lambda^{n-1}\{Q\psi(\widetilde{\X},t)+\mathcal{W}(t)\cdot\widetilde{\V},\lambda\}=n\lambda^{n-1}\left(Q\mathcal{E}(\widetilde{\X},t)\cdot ({\bf l}\times\widetilde{\X})-\mathcal{W}(t)\cdot({\bf l}\times\widetilde{\V})\right), \end{equation} and thus via \eqref{eq:mom-prop}, the decay estimates \eqref{eq:Wdecay} and \eqref{AlmostSharpDecayEF} as well as the bootstrap assumptions \eqref{eq:mom_btstrap_assump} and the uniform bounds \eqref{eq:mom_btstrap_claim1} for moments in $a$ just established, there holds that \begin{equation} \begin{aligned} \frac{d}{dt}\norm{\lambda\gamma}_{L^r}&\lesssim \nnorm{\vert\widetilde{\X}\vert\mathcal{E}(\widetilde{\bf X},t)}_{L^\infty}\norm{\gamma}_{L^r}+\abs{\mathcal{W}(t)}\norm{a\gamma}_{L^r}\lesssim\varepsilon_1^3\ip{t}^{-1}\ip{\ln(2+t)}, \end{aligned} \end{equation} and for $n\geq 2$ \begin{equation} \begin{aligned} \frac{d}{dt}\norm{\lambda^n\gamma}_{L^r}&\lesssim \nnorm{\vert\widetilde{\X}\vert\mathcal{E}(\widetilde{\bf X},t)}_{L^\infty}\norm{\lambda^{n-1}\gamma}_{L^r}+\abs{\mathcal{W}(t)}\norm{\lambda^{n-1}a\gamma}_{L^r}\\ &\lesssim \nnorm{\vert\widetilde{\X}\vert\mathcal{E}(\widetilde{\bf X},t)}_{L^\infty}\norm{\lambda^{n-1}\gamma}_{L^r}+\abs{\mathcal{W}(t)}\cdot\norm{\lambda^{n}\gamma}_{L^r}^\frac{n-1}{n}\Vert a^n\gamma\Vert_{L^r}^\frac{1}{n}\\ &\lesssim \varepsilon_1^3\ip{t}^{-1}\ip{\ln(2+t)}\cdot\ip{\ln(2+t)}^{2n-2}. \end{aligned} \end{equation} \paragraph{\textbf{Proof of \eqref{eq:mom_btstrap_claim3}}} Note that here we only propagate $L^\infty$ bounds. This is because the Poisson bracket bounds of \eqref{PBX1'} favor the formulation in terms of $\gamma$ on $\Omega_t^{far}$, while on $\Omega_t^{cl}$ the alternative formulation in terms of $\gamma'$ is advantageous. 
To this end, we note that \begin{equation} \begin{aligned} \norm{\eta^k\gamma}_{L^\infty}&\leq \norm{\eta^k\gamma}_{L^\infty(\Omega_t^{far})}+\norm{\eta^k\gamma}_{L^\infty(\Omega_t^{cl})}, \end{aligned} \end{equation} where by Lemma \ref{lem:prime_bds} and \eqref{eq:Wdecay} \begin{equation} \abs{\eta^k-(\eta')^k}\lesssim \varepsilon_1^2\cdot \left[\ip{\ln(2+t)}^k(1+a^2)^k+\min\{\eta,\eta'\}^k\right], \end{equation} and thus there holds that \begin{equation} \begin{aligned} \norm{\eta^k\gamma}_{L^\infty(\Omega_t^{cl})}&\leq \norm{[\eta^k-(\eta')^k]\gamma}_{L^\infty(\Omega_t^{cl})}+\norm{(\eta')^k\gamma}_{L^\infty(\Omega_t^{cl})}\\ &\lesssim \varepsilon_1^2\ip{\ln(2+t)}^k\norm{(1+a^2)^k\gamma}_{L^\infty(\Omega_t^{cl})}+2\norm{(\eta')^k\gamma}_{L^\infty(\Omega_t^{cl})}. \end{aligned} \end{equation} Furthermore, by invariance of $\Omega_t^{cl}$ under $\mathcal{M}_t$ (see \eqref{eq:XMinv}) and \eqref{eq:weightprimes} we have that \begin{equation}\label{eq:eta'gamma'} \norm{(\eta')^k\gamma}_{L^\infty(\Omega_t^{cl})}=\norm{((\eta')^k\gamma)\circ\mathcal{M}_t}_{L^\infty(\Omega_t^{cl})}=\norm{\eta^k\gamma'}_{L^\infty(\Omega_t^{cl})}. \end{equation} Similarly, the bootstrap assumptions \eqref{eq:mom_btstrap_assump} imply $\eta$ moment bounds also on $\gamma'$, namely \begin{equation}\label{eq:eta'gamma} \norm{(\eta')^k\gamma}_{L^\infty(\Omega_t^{cl})}\lesssim \varepsilon_1^2\ip{\ln(2+t)}^k\norm{(1+a^2)^k\gamma}_{L^\infty(\Omega_t^{cl})}+2\norm{\eta^k\gamma}_{L^\infty(\Omega_t^{cl})}. \end{equation} In conclusion, since trajectories can enter/exit the close/far regions at most once (Lemma \ref{lem:traj_cl_far}) and $\Omega_t^{far}\cap\Omega_t^{cl}=\{\ip{t}\leq \vert\widetilde{\X}\vert\leq 10\ip{t}\}$, to establish the claim it suffices to propagate $\eta$ moments on $\gamma'$ for trajectories in $\Omega_t^{cl}$, while for trajectories in $\Omega_t^{far}$ we work with $\gamma$ directly. 
\subsubsection*{On $\Omega_t^{far}$} With the Poisson bracket bounds \eqref{PBX1'}, we have for $1\leq k\leq m$ \begin{equation*} \begin{aligned} \abs{\{\mathbb{H}_4,\eta^{k}\}}&=k\eta^{k-1}\vert\{Q\psi(\widetilde{\X},t)+\mathcal{W}(t)\cdot\widetilde{\V},\eta\}\vert\\ &\lesssim \eta^{k-1}\big[\vert\mathcal{E}(\widetilde{\bf X},t)\vert(\xi^{-1}\vert\widetilde{\X}\vert+tq\xi^{-2})+\abs{\mathcal{W}(t)}(q\xi^{-2}+tq/(\xi\vert\widetilde{\X}\vert^2))\big]\\ &\lesssim \eta^{k-1}[a+a^2]\cdot\left[(t+\vert\widetilde{\bf X}\vert)\vert\mathcal{E}(\widetilde{\bf X},t)\vert+\vert\mathcal{W}(t)\vert\right], \end{aligned} \end{equation*} and using \eqref{AlmostSharpDecayEF}, \eqref{eq:Wdecay} and \eqref{eq:mom_btstrap_claim1} with the bootstrap assumption \eqref{eq:mom_btstrap_assump}, we see that on $\Omega^{far}_t$ \begin{equation}\label{eq:eta-f2} \begin{split} \frac{d}{dt}\vert \eta^k\gamma(t)\vert\lesssim \varepsilon_1^2\langle t\rangle^{-1}\langle\ln(2+t)\rangle\cdot \Vert\eta^k\gamma(t)\Vert_{L^\infty}^\frac{k-1}{k}\cdot\Vert \langle a\rangle^{2k}\gamma(t)\Vert_{L^\infty}^\frac{1}{k}&\lesssim\varepsilon_1^3\langle t\rangle^{-1}\langle\ln(2+t)\rangle^{2k-1}. \end{split} \end{equation} \subsubsection*{On $\Omega_t^{cl}$} On the other hand, we have that \begin{equation*} \begin{aligned} \abs{\{\mathbb{H}_4',\eta^{k}\}}&=k\eta^{k-1}\vert\{Q\psi(\widetilde{\bf X},t)-\dot{\mathcal{W}}(t)\cdot \widetilde{\X},\eta\}\vert \lesssim \eta^{k-1}(\abs{\mathcal{E}(t)}+\vert\dot{\mathcal{W}}(t)\vert)(\xi^{-1}\vert\widetilde{\X}\vert+tq\xi^{-2}), \end{aligned} \end{equation*} which will be used when $\vert\widetilde{\X}\vert\lesssim \ip{t}$. 
This gives as before \begin{equation}\label{eq:eta-c2} \begin{aligned} \frac{d}{dt}\abs{\eta^k\gamma'}&\lesssim \ip{t}[\nnorm{\mathcal{E}(t)}_{L^\infty}+\vert\dot{\mathcal{W}}(t)\vert]\cdot\Vert \eta^k\gamma^\prime(t)\Vert_{L^\infty(\Omega^{cl}_t)}^\frac{k-1}{k}\norm{\langle a\rangle^{2k}\gamma'(t)}^\frac{1}{k}_{L^\infty(\Omega^{cl}_t)}\\ &\lesssim \varepsilon_1^3\ip{t}^{-1}\ip{\ln(2+t)}^{2k-1}, \end{aligned} \end{equation} where we have also used \eqref{ComparableMomentNorms} and \eqref{eq:eta'gamma} to bound the norms of $\gamma^\prime$ in terms of bounds on $\gamma$. Finally, combining \eqref{eq:eta-f2} and \eqref{eq:eta-c2} gives the claim. \end{proof} \subsection{Derivative Propagation}\label{sec:derivs_prop} In this section we will propagate derivative control on our nonlinear unknowns. To exploit the symplectic structure, we do this by means of Poisson brackets with a suitable ``spanning'' set of functions $S$: \begin{definition} Let $S\subset C^1(\mathcal{P}_{\vartheta,{\bf a}},\mathbb{R})$ be a set of continuously differentiable, scalar functions on phase space. Then $S$ is a \emph{spanning} set if, at all points of $\mathcal{P}_{\vartheta,{\bf a}}$, the collection of associated symplectic gradients $\{f,\cdot\}$, $f\in S$, spans the cotangent space of $\mathcal{P}_{\vartheta,{\bf a}}$. \end{definition} We note that the functions in a spanning set $S$ need not be independent. By composition with the canonical diffeomorphism $\mathcal{T}^{-1}$ of Proposition \ref{PropAA}, we obtain an equivalent description in terms of the ``physical'' phase space $\mathcal{P}_{{\bf x},{\bf v}}$. Guided by the super-integrable coordinates (Section \ref{ssec:SIC}, see also equations \eqref{eq:deriv} and \eqref{eq:deriv'} below), we choose the collection $\textnormal{SIC}:=\{\xi,\eta,{\bf u},{\bf L}\}$ as spanning set. 
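To illustrate the redundancy permitted in this definition, note the dimension count (using only that $(\vartheta,{\bf a})$ ranges over an open subset of $\mathbb{R}^3\times\mathbb{R}^3$):
\begin{equation*}
\#\,\textnormal{SIC}=\underbrace{1}_{\xi}+\underbrace{1}_{\eta}+\underbrace{3}_{{\bf u}}+\underbrace{3}_{{\bf L}}=8>6=\dim\mathcal{P}_{\vartheta,{\bf a}},
\end{equation*}
so that the symplectic gradients $\{f,\cdot\}$, $f\in\textnormal{SIC}$, are necessarily linearly dependent; the definition only requires that at each point six of them span the cotangent space.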
For a suitable choice of weight functions $\mathfrak{w}_f>0$, $f\in \textnormal{SIC}$, defined in \eqref{eq:def_weights} below, we will then propagate derivative control through a functional of the form \begin{equation}\label{eq:defP} \mathcal{D}[\zeta](\vartheta,{\bf a};t):=\sum_{f\in \textnormal{SIC}}\mathfrak{w}_f\aabs{\{f,\zeta\}}(\vartheta,{\bf a}),\qquad \zeta\in\{\gamma,\gamma'\}. \end{equation} Due to the motion of the point charge, as in Section \ref{sec:moments_prop} we will work with both ``close'' and ``far'' formulations of our equations, and thus consider \begin{equation} \mathcal{D}(\vartheta,{\bf a};t):= \begin{cases} \mathcal{D}[\gamma](\vartheta,{\bf a};t),& (\vartheta,{\bf a})\in\Omega_t^{far},\\ \mathcal{D}[\gamma'](\vartheta,{\bf a};t),& (\vartheta,{\bf a})\in\Omega_t^{cl}. \end{cases} \end{equation} The remainder of this section then establishes the following result: \begin{proposition}\label{prop:global_derivs} There exists $\varepsilon_0>0$ such that the following holds: Let $\gamma(0)=\nu_0$ be given, satisfying the hypothesis \eqref{eq:mom_id} of Theorem \ref{thm:global_moments}, and let \begin{equation}\label{eq:def_weights} \mathfrak{w}_\xi=\frac{1+\xi}{\xi^2},\qquad \mathfrak{w}_\eta=\frac{\xi^2}{1+\xi}, \qquad \mathfrak{w}_{\bf u}=1,\qquad \mathfrak{w}_{\bf L}=\frac{\xi}{1+\xi}. \end{equation} Then there holds that if \begin{equation}\label{eq:deriv_btstrap_id} \norm{(\xi^{5}+\xi^{-8})\mathcal{D}(\vartheta,{\bf a};0)}_{L^\infty_{\vartheta,{\bf a}}}\lesssim \varepsilon_0, \end{equation} the global solution $\gamma$ of Theorem \ref{thm:global_moments} satisfies \begin{equation}\label{eq:global_derivs} \norm{\mathcal{D}(\vartheta,{\bf a};t)}_{L^\infty_{\vartheta,{\bf a}}}\lesssim \varepsilon_0\ln^2\ip{t}. \end{equation} \end{proposition} \begin{remark} The proof proceeds through control of $\mathcal{D}(\vartheta,{\bf a};t)$ along trajectories of \eqref{ODE}. 
\begin{enumerate} \item We note that all weights in \eqref{eq:def_weights} are functions exclusively of $\xi$, which is conserved along such trajectories. Consequently, additional weights in $\xi$ on $\mathcal{D}(\vartheta,{\bf a};t)$ can be propagated as well. \item The proof shows that a slightly weaker assumption than \eqref{eq:deriv_btstrap_id} (expressed in terms of another $\xi$-dependent weight function \eqref{eq:defUps}) suffices, and in fact gives a slightly stronger conclusion. \end{enumerate} \end{remark} \subsubsection{Proof setup} To understand the evolution of symplectic gradients on our nonlinear unknowns, we begin by observing that by the chain rule one has that for any functions $g_j:\mathcal{P}\to\mathbb{R}$, $1\leq j\leq 3$ and $G:\mathbb{R}^2\to\mathbb{R}$ there holds that \begin{equation} \{G(g_1,g_2),g_3\}=\partial_1 G\cdot\{g_1,g_3\}+\partial_2 G\cdot\{g_2,g_3\}. \end{equation} Using the Jacobi identity \eqref{Jacobi} and the equations \eqref{NewNLEq}, we thus find \begin{equation}\label{eq:deriv} \begin{split} \partial_t\{f,\gamma\}+\{\mathbb{H}_4,\{f,\gamma\}\}&=-\{\{f,\mathbb{H}_4\},\gamma\}=-Q\{\{f,\psi(\widetilde{\X})\},\gamma\}-\mathcal{W}_j\{\{f,\widetilde{\bf V}^j\},\gamma\}\\ &=-Q\mathcal{F}_{jk}\{\widetilde{\bf X}^j,\gamma\}\{f,\widetilde{\bf X}^k\}-Q\mathcal{E}_j\{\{f,\widetilde{\bf X}^j\},\gamma\}-\mathcal{W}_j\{\{f,\widetilde{\bf V}^j\},\gamma\}. \end{split} \end{equation} Alternatively, if we choose the formulation \eqref{NewNLEq'} with velocities centered on the point charge, we obtain \begin{equation}\label{eq:deriv'} \begin{split} \partial_t\{f,\gamma'\}+\{\mathbb{H}_4',\{f,\gamma'\}\}&=-\{\{f,\mathbb{H}_4'\},\gamma'\}=-Q\{\{f,\psi(\widetilde{\X})\},\gamma'\}+\dot{\mathcal{W}}_j\{\{f,\widetilde{\bf X}^j\},\gamma'\}\\ &=-Q\mathcal{F}_{jk}\{f,\widetilde{\bf X}^k\}\{\widetilde{\bf X}^j,\gamma'\}-\left(Q\mathcal{E}_j-\dot{\mathcal{W}}_j\right)\{\{f,\widetilde{\bf X}^j\},\gamma'\}. 
\end{split} \end{equation} Using \eqref{eq:U_resol}, we can then express \eqref{eq:deriv} as \begin{equation}\label{eq:deriv2} \partial_t\{f,\gamma\}+\{\mathbb{H}_4,\{f,\gamma\}\}=\sum_{g\in \textnormal{SIC}}\mathfrak{m}_{fg}\{g,\gamma\}, \end{equation} for a suitable coefficient matrix $(\mathfrak{m}_{fg})_{f,g\in \textnormal{SIC}}$, and similarly for \eqref{eq:deriv'}: \begin{equation}\label{eq:deriv'2} \partial_t\{f,\gamma'\}+\{\mathbb{H}'_4,\{f,\gamma'\}\}=\sum_{g\in \textnormal{SIC}}\mathfrak{m}'_{fg}\{g,\gamma'\}. \end{equation} In addition to the distinction between ``far'' and ``close'' dynamics and the corresponding formulations \eqref{eq:deriv2} and \eqref{eq:deriv'2}, we will separate ``incoming'' from ``outgoing'' dynamics through the decomposition $\mathcal{P}_{\vartheta,{\bf a}}=\mathcal{I}_t\cup\mathcal{O}_t$ with \begin{equation}\label{eq:inoutdomains} \mathcal{I}_t:=\{(\vartheta,{\bf a}):\widetilde{\X}\cdot\widetilde{\V}\le (L^2+\frac{q^2}{a^2})^{1/2}\},\qquad \mathcal{O}_t:=\{(\vartheta,{\bf a}):\widetilde{\X}\cdot\widetilde{\V}\ge -(L^2+\frac{q^2}{a^2})^{1/2}\}. \end{equation} This reflects the fact that along a trajectory of \eqref{ODE} the asymptotic velocities at $\pm\infty$ may differ drastically in direction. As a consequence, on $\mathcal{I}_t$ the location $\widetilde{\X}$ along a trajectory is better approximated by the \emph{past} asymptotic action, and there we will thus work with the \emph{past} angle-action coordinates of Section \ref{ssec:past_AA}. Moreover, the splitting is chosen such that periapsis occurs in $\mathcal{I}_t\cap\mathcal{O}_t$: recall that the time of periapsis $t_p(\vartheta_0,{\bf a}_0)\in\mathbb{R}$ of a trajectory of \eqref{ODE} starting at a given $(\vartheta_0,{\bf a}_0)$ is such that \begin{equation} \widetilde{\X}\cdot\widetilde{\V}\,(\vartheta_0,{\bf a}_0)=0\quad\hbox{ when }t=t_p(\vartheta_0,{\bf a}_0). 
\end{equation} We thus have four dynamically relevant regions of phase space $\mathcal{P}_{\vartheta,{\bf a}}=\mathcal{I}_t\cup\mathcal{O}_t=\Omega_t^{far}\cup\Omega_t^{cl}$: \begin{equation}\label{eq:regions} \mathcal{I}_t^{far}:=\mathcal{I}_t\cap\Omega_t^{far},\quad \mathcal{O}_t^{far}:=\mathcal{O}_t\cap\Omega_t^{far},\quad \mathcal{I}_t^{cl}:=\mathcal{I}_t\cap\Omega_t^{cl},\quad \mathcal{O}_t^{cl}:=\mathcal{O}_t\cap\Omega_t^{cl}. \end{equation} \begin{remark}\label{rem:regions} \begin{enumerate} \item The four regions of \eqref{eq:regions} have overlap. \item General trajectories of \eqref{ODE} can visit all four dynamically distinct regions: starting far away from periapsis and incoming in $\mathcal{I}_t^{far}$, a trajectory with sufficiently large velocity $a\gg 1$ (i.e.\ $\xi\ll 1$) will proceed to $\mathcal{I}_t^{cl}$ and through $\mathcal{O}_t^{cl}$ to end up in $\mathcal{O}_t^{far}$. \item If at initial time our unknown has compact support in phase space, we can drastically simplify the proof (at the cost of making our assumptions dependent on the radius of the initial support) and work with only one formulation: For the forward in time evolution it then suffices to consider future asymptotic actions, and if we redefine the close resp.\ far regions to account for the largest and smallest asymptotic actions of the initial data, we can reduce to the case where all trajectories lie in either $\mathcal{O}_t^{far}$ or $\mathcal{O}_t^{cl}$. \end{enumerate} \end{remark} \begin{proof}[Proof of Proposition \ref{prop:global_derivs}] To establish Proposition \ref{prop:global_derivs}, we control $\mathcal{D}(\vartheta,{\bf a};t)$ pointwise along trajectories of \eqref{ODE}. 
For a given trajectory, we do this separately in the four phase space regions defined in \eqref{eq:regions}, and combine with bounds for the transition between the coordinates/formulations: \begin{itemize} \item Incoming to outgoing (from $\mathcal{I}_t$ to $\mathcal{O}_t$): We work with incoming asymptotic actions on $\mathcal{I}_t=\mathcal{I}_t^{cl}\cup\mathcal{I}_t^{far}$, and outgoing asymptotic actions on $\mathcal{O}_t=\mathcal{O}_t^{cl}\cup\mathcal{O}_t^{far}$. If a trajectory passes through periapsis (which it can do at most once), we transition between coordinates there. Hereby, we recall from \eqref{eq:def_SIC(-)} that $\xi^{(-)}=-\xi$ and $\lambda^{(-)}=\lambda$, so that in particular the splitting in \eqref{eq:inoutdomains} is well-defined. By Lemma \ref{lem:trans_inout} below we have that \begin{equation} \mathcal{D}[\zeta](\vartheta,{\bf a};t_p(\vartheta,{\bf a}))\lesssim (\xi^2+\xi^{-2}) \mathcal{D}[\zeta](\vartheta^{(-)},{\bf a}^{(-)};t_p(\vartheta,{\bf a})),\qquad \zeta\in\{\gamma,\gamma'\}. \end{equation} \item Far to close, or close to far (between $\Omega_t^{far}$ and $\Omega_t^{cl}$): We recall that since $\gamma'=\gamma\circ\mathcal{M}_t$ are related by the canonical diffeomorphism $\mathcal{M}_t$ of \eqref{eq:defM_t}, their Poisson brackets are related by \eqref{eq:PBrel'}, and we have in particular that \begin{equation}\label{eq:defP'} \mathcal{D}[\gamma']=\mathcal{D}'[\gamma]\circ\mathcal{M}_t,\qquad \mathcal{D}'[\gamma]:=\sum_{f\in \textnormal{SIC}}\mathfrak{w}_{f'}\aabs{\{f',\gamma\}}, \end{equation} where we used the notation of \eqref{eq:weightprimes} to denote $f':=f\circ\mathcal{M}_t^{-1}$. 
We recall further that $\Omega_t^{cl}$, $\Omega_t^{far}$ and the transition region $\Omega_t^{cl}\cap\Omega_t^{far}$ are left invariant by $\mathcal{M}_t$, and by Lemma \ref{lem:trans_short} below the weights $\xi$ and $\xi'$ are comparable on $\Omega_t^{cl}$, and there holds that \begin{equation} \mathcal{D}[\gamma](t)\lesssim (\xi +\xi^{-3})\mathcal{D}'[\gamma](t)\lesssim (\xi +\xi^{-3})^2\mathcal{D}[\gamma](t) \quad \textnormal{on }\Omega_t^{cl}\cap\Omega_t^{far}. \end{equation} We recall also that, by Lemma \ref{lem:traj_cl_far}, at most two transitions between the close and far regions are possible per trajectory. \end{itemize} We highlight that the weight functions $\mathfrak{w}_f$, $f\in \textnormal{SIC}$, depend solely on $\xi$, which is constant along a trajectory of \eqref{ODE}. The above ``losses'' in $\xi$ are thus only incurred at the transitions, and propagated along the trajectories otherwise. To properly account for this, we introduce the following function that tracks the $\xi$ weights that will be used for transition: \begin{equation}\label{eq:defUps} \Upsilon:\mathcal{P}_{\vartheta,{\bf a}}\to\mathbb{R},\quad (\vartheta,{\bf a})\mapsto \ip{\xi}\cdot \begin{cases} (\xi^2+\xi^{-2})(\xi+\xi^{-3})^2,\qquad &\textnormal{on }\mathcal{I}_t^{far},\\ (\xi^2+\xi^{-2})(\xi+\xi^{-3}),\qquad &\textnormal{on }\mathcal{I}_t^{cl},\\ (\xi+\xi^{-3}),\qquad &\textnormal{on }\mathcal{O}_t^{cl},\\ 1,\qquad &\textnormal{on }\mathcal{O}_t^{far}. 
\end{cases} \end{equation} Since \eqref{eq:deriv_btstrap_id} implies that $\nnorm{\Upsilon\mathcal{D}(0)}_{L^\infty}\lesssim\varepsilon_0$, to establish Proposition \ref{prop:global_derivs} it then suffices to show the following bootstrap argument: if for some $0<\delta\ll 1$ and times $0\leq T_1<T_2$ there holds, for $t\in[T_1,T_2]$, that \begin{equation}\label{eq:deriv_btstrap_assump} \norm{\Upsilon\mathcal{D}(t)}_{L^\infty_{\vartheta,{\bf a}}}\lesssim \varepsilon\ip{t}^\delta, \end{equation} then in fact \begin{equation}\label{eq:deriv_btstrap_concl} \norm{\Upsilon\mathcal{D}(t)}_{L^\infty_{\vartheta,{\bf a}}}\lesssim \varepsilon+\varepsilon^2\ln(t/T_1). \end{equation} We note that while by construction \eqref{eq:deriv_btstrap_assump} only gives control of $\mathcal{D}[\gamma]$ on $\Omega_t^{far}$ resp.\ $\mathcal{D}[\gamma']$ on $\Omega_t^{cl}$, under the assumption \eqref{eq:deriv_btstrap_assump} we obtain from Lemma \ref{lem:trans_full} via \eqref{eq:PBcl_byfar} resp.\ \eqref{eq:PBcl_byfar_bulk} bounds for $\mathcal{D}[\gamma]$ across \emph{both} far and close regions: \begin{equation}\label{eq:deriv_bstrap_assump2} \mathfrak{1}_{\mathcal{B}}\mathcal{D}[\gamma]\lesssim \varepsilon\ip{t}^\delta,\qquad \mathfrak{1}_{\mathcal{B}^c}\mathcal{D}[\gamma]\lesssim \varepsilon (1+\ip{t}q\xi^{-2})\ip{t}^\delta. \end{equation} In particular, by Proposition \ref{PropEeff} $(ii)$ it then holds that \begin{equation}\label{eq:EFdecay0} \sqrt{t^2+\abs{{\bf y}}^2}\abs{\mathcal{E}({\bf y},t)}\lesssim \varepsilon^2\ip{t}^{-1},\quad \sup_{{\bf y}\in\mathbb{R}^3}\left[t^2+\abs{{\bf y}}^2\right]\abs{\mathcal{F}({\bf y},t)}\lesssim \varepsilon^2\ln\ip{t}\cdot\ip{t}^{-1}. \end{equation} We note that the expressions for the kinematically relevant quantities $\widetilde{\X},\widetilde{\V}$ (and thus also for their derivatives) agree up to signs in both incoming and outgoing asymptotic actions (see \eqref{XVinSIC(-)}), and can thus be treated analogously. 
We will henceforth focus on the case of outgoing asymptotic actions; the diligent reader can easily trace the necessary adaptations for the incoming asymptotic actions. To complete the proof it thus suffices to provide a bootstrap argument showing that if a trajectory stays in $\mathcal{O}_t^{far}$ resp.\ $\mathcal{O}_t^{cl}$ on a time interval $t\in[T_1,T_2]$, its weighted symplectic gradients $\mathcal{D}[\gamma]$ resp.\ $\mathcal{D}'[\gamma]$ grow by at most a factor of $1+\ln^2(t/T_1)$. This is done in Lemmas \ref{lem:farprop} resp.\ \ref{lem:clprop} below. Once these are established (see Sections \ref{sec:prop_far} and \ref{sec:prop_cl} below), the proof of Proposition \ref{prop:global_derivs} is thus complete. \end{proof} As a result of a direct computation (see Section \ref{ssec:transitions}), we have the following. \begin{lemma}[Transition from incoming to outgoing]\label{lem:trans_inout} Consider a trajectory of \eqref{ODE} starting at $(\vartheta,{\bf a})$, with periapsis $t_p(\vartheta,{\bf a})$. Then \begin{equation} \mathcal{D}[\zeta](\vartheta,{\bf a};t_p(\vartheta,{\bf a}))\lesssim (\xi^2+\xi^{-2})\mathcal{D}[\zeta](\vartheta^{(-)},{\bf a}^{(-)};t_p(\vartheta,{\bf a})),\qquad \zeta\in\{\gamma,\gamma'\}. \end{equation} \end{lemma} \begin{proof} This follows directly by collecting the terms in \eqref{eq:inout_PBtrans}. \end{proof} \begin{lemma}[Transition between close and far]\label{lem:trans_short} Under the assumptions of Proposition \ref{prop:global_derivs}, consider outgoing (resp.\ incoming) asymptotic actions on $\mathcal{O}_t$ (resp.\ $\mathcal{I}_t$). Then on $\Omega_t^{cl}$ there holds that for some $C>0$ \begin{equation}\label{eq:xi-xi'-comp} C^{-1}\xi'\leq\xi\leq C\xi', \end{equation} and on $\Omega_t^{cl}\cap\Omega_t^{far}$ we have that \begin{equation}\label{eq:deriv-deriv-PB} \mathcal{D}'[\gamma](t)\lesssim (\xi +\xi^{-3})\mathcal{D}[\gamma](t),\qquad \mathcal{D}[\gamma](t)\lesssim (\xi +\xi^{-3})\mathcal{D}'[\gamma](t). 
\end{equation} \end{lemma} \begin{proof} We note that by construction the expressions for $\gamma$ in terms of $\gamma'$ and vice versa are analogous (see \eqref{eq:defM_t} and observe also that $\gamma=\gamma'\circ\mathcal{M}_t^{-1}$, where $\mathcal{M}_t^{-1}=\mathcal{T}\circ\Phi_t\circ\Sigma_{-\mathcal{W}(t)}\circ\Phi_t^{-1}\circ\mathcal{T}^{-1}$) and thus once \eqref{eq:xi-xi'-comp} is established, by symmetry it suffices to prove the first estimate of \eqref{eq:deriv-deriv-PB}. This follows from Lemma \ref{lem:trans_full} of Appendix \ref{sec:appdx_trans} (see also $(1)$ of Remark \ref{rem:transition_bds}), where we prove more precise estimates for the relation between Poisson brackets in the close and far formulation. \end{proof} \subsubsection{The coefficients $\mathfrak{m}_{fg}$}\label{ssec:mfg} Since the kinematic quantities $\widetilde{\X}$ and $\widetilde{\V}$ are naturally expressed in terms of the slightly larger spanning set $\textnormal{SIC}\cup\{\lambda\}=:\textnormal{SIC}_+$, which we also used in the computation of Poisson bracket bounds in Section \ref{sec:PB_bounds}, it is convenient to carry on using $\lambda$ explicitly for computations. We note from Remark \ref{PBWithLambdaRmk} that Poisson brackets with $\lambda$ follow directly from those with ${\bf L}$. Using \eqref{eq:U_resol} we express the right hand side of \eqref{eq:deriv} in the form \eqref{eq:deriv2}. 
This yields that for $f\in\textnormal{SIC}$ we have that \begin{equation} \begin{aligned} -\mathfrak{m}_{f\xi}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\eta\}\{f,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{f,\widetilde{\X}^j\},\eta\}+\mathcal{W}_j\{\{f,\widetilde{\V}^j\},\eta\},\\ \mathfrak{m}_{f\eta}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\xi\}\{f,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{f,\widetilde{\X}^j\},\xi\}+\mathcal{W}_j\{\{f,\widetilde{\V}^j\},\xi\},\\ -\mathfrak{m}_{f\lambda}&=Q\mathcal{F}_{jk}(\partial_\lambda\widetilde{\X}^j)\{f,\widetilde{\X}^k\}+Q\mathcal{E}_j\partial_\lambda(\{f,\widetilde{\X}^j\})+\mathcal{W}_j\partial_\lambda(\{f,\widetilde{\V}^j\}),\\ -\mathfrak{m}_{f{\bf u}^a}&=Q\mathcal{F}_{jk}(\mathbb{P}_{\bf u}^a\widetilde{\bf X}^j)\{f,\widetilde{\X}^k\}+Q\mathcal{E}_j\mathbb{P}_{{\bf u}}^a\{f,\widetilde{\X}^j\}+\mathcal{W}_j\mathbb{P}_{{\bf u}}^a\{f,\widetilde{\V}^j\},\\ -\mathfrak{m}_{f{\bf L}^a}&=Q\mathcal{F}_{jk}(\mathbb{P}^a_{\bf L}\widetilde{\bf X}^j)\{f,\widetilde{\X}^k\}+Q\mathcal{E}_j\cdot\mathbb{P}_{{\bf L}}^a\{f,\widetilde{\X}^j\}+\mathcal{W}_j\mathbb{P}_{{\bf L}}^a\{f,\widetilde{\V}^j\}, \end{aligned} \end{equation} where we used the notation $\mathbb{P}_{\bm{U}}^a$ to denote the $a$-th component of the projection onto a vector\footnote{With the understanding that $\mathbb{P}_{\bf u}^a(X_3\in^{jcd}{\bf L}^c{\bf u}^d)=X_3\in^{jcd}{\bf L}^c\delta^{ad}$ and $\mathbb{P}_{\bf L}^a(X_3\in^{jcd}{\bf L}^c{\bf u}^d)=X_3\in^{jcd}{\bf u}^d\delta^{ac}$.} $\bm{U}\in\mathbb{R}^3$. 
We thus obtain the following expressions for the multipliers: \begin{itemize}[wide] \item In $\xi$: \begin{equation}\label{eq:mult_xi} \begin{aligned} -\mathfrak{m}_{\xi\xi}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\eta\}\{\xi,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{\xi,\widetilde{\X}^j\},\eta\}+\mathcal{W}_j\{\{\xi,\widetilde{\V}^j\},\eta\},\\ \mathfrak{m}_{\xi\eta}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\xi\}\{\xi,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{\xi,\widetilde{\X}^j\},\xi\}+\mathcal{W}_j\{\{\xi,\widetilde{\V}^j\},\xi\},\\ -\mathfrak{m}_{\xi\lambda}&=Q\mathcal{F}_{jk}(\partial_\lambda\widetilde{\X}^j)\{\xi,\widetilde{\X}^k\}+Q\mathcal{E}_j\partial_\lambda(\{\xi,\widetilde{\X}^j\})+\mathcal{W}_j\partial_\lambda(\{\xi,\widetilde{\V}^j\}),\\ -\mathfrak{m}_{\xi{\bf u}^a}&=Q[\mathcal{F}_{ak}\widetilde{X}_1+\mathcal{F}_{jk}\widetilde{X}_3\in^{jca}{\bf L}^c]\{\xi,\widetilde{\X}^k\}+Q[\mathcal{E}_a\{\xi,\widetilde{X}_1\}+\mathcal{E}_j\{\xi,\widetilde{X}_3\}\in^{jca}{\bf L}^c]\\ &\qquad+[\mathcal{W}_a\{\xi,\widetilde{V}_1\}+\mathcal{W}_j\{\xi,\widetilde{V}_3\}\in^{jca}{\bf L}^c],\\ -\mathfrak{m}_{\xi{\bf L}^a}&=Q\mathcal{F}_{jk}\widetilde{X}_3\in^{jad}{\bf u}^d\{\xi,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\xi,\widetilde{X}_3\}\in^{jad}{\bf u}^d+\mathcal{W}_j\{\xi,\widetilde{V}_3\}\in^{jad}{\bf u}^d. 
\end{aligned} \end{equation} \item In $\eta$: \begin{equation}\label{eq:mult_eta} \begin{aligned} - \mathfrak{m}_{\eta\xi}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\eta\}\{\eta,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{\eta,\widetilde{\X}^j\},\eta\}+\mathcal{W}_j\{\{\eta,\widetilde{\V}^j\},\eta\},\\ \mathfrak{m}_{\eta\eta}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\xi\}\{\eta,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{\eta,\widetilde{\X}^j\},\xi\}+\mathcal{W}_j\{\{\eta,\widetilde{\V}^j\},\xi\}=-\mathfrak{m}_{\xi\xi},\\ -\mathfrak{m}_{\eta\lambda}&=Q\mathcal{F}_{jk}(\partial_\lambda\widetilde{\X}^j)\{\eta,\widetilde{\X}^k\}+Q\mathcal{E}_j\partial_\lambda(\{\eta,\widetilde{\X}^j\})+\mathcal{W}_j\partial_\lambda(\{\eta,\widetilde{\V}^j\}),\\ -\mathfrak{m}_{\eta{\bf u}^a}&=Q[\mathcal{F}_{ak}\widetilde{X}_1+\mathcal{F}_{jk}\widetilde{X}_3\in^{jca}{\bf L}^c]\{\eta,\widetilde{\X}^k\}+Q[\mathcal{E}_a\{\eta,\widetilde{X}_1\}+\mathcal{E}_j\{\eta,\widetilde{X}_3\}\in^{jca}{\bf L}^c]\\ &\qquad+[\mathcal{W}_a\{\eta,\widetilde{V}_1\}+\mathcal{W}_j\{\eta,\widetilde{V}_3\}\in^{jca}{\bf L}^c],\\ -\mathfrak{m}_{\eta{\bf L}^a}&=Q\mathcal{F}_{jk}\widetilde{X}_3\in^{jad}{\bf u}^d\{\eta,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\eta,\widetilde{X}_3\}\in^{jad}{\bf u}^d+\mathcal{W}_j\{\eta,\widetilde{V}_3\}\in^{jad}{\bf u}^d. 
\end{aligned} \end{equation} \item In ${\bf u}$: Noting that by \eqref{eq:U_resol} and \eqref{PBSIC}, we have that \begin{equation}\label{eq:PBuX} \begin{aligned} \{{\bf u}^k,\widetilde{\X}^j\}&=\partial_\lambda\widetilde{\X}^j\in^{krs}{\bf l}^r{\bf u}^s+\widetilde{X}_3(\delta^{jk}-{\bf u}^j{\bf u}^k)\\ &=\partial_\lambda\widetilde{X}_1{\bf u}^j\in^{krs}{\bf l}^r{\bf u}^s+\lambda\partial_\lambda\widetilde{X}_3\in^{jpq}\in^{krs}{\bf l}^p{\bf u}^q{\bf l}^r{\bf u}^s+\widetilde{X}_3(\delta^{jk}-{\bf u}^j{\bf u}^k), \end{aligned} \end{equation} there holds that \begin{equation}\label{eq:mult_u} \begin{aligned} -\mathfrak{m}_{{\bf u}^a\xi}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\eta\}\{{\bf u}^a,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{{\bf u}^a,\widetilde{\X}^j\},\eta\}+\mathcal{W}_j\{\{{\bf u}^a,\widetilde{\V}^j\},\eta\},\\ \mathfrak{m}_{{\bf u}^a\eta}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\xi\}\{{\bf u}^a,\widetilde{\X}^k\}+Q\mathcal{E}_j\{\{{\bf u}^a,\widetilde{\X}^j\},\xi\}+\mathcal{W}_j\{\{{\bf u}^a,\widetilde{\V}^j\},\xi\},\\ -\mathfrak{m}_{{\bf u}^a\lambda}&=Q\mathcal{F}_{jk}(\partial_\lambda\widetilde{\X}^j)\{{\bf u}^a,\widetilde{\X}^k\}+Q\mathcal{E}_j\partial_\lambda(\{{\bf u}^a,\widetilde{\X}^j\})+\mathcal{W}_j\partial_\lambda(\{{\bf u}^a,\widetilde{\V}^j\}),\\ -\mathfrak{m}_{{\bf u}^{a}{\bf u}^b}&=Q[\mathcal{F}_{bk}\widetilde{X}_1+\mathcal{F}_{jk}\widetilde{X}_3\in^{jcb}{\bf L}^c]\{{\bf u}^a,\widetilde{\X}^k\}\\ &\qquad+Q[\mathcal{E}_b\partial_\lambda\widetilde{X}_1\in^{ars}{\bf l}^r{\bf u}^s+\mathcal{E}_j\partial_\lambda\widetilde{X}_1{\bf u}^j\in^{arb}{\bf l}^r]\\ &\qquad+Q\mathcal{E}_j\lambda\partial_\lambda\widetilde{X}_3{\bf l}^p{\bf l}^r(\in^{jpb}\in^{ars}{\bf u}^s+\in^{jpq}\in^{arb}{\bf u}^q)-Q\widetilde{X}_3(\mathcal{E}_b{\bf u}^a+\mathcal{E}_j{\bf u}^j\delta^{ab})\\ &\qquad+[\mathcal{W}_b\partial_\lambda\widetilde{V}_1\in^{ars}{\bf l}^r{\bf u}^s+\mathcal{W}_j\partial_\lambda\widetilde{V}_1{\bf u}^j\in^{arb}{\bf l}^r]\\ 
&\qquad+\mathcal{W}_j\lambda\partial_\lambda\widetilde{V}_3{\bf l}^p{\bf l}^r(\in^{jpb}\in^{ars}{\bf u}^s+\in^{jpq}\in^{arb}{\bf u}^q)-\widetilde{V}_3(\mathcal{W}_b{\bf u}^a+\mathcal{W}_j{\bf u}^j\delta^{ab}),\\ -\mathfrak{m}_{{\bf u}^{a}{\bf L}^b}&=Q\mathcal{F}_{jk}\widetilde{X}_3\in^{jbd}{\bf u}^d\{{\bf u}^a,\widetilde{\X}^k\}+Q\mathcal{E}_j\lambda^{-1}\partial_\lambda\widetilde{X}_1{\bf u}^j\in^{abs}{\bf u}^s\\ &\qquad +Q\mathcal{E}_j\partial_\lambda\widetilde{X}_3{\bf u}^q{\bf u}^s(\in^{jbq}\in^{ars}{\bf l}^r+\in^{jpq}\in^{abs}{\bf l}^p)\\ &\qquad +\mathcal{W}_j\lambda^{-1}\partial_\lambda\widetilde{V}_1{\bf u}^j\in^{abs}{\bf u}^s\\ &\qquad +\mathcal{W}_j\partial_\lambda\widetilde{V}_3{\bf u}^q{\bf u}^s(\in^{jbq}\in^{ars}{\bf l}^r+\in^{jpq}\in^{abs}{\bf l}^p). \end{aligned} \end{equation} \item In ${\bf L}$: Using that $\{{\bf L}^k,\widetilde{\X}^j\}=\in^{kjp}\widetilde{\X}^p$ and $\{{\bf L}^k,\widetilde{\V}^j\}=\in^{kjp}\widetilde{\V}^p$ we have that \begin{equation}\label{eq:mult_L} \begin{aligned} - \mathfrak{m}_{{\bf L}^a\xi}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\eta\}\in^{akp}\widetilde{\X}^p+Q\mathcal{E}_j\in^{ajp}\{\widetilde{\X}^p,\eta\}+\mathcal{W}_j\in^{ajp}\{\widetilde{\V}^p,\eta\}, \\ \mathfrak{m}_{{\bf L}^a\eta}&=Q\mathcal{F}_{jk}\{\widetilde{\X}^j,\xi\}\in^{akp}\widetilde{\X}^p+Q\mathcal{E}_j\in^{ajp}\{\widetilde{\X}^p,\xi\}+\mathcal{W}_j\in^{ajp}\{\widetilde{\V}^p,\xi\}, \\ - \mathfrak{m}_{{\bf L}^a\lambda}&=Q\mathcal{F}_{jk}\partial_\lambda\widetilde{\X}^j\in^{akp}\widetilde{\X}^p+Q\mathcal{E}_j\in^{ajp}\partial_\lambda\widetilde{\X}^p+\mathcal{W}_j\in^{ajp}\partial_\lambda\widetilde{\V}^p,\\ -\mathfrak{m}_{{\bf L}^a{\bf u}^b}&=Q[\mathcal{F}_{bk}\widetilde{X}_1+\mathcal{F}_{jk}\widetilde{X}_3\in^{jcb}{\bf L}^c]\in^{akp}\widetilde{\X}^p\\ &\qquad + Q\mathcal{E}_j[\in^{ajb}\widetilde{X}_1+\in^{ajp}\widetilde{X}_3\in^{pcb}{\bf L}^c]+ \mathcal{W}_j[\in^{ajb}\widetilde{V}_1+\in^{ajp}\widetilde{V}_3\in^{pcb}{\bf L}^c],\\ -\mathfrak{m}_{{\bf L}^a{\bf 
L}^b}&=Q\mathcal{F}_{jk}\widetilde{X}_3\in^{jbd}{\bf u}^d\in^{akp}\widetilde{\X}^p+Q\widetilde{X}_3(\delta^{ab}\mathcal{E}_j{\bf u}^j-\mathcal{E}_b{\bf u}^a)+\widetilde{V}_3(\delta^{ab}\mathcal{W}_j{\bf u}^j-\mathcal{W}_b{\bf u}^a). \end{aligned} \end{equation} \end{itemize} \paragraph{The coefficients $\mathfrak{m}'_{fg}$} Inspecting \eqref{eq:deriv'} and comparing with \eqref{eq:deriv}, we see that the coefficients $\mathfrak{m}'_{fg}$ can be read off from $\mathfrak{m}_{fg}$ by ignoring the terms in $\mathcal{W}$ and replacing $Q\mathcal{E}_j$ by $Q\mathcal{E}_j-\dot{\mathcal{W}}_j$. \subsubsection{Propagation in the far formulation}\label{sec:prop_far} We now demonstrate how derivative control as in \eqref{eq:defP} can be propagated in the far region $\Omega_t^{far}=\{\aabs{\widetilde{\X}}\geq \ip{t}\}$. As it turns out, Poisson brackets with respect to $\xi, {\bf u}$ have more favorable bounds than those with respect to $\eta,{\bf L}$. We thus define \begin{equation}\label{SplittingGUDer} \begin{split} \mathcal{G}(\vartheta,{\bf a};t)&:=\mathfrak{w}_\xi\aabs{\{\xi,\gamma\}}(\vartheta,{\bf a};t)+\mathfrak{w}_{\bf u}\aabs{\{{\bf u},\gamma\}}(\vartheta,{\bf a};t),\\ \mathcal{U}(\vartheta,{\bf a};t)&:=\mathfrak{w}_\eta\aabs{\{\eta,\gamma\}}(\vartheta,{\bf a};t)+\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\gamma\}}(\vartheta,{\bf a};t), \end{split} \end{equation} for the weight functions $\mathfrak{w}_f$, $f\in \textnormal{SIC}$, defined in \eqref{eq:def_weights}. We then have: \begin{lemma}\label{lem:farprop} Assume that $(\vartheta,{\bf a})$ is such that for some $0\leq T_1< T_2$ we have $(\vartheta,{\bf a})\in\mathcal{O}_t^{far}$ for $T_1\leq t\leq T_2$. Assume also that $\gamma$ is a solution of \eqref{eq:deriv2} defined on the time interval $[T_1,T_2]$ satisfying for some $0<\delta\ll 1$ \begin{equation} \mathcal{D}[\gamma](\vartheta,{\bf a};t)\lesssim\varepsilon\ip{t}^\delta, \end{equation} and the decay estimate \eqref{eq:EFdecay0} holds, i.e. 
\begin{equation}\label{eq:EFdecay} \sqrt{t^2+\abs{{\bf y}}^2}\abs{\mathcal{E}({\bf y},t)}\lesssim \varepsilon^2\ip{t}^{-1},\quad \sup_{{\bf y}\in\mathbb{R}^3}\left[t^2+\abs{{\bf y}}^2\right]\abs{\mathcal{F}({\bf y},t)}\lesssim \varepsilon^2\ln\ip{t}\cdot\ip{t}^{-1}. \end{equation} Then, with $\mathcal{G}$, $\mathcal{U}$ defined in \eqref{SplittingGUDer}, \begin{equation}\label{eq:dtBG_far} \begin{aligned} &\frac{d}{dt} \mathcal{G}(\vartheta,{\bf a};t)\lesssim \varepsilon^2 \ip{t}^{-5/4}[\mathcal{G}(\vartheta,{\bf a};t)+\mathcal{U}(\vartheta,{\bf a};t)],\\ &\frac{d}{dt} \mathcal{U}(\vartheta,{\bf a};t)\lesssim \varepsilon^2 \ln\ip{t}\cdot \ip{t}^{-1}\mathcal{G}(\vartheta,{\bf a};t)+\varepsilon^2 \ip{t}^{-5/4}\mathcal{U}(\vartheta,{\bf a};t), \end{aligned} \end{equation} and thus \begin{equation} \mathcal{G}(\vartheta,{\bf a};t)\lesssim\varepsilon,\qquad \mathcal{U}(\vartheta,{\bf a};t)\lesssim\varepsilon+\varepsilon^2 \ln^2(t/T_1). \end{equation} \begin{proof} By construction, we have that \begin{equation} \frac{d}{dt}\mathfrak{w}_f\{f,\gamma\}=\sum_{f,g\in\textnormal{SIC}}\mathfrak{m}_{fg}\frac{\mathfrak{w}_f}{\mathfrak{w}_g}\cdot\mathfrak{w}_g\{g,\gamma\}, \end{equation} and to prove the claim it thus suffices to suitably bound the expressions $\mathfrak{m}_{fg}\cdot (\mathfrak{w}_f/\mathfrak{w}_g)$. For this we will use the Poisson bracket bounds on $\widetilde{\X}$ and $\widetilde{\V}$ established in Section \ref{sec:PB_bounds}, in particular also the refinements in outgoing asymptotic actions (see Lemma \ref{lem:betterdl}), the decay estimates \eqref{eq:EFdecay} and the fact that $\aabs{\widetilde{\X}}\gtrsim\ip{t}$. 
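Before bounding the individual coefficients, we record how the integrated bounds follow once \eqref{eq:dtBG_far} is available; this is a standard Gr\"onwall argument, which we sketch under the simplifying normalization $T_1\lesssim 1$ (so that $\ln(t/T_1)\simeq\ln\ip{t}$). Inserting the a priori bound $\mathcal{G}+\mathcal{U}\lesssim\mathcal{D}[\gamma]\lesssim\varepsilon\ip{\tau}^\delta$ into the first line of \eqref{eq:dtBG_far} gives
\begin{equation*}
\mathcal{G}(t)\leq\mathcal{G}(T_1)+C\varepsilon^2\int_{T_1}^t\ip{\tau}^{-\frac54}[\mathcal{G}+\mathcal{U}](\tau)\,d\tau\lesssim\varepsilon+\varepsilon^3\int_{T_1}^t\ip{\tau}^{-\frac54+\delta}\,d\tau\lesssim\varepsilon,
\end{equation*}
and with $\mathcal{G}\lesssim\varepsilon$ in hand, Gr\"onwall's inequality applied to the second line yields
\begin{equation*}
\mathcal{U}(t)\lesssim\Big(\mathcal{U}(T_1)+\varepsilon^3\int_{T_1}^t\frac{\ln\ip{\tau}}{\ip{\tau}}\,d\tau\Big)\exp\Big(C\varepsilon^2\int_{T_1}^t\ip{\tau}^{-\frac54}\,d\tau\Big)\lesssim\varepsilon+\varepsilon^2\ln^2(t/T_1).
\end{equation*}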
From \eqref{eq:mult_xi}, \eqref{PBX1primeprime} and \eqref{eq:betterdl}, we have the bounds \begin{equation} \begin{aligned} \abs{\mathfrak{m}_{\xi\xi}}&\lesssim \abs{\mathcal{F}}\cdot\xi/q[\vert\widetilde{\X}\vert+tq\xi^{-1}]+ \abs{\mathcal{E}}\xi/q\cdot[1+t\xi\vert\widetilde{\X}\vert^{-2}]+\vert \mathcal{W}\vert\vert\widetilde{\X}\vert^{-2} \xi^2/q[1+tq\xi^{-1}\aabs{\widetilde{\X}}^{-1}]\\ &\lesssim \left[\abs{\mathcal{F}}\vert\widetilde{\X}\vert+\abs{\mathcal{E}}\right]\left[\aabs{\widetilde{\X}}^{1/2}q^{-1/2}+t\aabs{\widetilde{\X}}^{-1}\right]+\abs{\mathcal{W}}\aabs{\widetilde{\X}}^{-1}\left[1+tq^{1/2}\aabs{\widetilde{\X}}^{-3/2}\right],\\ \abs{\mathfrak{m}_{\xi\eta}}&\lesssim \xi^4/q^2\left[\abs{\mathcal{F}}+\abs{\mathcal{E}}\xi^2/q\vert\widetilde{\X}\vert^{-2}\right]+\abs{\mathcal{W}}\xi^5/q^2\vert\widetilde{\X}\vert^{-3},\\ \abs{\mathfrak{m}_{\xi{\bf u}^a}}&\lesssim \frac{\xi^2}{q}\left[\abs{\mathcal{F}}\vert\widetilde{\X}\vert+\abs{\mathcal{E}}+\xi\abs{\mathcal{W}}\vert\widetilde{\X}\vert^{-2}\right],\\ \abs{\mathfrak{m}_{\xi{\bf L}^a}}+\abs{\mathfrak{m}_{\xi\lambda}}&\lesssim \frac{\xi^2}{q}\abs{\mathcal{F}}(\aabs{\widetilde{X}_3}+\aabs{\partial_\lambda\widetilde{\X}})+\abs{\mathcal{E}}\left(\frac{\xi^5}{q^3}\frac{1}{\aabs{\widetilde{\X}}^2}+\frac{\xi^3}{q^2}\aabs{\partial_\lambda\widetilde{\V}}\right)+\abs{\mathcal{W}}\left(\frac{\xi^4}{q^2}\frac{1}{\aabs{\widetilde{\X}}^3}+\frac{\xi^3}{q}\aabs{\partial_\lambda\widetilde{\X}}\aabs{\widetilde{\X}}^{-3}\right)\\\ &\lesssim \frac{\xi^3}{q^2}\left(\abs{\mathcal{F}}+\abs{\mathcal{E}}\frac{\xi^2/q}{\aabs{\widetilde{\X}}^2}\right)+\abs{\mathcal{W}}\frac{\xi^4}{q^2}\aabs{\widetilde{\X}}^{-3},\\ \end{aligned} \end{equation} so that by \eqref{eq:def_weights}, \eqref{eq:EFdecay} and $\aabs{\widetilde{\X}}\gtrsim\ip{t}$ there holds that \begin{equation}\label{eq:mxi_bds_far} \begin{alignedat}{2} \abs{\mathfrak{m}_{\xi\xi}}&\lesssim \varepsilon^2\ip{t}^{-\frac{5}{4}},\qquad 
&\abs{\mathfrak{m}_{\xi\eta}}\frac{\mathfrak{w}_\xi}{\mathfrak{w}_\eta}&= \abs{\mathfrak{m}_{\xi\eta}}\frac{(1+\xi)^2}{\xi^4}\lesssim\varepsilon^2\ip{t}^{-\frac{5}{4}},\\ \abs{\mathfrak{m}_{\xi{\bf L}}}\frac{\mathfrak{w}_\xi}{\mathfrak{w}_{\bf L}}=\abs{\mathfrak{m}_{\xi{\bf L}}}\frac{(1+\xi)^2}{\xi^3}&\lesssim\varepsilon^2\ip{t}^{-\frac{5}{4}},\qquad &\abs{\mathfrak{m}_{\xi{\bf u}}}\frac{\mathfrak{w}_\xi}{\mathfrak{w}_{\bf u}}&= \abs{\mathfrak{m}_{\xi{\bf u}}}\frac{(1+\xi)}{\xi^2}\lesssim\varepsilon^2\ip{t}^{-\frac54}. \end{alignedat} \end{equation} Similarly, from \eqref{eq:mult_eta} and using \eqref{eq:betterdlPBetaX} and \eqref{eq:PBxiX3}, we obtain that \begin{equation} \begin{aligned} \abs{\mathfrak{m}_{\eta\xi}} &\lesssim \abs{\mathcal{F}}\xi^{-2}[\aabs{\widetilde{\X}}^2+t^2q^2\xi^{-2}]+\abs{\mathcal{E}}\xi^{-2}[\vert\widetilde{\X}\vert+tq\xi^{-1}+t^2q\vert\widetilde{\X}\vert^{-2}]\\ &\qquad +\abs{\mathcal{W}}q\xi^{-3}[1+t\xi\vert\widetilde{\X}\vert^{-2}+t^2q\vert\widetilde{\X}\vert^{-3}],\\ \abs{\mathfrak{m}_{\eta\eta}}&=\abs{\mathfrak{m}_{\xi\xi}},\\ \abs{\mathfrak{m}_{\eta{\bf u}^a}}&\lesssim [\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}]\xi^{-1}[\aabs{\widetilde{\X}}+tq\xi^{-1}]+\abs{\mathcal{W}}q\xi^{-2} (1+t\xi\vert\widetilde{\X}\vert^{-2}),\\ \abs{\mathfrak{m}_{\eta{\bf L}^a}}+\abs{\mathfrak{m}_{\eta\lambda}}&\lesssim \abs{\mathcal{F}}[\aabs{\widetilde{X}_3}+\aabs{\partial_\lambda\widetilde{\X}}]\xi^{-1}[\aabs{\widetilde{\X}}+tq\xi^{-1}]+\abs{\mathcal{E}}\frac{1}{q}[1+\frac{t\xi}{\aabs{\widetilde{\X}}^2}]\\ &\qquad +\abs{\mathcal{W}}\frac{1}{\aabs{\widetilde{\X}}^2}[\frac{t}{\aabs{\widetilde{\X}}}+\frac{\xi}{q}], \end{aligned} \end{equation} implying that \begin{equation}\label{eq:meta_bds_far} \begin{alignedat}{2} \abs{\mathfrak{m}_{\eta\xi}}\frac{\mathfrak{w}_\eta}{\mathfrak{w}_\xi}=\abs{\mathfrak{m}_{\eta\xi}}\frac{\xi^4}{(1+\xi)^2}&\lesssim 
[\aabs{\widetilde{\X}}^2\aabs{\mathcal{F}(\widetilde{\X})}+\aabs{\widetilde{\X}}\aabs{\mathcal{E}(\widetilde{\X})}+\varepsilon^2\ip{t}^{-1}],\qquad &\abs{\mathfrak{m}_{\eta\eta}}&= \abs{\mathfrak{m}_{\xi\xi}},\\ \abs{\mathfrak{m}_{\eta{\bf u}}}\frac{\mathfrak{w}_\eta}{\mathfrak{w}_{\bf u}}=\abs{\mathfrak{m}_{\eta{\bf u}}}\frac{\xi^2}{1+\xi}&\lesssim [\aabs{\widetilde{\X}}^2\aabs{\mathcal{F}(\widetilde{\X})}+\aabs{\widetilde{\X}}\aabs{\mathcal{E}(\widetilde{\X})}+\varepsilon^2\ip{t}^{-1}],\qquad &\abs{\mathfrak{m}_{\eta{\bf L}}}\frac{\mathfrak{w}_\eta}{\mathfrak{w}_{\bf L}}&=\abs{\mathfrak{m}_{\eta{\bf L}}}\xi\lesssim\varepsilon^2\ip{t}^{-\frac32}, \end{alignedat} \end{equation} which suffices by \eqref{eq:EFdecay}. For Poisson brackets with ${\bf u}$, we note that by \eqref{eq:PBuX} and \eqref{eq:betterdl}, there holds that \begin{equation} \begin{split} \aabs{\{{\bf u}^k,\widetilde{\X}^j\}}\lesssim [\aabs{\partial_\lambda\widetilde{\X}}+\aabs{\widetilde{X}_3}]\lesssim \frac{\xi}{q}&,\qquad \aabs{\{{\bf u}^k,\widetilde{\V}^j\}}\lesssim \frac{\xi^2}{q}\frac{1}{\aabs{\widetilde{\X}}^2},\\ \aabs{\partial_\lambda\{{\bf u}^k,\widetilde{\X}^j\}}\lesssim\frac{1}{q}\frac{\xi^2/q}{\vert\widetilde{\X}\vert}&,\qquad\aabs{\partial_\lambda\{{\bf u}^k,\widetilde{\V}^j\}}\lesssim\frac{\xi/q}{\vert\widetilde{\X}\vert^2}, \end{split} \end{equation} and with \eqref{eq:betterdl} and \eqref{eq:PBxiX3} it follows that \begin{equation} \aabs{\{\{{\bf u}^k,\widetilde{\X}^j\},\xi\}}\lesssim\aabs{\partial_\lambda\{\widetilde{\X},\xi\}}+\aabs{\{\widetilde{X}_3,\xi\}}\lesssim\frac{\xi^5}{q^3}\frac{1}{\aabs{\widetilde{\X}}^2},\qquad \aabs{\{\{{\bf u}^k,\widetilde{\V}^j\},\xi\}}\lesssim \frac{\xi^2}{q}\frac{1}{\aabs{\widetilde{\X}}^2}. 
\end{equation} Furthermore, using \eqref{eq:PBuX}, \eqref{eq:betterdlPBetaX} and \eqref{eq:PBxiV3}-\eqref{eq:PBxiX3}, \begin{equation} \begin{split} \aabs{\{\{{\bf u}^a,\widetilde{\X}^j\},\eta\}}\lesssim q^{-1}(1+\frac{t\xi}{\aabs{\widetilde{\X}}^2}),\qquad \aabs{\{\{{\bf u}^a,\widetilde{\V}^j\},\eta\}}\lesssim \frac{1}{\vert\widetilde{\X}\vert^2}\left(\frac{\xi}{q}+\frac{t}{\vert\widetilde{\X}\vert}\right). \end{split} \end{equation} We deduce from \eqref{eq:mult_u} that \begin{equation} \begin{aligned} \abs{\mathfrak{m}_{{\bf u}^a\xi}}&\lesssim q^{-1}\left[\abs{\mathcal{F}}\vert\widetilde{\X}\vert+\vert\mathcal{E}\vert\right][1+tq/(\xi\aabs{\widetilde{\X}})]+\abs{\mathcal{W}}\frac{1}{\aabs{\widetilde{\X}}^2}\left(\frac{\xi}{q}+\frac{t}{\aabs{\widetilde{\X}}}\right),\\ \abs{\mathfrak{m}_{{\bf u}^a\eta}}&\lesssim \frac{\xi^3}{q}\frac{1}{\vert\widetilde{\X}\vert}\left[\abs{\mathcal{F}}\vert\widetilde{\X}\vert+\abs{\mathcal{E}}\right]+\abs{\mathcal{W}}\frac{\xi^2}{q}\frac{1}{\aabs{\widetilde{\X}}^2},\\ \abs{\mathfrak{m}_{{\bf u}^a{\bf u}^b}}&\lesssim \frac{\xi}{q}[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}]+\abs{\mathcal{W}}\frac{\xi^2/q}{\aabs{\widetilde{\X}}^2},\\ \abs{\mathfrak{m}_{{\bf u}^a\lambda}}+ \abs{\mathfrak{m}_{{\bf u}^a{\bf L}^b}}&\lesssim \frac{\xi^2}{q^2}\frac{1}{\vert\widetilde{\X}\vert}\left[\abs{\mathcal{F}}\vert\widetilde{\X}\vert+\abs{\mathcal{E}}\right]+\abs{\mathcal{W}}\frac{\xi}{q}\frac{1}{\aabs{\widetilde{\X}}^2}, \end{aligned} \end{equation} and hence \begin{equation}\label{eq:mu_bds_far} \begin{alignedat}{2} \abs{\mathfrak{m}_{{\bf u}\xi}}\frac{\mathfrak{w}_{\bf u}}{\mathfrak{w}_\xi}=\abs{\mathfrak{m}_{{\bf u}\xi}}\frac{\xi^2}{1+\xi}&\lesssim \varepsilon^2\ip{t}^{-\frac{5}{4}},\qquad &\abs{\mathfrak{m}_{{\bf u}\eta}}\frac{\mathfrak{w}_{\bf u}}{\mathfrak{w}_\eta}=\abs{\mathfrak{m}_{{\bf u}\eta}}\frac{1+\xi}{\xi^2}&\lesssim \varepsilon^2\ip{t}^{-\frac{5}{4}},\\ \abs{\mathfrak{m}_{{\bf u}{\bf u}}}&\lesssim 
\varepsilon^2\ip{t}^{-\frac{5}{4}},\qquad &\abs{\mathfrak{m}_{{\bf u}{\bf L}}}\frac{\mathfrak{w}_{\bf u}}{\mathfrak{w}_{\bf L}}=\abs{\mathfrak{m}_{{\bf u}{\bf L}}}\frac{1+\xi}{\xi}&\lesssim\varepsilon^2\ip{t}^{-\frac32}. \end{alignedat} \end{equation} Finally, from \eqref{eq:mult_L} we have the bounds \begin{equation} \begin{aligned} \abs{\mathfrak{m}_{{\bf L}^a\xi}}&\lesssim \xi^{-1}[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}][\aabs{\widetilde{\X}}+tq\xi^{-1}]+\abs{\mathcal{W}}q\xi^{-2}[1+t\xi\aabs{\widetilde{\X}}^{-2}],\\ \abs{\mathfrak{m}_{{\bf L}^a\eta}}&\lesssim \frac{\xi^2}{q}[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}]+\abs{\mathcal{W}}\frac{\xi^3}{q}\frac{1}{\aabs{\widetilde{\X}}^2},\\ \abs{\mathfrak{m}_{{\bf L}^a{\bf u}^b}}&\lesssim [\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}]\aabs{\widetilde{\X}}+\abs{\mathcal{W}}\frac{q}{\xi},\\ \abs{\mathfrak{m}_{{\bf L}^a\lambda}}+ \abs{\mathfrak{m}_{{\bf L}^a{\bf L}^b}}&\lesssim [\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}]\frac{\xi}{q}+\abs{\mathcal{W}}\frac{\xi^2}{q}\frac{1}{\aabs{\widetilde{\X}}^2},\\ \end{aligned} \end{equation} from which we see that \begin{equation}\label{eq:mL_bds_far} \begin{alignedat}{2} \abs{\mathfrak{m}_{{\bf L}\xi}}\frac{\mathfrak{w}_{\bf L}}{\mathfrak{w}_\xi}=\abs{\mathfrak{m}_{{\bf L}\xi}}\frac{\xi^3}{(1+\xi)^2}&\lesssim [\aabs{\widetilde{\X}}^2\aabs{\mathcal{F}(\widetilde{\X})}+\aabs{\widetilde{\X}}\aabs{\mathcal{E}(\widetilde{\X})}+\varepsilon^2\ip{t}^{-1}],\qquad &\abs{\mathfrak{m}_{{\bf L}\eta}}\frac{\mathfrak{w}_{\bf L}}{\mathfrak{w}_\eta}= \abs{\mathfrak{m}_{{\bf L}\eta}}\frac{1}{\xi}&\lesssim\varepsilon^2\ip{t}^{-\frac54},\\ \abs{\mathfrak{m}_{{\bf L}{\bf u}}}\frac{\mathfrak{w}_{\bf L}}{\mathfrak{w}_{\bf u}}=\abs{\mathfrak{m}_{{\bf L}{\bf u}}}\frac{\xi}{1+\xi}&\lesssim [\aabs{\widetilde{\X}}^2\aabs{\mathcal{F}(\widetilde{\X})}+\aabs{\widetilde{\X}}\aabs{\mathcal{E}(\widetilde{\X})}+\varepsilon^2\ip{t}^{-1}],\qquad 
&\abs{\mathfrak{m}_{{\bf L}{\bf L}}}&\lesssim\varepsilon^2\ip{t}^{-\frac54}. \end{alignedat} \end{equation} In conclusion, the first line of \eqref{eq:dtBG_far} follows by combining \eqref{eq:mxi_bds_far} and \eqref{eq:mu_bds_far}, while the second line of \eqref{eq:dtBG_far} follows from \eqref{eq:meta_bds_far} and \eqref{eq:mL_bds_far}. \end{proof} \end{lemma} \subsubsection{Propagation in the close formulation}\label{sec:prop_cl} In contrast to the far region, in the close region $\Omega_t^{cl}=\{\aabs{\widetilde{\X}}\leq 10\ip{t}\}$ the spatial location of a trajectory ${\bf X}(t)$ may be comparatively small. As a consequence, for the Gr\"onwall arguments for derivative propagation we need some refined estimates on the dynamics: \begin{lemma}\label{lem:Xintegral} Let ${\bf X}(t)$ be a trajectory of the ODE \eqref{ODE}. Then we have that \begin{equation}\label{eq:Xintegral} \int_{-\infty}^{\infty} \frac{d\tau}{\aabs{{\bf X}(\tau)}^2}\lesssim \xi^{-1}. \end{equation} \end{lemma} \begin{proof} We recall that by \eqref{eq:virials} there holds that \begin{equation} \frac{d^2}{dt^2}\aabs{{\bf X}(t)}^2\gtrsim a^2\geq0. \end{equation} Denoting by $t_p$ the time of periapsis, we have with $\aabs{{\bf X}(t_p)}\gtrsim \frac{q}{a^2}$ that \begin{equation} \aabs{{\bf X}(t)}^2\gtrsim \frac{q^2}{a^4}+a^2(t-t_p)^2=a^{-4}(q^2+a^6(t-t_p)^2). \end{equation} Since the trajectories of \eqref{ODE} (and thus the integral in \eqref{eq:Xintegral}) are symmetric with respect to periapsis, we may assume without loss of generality that $t_p=0$. We then have that \begin{equation} \int_{-\infty}^{\infty} \frac{d\tau}{\aabs{{\bf X}(\tau)}^2}=2\int_{0}^{\infty} \frac{d\tau}{\aabs{{\bf X}(\tau)}^2}\lesssim a^4\int_0^{q/a^{3}}\frac{d\tau}{q^2}+a^4\int_{q/a^{3}}^\infty\frac{d\tau}{a^6\tau^2}\lesssim \frac{a}{q}. 
\end{equation} \end{proof} Also in the close region Poisson brackets with respect to $\xi, {\bf u}$ have more favorable bounds than those with respect to $\eta,{\bf L}$, and, as in \eqref{SplittingGUDer}, we define \begin{equation*} \begin{split} \mathcal{G}'(\vartheta,{\bf a};t)&:=\mathfrak{w}_\xi\aabs{\{\xi,\gamma'\}}(\vartheta,{\bf a};t)+\mathfrak{w}_{\bf u}\aabs{\{{\bf u},\gamma'\}}(\vartheta,{\bf a};t),\\ \mathcal{U}'(\vartheta,{\bf a};t)&:=\mathfrak{w}_\eta\aabs{\{\eta,\gamma'\}}(\vartheta,{\bf a};t)+\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\gamma'\}}(\vartheta,{\bf a};t), \end{split} \end{equation*} for the weight functions $\mathfrak{w}_f$, $f\in \textnormal{SIC}$, defined in \eqref{eq:def_weights}. We then have \begin{lemma}\label{lem:clprop} Assume that $(\vartheta,{\bf a})$ is such that for some $T_2>T_1\geq 0$ we have $(\vartheta,{\bf a})\in\Omega_t^{cl}$ for $T_1\leq t\leq T_2$. Assume also that $\gamma'$ is a solution of \eqref{eq:deriv'2} defined on the time interval $[T_1,T_2]$ satisfying for some $0<\delta\ll 1$ \begin{equation} \mathcal{D}[\gamma'](\vartheta,{\bf a};t)\lesssim\varepsilon\ip{t}^\delta, \end{equation} and we have the decay estimates \eqref{eq:EFdecay}. Then \begin{equation}\label{eq:B'G'_bds} \mathcal{G}'(\vartheta,{\bf a};t)\lesssim\varepsilon,\qquad \mathcal{U}'(\vartheta,{\bf a};t)\lesssim\varepsilon+\varepsilon^2 \ln^2(t/T_1). 
\end{equation} \end{lemma} \begin{proof} As noted at the end of Section \ref{ssec:mfg}, the coefficients $\mathfrak{m}_{fg}'$ can be deduced easily from $\mathfrak{m}_{fg}$, and thus from the proof of Lemma \ref{lem:farprop} we directly have the bounds \begin{equation} \begin{aligned} \abs{\mathfrak{m}'_{\xi\xi}}&\lesssim \abs{\mathcal{F}}\cdot\xi/q[\vert\widetilde{\X}\vert+tq\xi^{-1}]+ (\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\xi/q\cdot[1+t\xi\vert\widetilde{\X}\vert^{-2}],\\ \abs{\mathfrak{m}'_{\xi\eta}}&\lesssim\abs{\mathcal{F}}\xi^4/q^2+(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\xi^2/q^2(\xi^4/q\vert\widetilde{\X}\vert^{-2}),\\ \abs{\mathfrak{m}'_{\xi{\bf u}^a}}&\lesssim \frac{\xi^2}{q}\left[\abs{\mathcal{F}}\vert\widetilde{\X}\vert+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}\right],\\ \abs{\mathfrak{m}'_{\xi\lambda}}+ \abs{\mathfrak{m}'_{\xi{\bf L}^a}} &\lesssim \frac{\xi^3}{q^2}\abs{\mathcal{F}}+(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\frac{\xi^5}{q^3}\frac{1}{\aabs{\widetilde{\X}}^2},\\ \end{aligned} \end{equation} as well as \begin{equation} \begin{aligned} \abs{\mathfrak{m}'_{\eta\xi}} &\lesssim \abs{\mathcal{F}}\xi^{-2}[\aabs{\widetilde{\X}}^2+t^2q^2\xi^{-2}]+(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\xi^{-2}[\vert\widetilde{\X}\vert+tq\xi^{-1}+t^2q\vert\widetilde{\X}\vert^{-2}],\\ \abs{\mathfrak{m}'_{\eta\eta}}&=\abs{\mathfrak{m}'_{\xi\xi}},\\ \abs{\mathfrak{m}'_{\eta{\bf u}^a}}&\lesssim [\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}]\xi^{-1}[\aabs{\widetilde{\X}}+tq\xi^{-1}],\\ \abs{\mathfrak{m}'_{\eta\lambda}}+ \abs{\mathfrak{m}'_{\eta{\bf L}^a}}&\lesssim \abs{\mathcal{F}}[\aabs{\widetilde{X}_3}+\aabs{\partial_\lambda\widetilde{\X}}]\xi^{-1}[\aabs{\widetilde{\X}}+tq\xi^{-1}]+(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\frac{1}{q}[1+t\xi\aabs{\widetilde{\X}}^{-2}], \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \abs{\mathfrak{m}'_{{\bf u}^a\xi}}&\lesssim 
q^{-1}\left[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\vert\mathcal{E}\vert+\vert\dot{\mathcal{W}}\vert\right]\cdot \left[1+tq/(\xi\vert\widetilde{\X}\vert)\right],\\ \abs{\mathfrak{m}'_{{\bf u}^a\eta}}&\lesssim \frac{\xi^3}{q^2}\abs{\mathcal{F}}+(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\frac{\xi^5}{q^3}\frac{1}{\aabs{\widetilde{\X}}^2},\\ \abs{\mathfrak{m}'_{{\bf u}^a{\bf u}^b}}&\lesssim \frac{\xi}{q}[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}],\\ \abs{\mathfrak{m}'_{{\bf u}^a\lambda}}+ \abs{\mathfrak{m}'_{{\bf u}^a{\bf L}^b}}&\lesssim \frac{\xi^2}{q^2}\left[\abs{\mathcal{F}}+[\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}]\cdot\aabs{\widetilde{\X}}^{-1}\right], \end{aligned} \end{equation} and also \begin{equation} \begin{aligned} \abs{\mathfrak{m}'_{{\bf L}^a\xi}}&\lesssim \xi^{-1}[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}][\aabs{\widetilde{\X}}+tq\xi^{-1}],\\ \abs{\mathfrak{m}'_{{\bf L}^a\eta}}&\lesssim \frac{\xi^2}{q}[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}],\\ \abs{\mathfrak{m}'_{{\bf L}^a{\bf u}^b}}&\lesssim [\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}]\aabs{\widetilde{\X}},\\ \abs{\mathfrak{m}'_{{\bf L}^a\lambda}}+\abs{\mathfrak{m}'_{{\bf L}^a{\bf L}^b}}&\lesssim \frac{\xi}{q}[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}].\\ \end{aligned} \end{equation} Observing that in the close region we have $\xi^2/q\lesssim\aabs{\widetilde{\X}}\lesssim \ip{t}$, it follows that \begin{equation}\label{eq:mxi_bds_cl} \begin{aligned} \abs{\mathfrak{m}'_{\xi\xi}}&\lesssim \abs{\mathcal{F}}\ip{t}^{\frac{3}{2}}+(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\bigg(q^{-\frac{1}{2}}\ip{t}^{\frac{1}{2}}+\frac{t\xi^2/q}{\aabs{\widetilde{\X}}^2}\bigg)\lesssim \varepsilon^2\ip{t}^{-\frac{5}{4}}+\varepsilon^2\ip{t}^{-1}\frac{\xi^2/q}{\aabs{\widetilde{\X}}^2},\\ 
\abs{\mathfrak{m}'_{\xi\eta}}\frac{\mathfrak{w}_\xi}{\mathfrak{w}_\eta}&= \abs{\mathfrak{m}'_{\xi\eta}}\frac{(1+\xi)^2}{\xi^4}\lesssim \abs{\mathcal{F}}(1+\xi)^2+(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\frac{(1+\xi)^2\xi^2}{\aabs{\widetilde{\X}}^2}\lesssim\varepsilon^2\ip{t}^{-\frac32}+\varepsilon^2\ip{t}^{-2}\frac{\xi^2/q}{\aabs{\widetilde{\X}}^2},\\ \abs{\mathfrak{m}'_{\xi{\bf L}}}\frac{\mathfrak{w}_\xi}{\mathfrak{w}_{\bf L}}&=\abs{\mathfrak{m}'_{\xi{\bf L}}}\frac{(1+\xi)^2}{\xi^3}\lesssim \abs{\mathcal{F}}(1+\aabs{\widetilde{\X}})+\frac{(1+\xi)^2\xi^2/q}{\vert\widetilde{\X}\vert^2}(\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}})\lesssim\varepsilon^2\ip{t}^{-\frac32}+\varepsilon^2\ip{t}^{-2}\frac{\xi^2/q}{\aabs{\widetilde{\X}}^2},\\ \abs{\mathfrak{m}'_{\xi{\bf u}}}\frac{\mathfrak{w}_\xi}{\mathfrak{w}_{\bf u}}&= \abs{\mathfrak{m}'_{\xi{\bf u}}}\frac{(1+\xi)}{\xi^2}\lesssim(1+\xi)[\abs{\mathcal{F}}\aabs{\widetilde{\X}}+\abs{\mathcal{E}}+\aabs{\dot{\mathcal{W}}}]\lesssim\varepsilon^2\ip{t}^{-\frac54}, \end{aligned} \end{equation} and this suffices, since by Lemma \ref{lem:Xintegral} and $\xi^2/q\lesssim\aabs{\widetilde{\X}}\lesssim\ip{t}$ we have that \begin{equation} \int_0^t\ip{\tau}^{-1}\frac{\xi^2/q}{\aabs{{\bf X}(\vartheta+\tau{\bf a},{\bf a})}^2}d\tau\lesssim q^{-1}\min\{\xi,\xi^{-\frac{1}{2}}\}\lesssim_q 1. 
\end{equation} Similarly, there holds that \begin{equation}\label{eq:mu_bds_cl} \begin{alignedat}{2} \abs{\mathfrak{m}'_{{\bf u}\xi}}\frac{\mathfrak{w}_{\bf u}}{\mathfrak{w}_\xi}=\abs{\mathfrak{m}'_{{\bf u}\xi}}\frac{\xi^2}{1+\xi}&\lesssim \varepsilon^2\langle t\rangle^{-1}\Big(\ip{t}^{-\frac{1}{4}}+\frac{\xi^\frac{1}{2}}{\vert\widetilde{\X}\vert}\Big),\\ \abs{\mathfrak{m}'_{{\bf u}\eta}}\frac{\mathfrak{w}_{\bf u}}{\mathfrak{w}_\eta}=\abs{\mathfrak{m}'_{{\bf u}\eta}}\frac{1+\xi}{\xi^2}&\lesssim \varepsilon^2\ip{t}^{-\frac{5}{4}}\Big(1+\frac{\xi^\frac{1}{2}}{\vert\widetilde{\X}\vert}\Big),\\ \abs{\mathfrak{m}'_{{\bf u}{\bf u}}}&\lesssim \varepsilon^2\ip{t}^{-\frac{5}{4}},\\ \abs{\mathfrak{m}'_{{\bf u}{\bf L}}}\frac{\mathfrak{w}_{\bf u}}{\mathfrak{w}_{\bf L}}=\abs{\mathfrak{m}'_{{\bf u}{\bf L}}}\frac{1+\xi}{\xi}&\lesssim\varepsilon^2\ip{t}^{-\frac{5}{4}} \Big(1+\frac{\xi^\frac{1}{2}}{\vert\widetilde{\X}\vert}\Big), \end{alignedat} \end{equation} and hence the bounds for $\mathcal{G}'$ in \eqref{eq:B'G'_bds} are established. 
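Here we note that the terms involving $\xi^{\frac12}/\vert\widetilde{\X}\vert$ in \eqref{eq:mu_bds_cl} also give bounded contributions upon integration: by the Cauchy--Schwarz inequality and Lemma \ref{lem:Xintegral},
\begin{equation*}
\int_0^t\ip{\tau}^{-1}\frac{\xi^{\frac12}}{\aabs{{\bf X}(\vartheta+\tau{\bf a},{\bf a})}}\,d\tau\leq\Big(\xi\int_0^t\ip{\tau}^{-2}\,d\tau\Big)^{\frac12}\Big(\int_0^t\frac{d\tau}{\aabs{{\bf X}(\vartheta+\tau{\bf a},{\bf a})}^2}\Big)^{\frac12}\lesssim\xi^{\frac12}\cdot\xi^{-\frac12}=1.
\end{equation*}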
Turning to $\mathcal{U}'$, we have that \begin{equation}\label{eq:meta_bds_cl} \begin{aligned} \abs{\mathfrak{m}'_{\eta\xi}}\frac{\mathfrak{w}_\eta}{\mathfrak{w}_\xi}&=\abs{\mathfrak{m}'_{\eta\xi}}\frac{\xi^4}{(1+\xi)^2} \lesssim \varepsilon^2\Big(\ip{t}^{-1}\ln\ip{t}+\frac{\xi}{\aabs{\widetilde{\X}}^2}\Big),\\ \abs{\mathfrak{m}'_{\eta\eta}}&= \abs{\mathfrak{m}'_{\xi\xi}},\\ \abs{\mathfrak{m}'_{\eta{\bf u}}}\frac{\mathfrak{w}_\eta}{\mathfrak{w}_{\bf u}}&=\abs{\mathfrak{m}'_{\eta{\bf u}}}\frac{\xi^2}{1+\xi}\lesssim \left[\aabs{\mathcal{F}(\widetilde{\X})}\aabs{\widetilde{\X}}+\aabs{\mathcal{E}(\widetilde{\X})}+\aabs{\dot{\mathcal{W}}}\right](\aabs{\widetilde{\X}}+t)\lesssim \varepsilon^2\ip{t}^{-1}\ln\ip{t},\\ \abs{\mathfrak{m}'_{\eta{\bf L}}}\frac{\mathfrak{w}_\eta}{\mathfrak{w}_{\bf L}}&=\abs{\mathfrak{m}'_{\eta{\bf L}}}\xi\lesssim\varepsilon^2\langle t\rangle^{-1}\Big(\langle t\rangle^{-\frac{1}{4}}+\frac{q}{\vert\widetilde{\X}\vert^2}\Big), \end{aligned} \end{equation} and since on $\Omega_t^{cl}$ we have $\frac{\xi}{q}\lesssim\ip{t}^{\frac12}$, there holds that \begin{equation}\label{eq:mL_bds_cl} \begin{aligned} \abs{\mathfrak{m}'_{{\bf L}\xi}}\frac{\mathfrak{w}_{\bf L}}{\mathfrak{w}_\xi}&=\abs{\mathfrak{m}'_{{\bf L}\xi}}\frac{\xi^3}{(1+\xi)^2}\lesssim \left[\aabs{\mathcal{F}(\widetilde{\X})}\aabs{\widetilde{\X}}+\aabs{\mathcal{E}(\widetilde{\X})}+\aabs{\dot{\mathcal{W}}}\right](\aabs{\widetilde{\X}}+t)\lesssim \varepsilon^2\ip{t}^{-1}\ln\ip{t},\\ \abs{\mathfrak{m}'_{{\bf L}\eta}}\frac{\mathfrak{w}_{\bf L}}{\mathfrak{w}_\eta}&= \abs{\mathfrak{m}'_{{\bf L}\eta}}\frac{1}{\xi}\lesssim\varepsilon^2\ip{t}^{-\frac54},\\ \abs{\mathfrak{m}'_{{\bf L}{\bf u}}}\frac{\mathfrak{w}_{\bf L}}{\mathfrak{w}_{\bf u}}&=\abs{\mathfrak{m}'_{{\bf L}{\bf u}}}\frac{\xi}{1+\xi}\lesssim \left[\aabs{\widetilde{\X}}\aabs{\mathcal{F}(\widetilde{\X})}+\aabs{\mathcal{E}(\widetilde{\X})}+\aabs{\dot{\mathcal{W}}}\right]\aabs{\widetilde{\X}}\lesssim\varepsilon^2\ip{t}^{-1}\ln\ip{t},\\ 
\abs{\mathfrak{m}'_{{\bf L}{\bf L}}}&\lesssim\left[\aabs{\mathcal{F}(\widetilde{\X})}\aabs{\widetilde{\X}}+\aabs{\mathcal{E}(\widetilde{\X})}+\aabs{\dot\mathcal{W}}\right]\ip{t}^{\frac12}\lesssim\varepsilon^2\ip{t}^{-\frac54}. \end{aligned} \end{equation} As above, the terms involving $\aabs{\widetilde{\X}}^{-2}$ will give bounded contributions by Lemma \ref{lem:Xintegral}, and we thus obtain the bounds for $\mathcal{U}'$ in \eqref{eq:B'G'_bds}. \end{proof} \section{Main theorem and asymptotic behavior}\label{sec:main-asympt} We are now ready to state and prove our main theorem, which asserts global existence of small perturbations, optimal decay of the electric field and convergence of the particle distribution along modified linear trajectories. In particular, this implies Theorem \ref{thm:main_rough}. \begin{theorem}\label{MainThm} There exists $\varepsilon_\ast>0$ such that the following holds for all $0<\varepsilon_0<\varepsilon_\ast$. \begin{enumerate}[wide] \item Given $\gamma_0\in L^2\cap L^\infty$ satisfying the smallness assumptions \eqref{eq:mom_id} for moments as well as the derivative bounds \begin{equation}\label{eq:efield_optimal} \nnorm{(\xi^{5}+\xi^{-8})\sum_{f\in \textnormal{SIC}}\mathfrak{w}_f\aabs{\{f,\gamma_0\}}}_{L^\infty_{\vartheta,{\bf a}}}\lesssim\varepsilon_0, \end{equation} there exists a unique, global $C^1$ solution $\gamma$ of \eqref{NewNLEq} satisfying the moment bounds \eqref{eq:gl_mom_bds} of Theorem \ref{thm:global_moments} as well as the slow growth of derivatives \begin{equation}\label{eq:derivs_MainThm} \nnorm{\mathcal{D}(t)}_{L^\infty_{\vartheta,{\bf a}}}\lesssim \varepsilon_0\ln^2\ip{t} \end{equation} with $\mathcal{D}(t)$ as in Proposition \ref{prop:global_derivs}.
The associated electric field decays at an optimal rate, \begin{equation}\label{OptimalDecayEFMainThm} \left[t^2+\vert{\bf y}\vert^2\right]\vert\mathcal{E}({\bf y},t)\vert\lesssim\varepsilon_0^2, \end{equation} and there exist an asymptotic field $\mathcal{E}^\infty$ that converges along linearized trajectories \begin{equation}\label{ConvergenceEFMainThm} \begin{split} \Vert \langle {\bf a}\rangle^2(t^2\mathcal{E}(t{\bf a},t)-\mathcal{E}^\infty({\bf a}))\Vert_{L^\infty(\mathcal{P}_{\vartheta,{\bf a}})}&\lesssim\varepsilon_0^4\langle t\rangle^{-1/10}, \end{split} \end{equation} and a limit gas distribution $\gamma_\infty$ that converges along a modified scattering dynamic \begin{equation}\label{PDFConverges} \Vert \gamma(\vartheta+\ln(t)\left[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)\right],{\bf a},t)-\gamma_\infty(\vartheta,{\bf a})\Vert_{L^\infty_{\vartheta,{\bf a}}\cap L^2_{\vartheta,{\bf a}}}\lesssim \varepsilon_0\ip{t}^{-1/30}. \end{equation} \item As a consequence, given any $(\mathcal{X}_0,\mathcal{V}_0)\in\mathbb{R}^3_{\bf x}\times\mathbb{R}^3_{\bf v}$ and $\mu_0\in L^2\cap L^\infty$ such that the function \begin{equation} \gamma_0(\vartheta,{\bf a}):=\mu_0({\bf X}(\vartheta,{\bf a})+\mathcal{X}_0,{\bf V}(\vartheta,{\bf a})+\mathcal{V}_0) \end{equation} satisfies the assumptions of part $(1)$, there exists a unique, global $C^1$ solution of \eqref{VPPC} with $(\mathcal{X}(0),\mathcal{V}(0))=(\mathcal{X}_0,\mathcal{V}_0)$. The associated electric field decays at the optimal rate \eqref{OptimalDecayEFMainThm} and the point charge has an almost free asymptotic dynamic: There exist $(\mathcal{X}_\infty,\mathcal{V}_\infty)\in\mathbb{R}^3_{\bf x}\times\mathbb{R}^3_{\bf v}$ such that \begin{equation}\label{AsymptoticsXV} \begin{split} \mathcal{X}(t)=\mathcal{X}_\infty+\mathcal{V}_\infty t-\mathcal{Q}\mathcal{E}^\infty(0)\ln(t)+O(t^{-1/10}),\qquad\mathcal{V}(t)=\mathcal{V}_\infty-\frac{\mathcal{Q}}{t}\mathcal{E}^\infty(0)+O(t^{-1/10}).
\end{split} \end{equation} Moreover, $\mu(t)$ converges to $\gamma_\infty$ as $t\to +\infty$ \begin{equation}\label{eq:mu_mod_scat} \mu({\bf Y}({\bf x},{\bf v},t),{\bf W}({\bf x},{\bf v},t), t)\to \gamma_\infty({\bf x},{\bf v}),\qquad t\to\infty, \end{equation} along the modified trajectories \begin{equation}\label{eq:mu_mod_traj} \begin{aligned} {\bf Y}({\bf x},{\bf v},t)&=\left(at-\frac{1}{2}\frac{q}{a^2}\ln(ta^3/q)+\ln(t)[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)]\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)\\ &\qquad+ \mathcal{V}_\infty t+\mathcal{Q}\mathcal{E}^\infty(0)\ln(t)+O(1),\\ {\bf W}({\bf x},{\bf v},t)&=a\left(1-\frac{q}{2ta^3}\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)+\mathcal{V}_\infty+O(\ln(t)t^{-2}). \end{aligned} \end{equation} \end{enumerate} \end{theorem} \subsection{Proof of Theorem \ref{MainThm}} \begin{proof}[Proof of Part $(1)$] In view of the hypothesis, from Theorem \ref{thm:global_moments} we obtain a global solution $\gamma(t)$ with the required moment bounds, and Proposition \ref{prop:global_derivs} provides the claimed derivative control \eqref{eq:derivs_MainThm}. Optimal decay \eqref{OptimalDecayEFMainThm} and the convergence \eqref{ConvergenceEFMainThm} for the electric field then follow from $(ii)$ and $(iii)$ of Proposition \ref{PropEeff}, respectively. We deduce next the motion of the point charge.
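In outline, the argument for the point charge is an elementary integration: since $\dot{\mathcal{W}}(t)=\mathcal{Q}\mathcal{E}(0,t)$ with $\mathcal{W}(t)\to 0$ as $t\to\infty$ (as recalled below), we may write $\mathcal{W}(t)=-\int_t^\infty\mathcal{Q}\,\mathcal{E}(0,s)\,ds$, so that \eqref{ConvergenceEFMainThm} evaluated at ${\bf a}=0$ gives
\begin{equation*}
\Big\vert \mathcal{W}(t)+\frac{\mathcal{Q}}{t}\mathcal{E}^\infty(0)\Big\vert=\Big\vert\int_t^\infty \frac{\mathcal{Q}}{s^2}\big(\mathcal{E}^\infty(0)-s^2\mathcal{E}(0,s)\big)\,ds\Big\vert\lesssim \varepsilon_0^4\int_t^\infty s^{-2}\langle s\rangle^{-1/10}\,ds\lesssim\varepsilon_0^4\langle t\rangle^{-11/10}.
\end{equation*}
We now give the details.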
Starting from the fact that \begin{equation*} \begin{split} \dot{\mathcal{W}}(t)=\mathcal{Q}\mathcal{E}(0,t),\qquad\mathcal{W}(t)\to_{t\to\infty}0, \end{split} \end{equation*} we obtain $\mathcal{V}_\infty\in\mathbb{R}^3$ for which \begin{equation} \dot{\mathcal{X}}=\mathcal{V}_\infty+\mathcal{W}(t), \end{equation} and from \eqref{ConvergenceEFMainThm} and integration we deduce that there exists $\mathcal{X}_\infty$ such that \begin{equation}\label{AsymptoticXVCCL} \begin{split} \vert\mathcal{W}(t)+\frac{\mathcal{Q}}{t}\mathcal{E}^\infty(0)\vert&\le \varepsilon_0^4\ip{t}^{-11/10},\qquad\vert\mathcal{X}(t)-\mathcal{V}_\infty t+\mathcal{Q}\mathcal{E}^\infty(0)\ln(t)-\mathcal{X}_\infty\vert\le\varepsilon_0^4\ip{t}^{-1/10}. \end{split} \end{equation} This gives \eqref{AsymptoticsXV}, and informs the derivation of the modified scattering dynamic for the gas distribution, which we give next. Since we have convergence of the electric field, we can now integrate the leading order evolution of the gas distribution. We do this in the far formulation of the equations. Motivated by the above convergence and the expression for the Hamiltonian \eqref{NewNLEq}, we introduce the asymptotic Hamiltonian \begin{equation*} \begin{split} \mathbb{H}_4^\infty:=\frac{1}{t}\left[Q\Psi^\infty({\bf a})-\mathcal{Q}\mathcal{E}^\infty(0)\cdot{\bf a}\right]. \end{split} \end{equation*} The flow of $\mathbb{H}_4^\infty$ can be integrated explicitly: its characteristics simply \emph{shear} with the diffeomorphism \begin{equation} s_\infty:(\vartheta,{\bf a})\mapsto(\vartheta-\ln(t)\left[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)\right],{\bf a}). 
\end{equation} To account for this dynamic, we thus introduce \begin{equation*} \begin{split} \sigma:=\gamma\circ s_\infty^{-1},\qquad \sigma(\vartheta,{\bf a},t):=\gamma(\vartheta+\ln(t)\left[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)\right],{\bf a},t), \end{split} \end{equation*} which satisfies \begin{equation}\label{eq:dtsigma} \begin{aligned} \partial_t\sigma&=(\partial_t\gamma+\{\mathbb{H}_4^\infty,\gamma\})\circ s_\infty^{-1}\\ &=\left(-\{\mathbb{H}_4,\gamma\}+\{\mathbb{H}_4^\infty,\gamma\}\right)\circ s_\infty^{-1}\\ &=\left(-Q\mathcal{E}_j(\widetilde{\X},t)\{\widetilde{\X}^j,\gamma\}-\mathcal{W}_j(t)\{\widetilde{\V}^j,\gamma\}+t^{-1}(Q\mathcal{E}_j^\infty(a)-\mathcal{Q}\mathcal{E}_j^\infty(0))\{{\bf a}^j,\gamma\}\right)\circ s_\infty^{-1} \\ &=-Q\mathcal{E}_j(\widetilde{\X},t)\{\widetilde{\bf Z}^j,\gamma\}\circ s_\infty^{-1}-(tQ\mathcal{E}_j(\widetilde{\X},t)+\mathcal{W}_j(t))\{\widetilde{\V}^j-{\bf a}^j,\gamma\}\circ s_\infty^{-1}\\ &\quad +\left(t^{-1}Q(\mathcal{E}^\infty_j({\bf a})-t^2\mathcal{E}_j(\widetilde{\bf X},t))+(\mathcal{Q}t^{-1}\mathcal{E}^\infty(0)+\mathcal{W}_j(t))\right)\{{\bf a}^j,\gamma\}\circ s_\infty^{-1}. \end{aligned} \end{equation} In the bulk, these terms can be bounded as follows: By the bounds \eqref{PrecisedBoundZBracketBulk} in Lemma \ref{ZPBFormula} and \eqref{OptimalDecayEFMainThm}, we have that \begin{equation} \begin{aligned} \aabs{\mathfrak{1}_{\mathcal{B}}Q\mathcal{E}_j(\widetilde{\X},t)\{\widetilde{\bf Z}^j,\gamma\}}+\aabs{\mathfrak{1}_{\mathcal{B}}(tQ\mathcal{E}_j(\widetilde{\X},t)+\mathcal{W}_j(t))\{\widetilde{\V}^j-{\bf a}^j,\gamma\}}&\lesssim \varepsilon_0^2\ip{t}^{-2}\cdot \ip{\xi}^3\ln\ip{t}\mathfrak{1}_{\mathcal{B}}\aabs{\mathcal{D}[\gamma](t)}\\ &\lesssim \varepsilon_0^3\ip{t}^{-2}\ln^3\ip{t}, \end{aligned} \end{equation} where in the last inequality we have used \eqref{eq:PBcl_byfar_bulk} to control the contribution from the derivatives of $\gamma$ in the close region. 
Similarly, using also \eqref{AsymptoticEFCCL} and \eqref{AsymptoticXVCCL} we obtain that \begin{equation} \begin{aligned} &\mathfrak{1}_{\mathcal{B}}\abs{t^{-1}Q(\mathcal{E}^\infty_j({\bf a})-t^2\mathcal{E}_j(\widetilde{\bf X},t))+(\mathcal{Q}t^{-1}\mathcal{E}^\infty(0)+\mathcal{W}_j(t))}\aabs{\{{\bf a}^j,\gamma\}}\lesssim \varepsilon_0^4\ip{t}^{-11/10}\cdot\varepsilon_0\ln^2\ip{t}, \end{aligned} \end{equation} where we used \eqref{eq:PBcl_byfar_bulk} to bound $\mathfrak{1}_{\mathcal{B}\cap\Omega_t^{cl}}\aabs{\{{\bf a}^j,\gamma\}}\lesssim (1+\xi^{-1})(\xi^2+\xi^{-3})\aabs{\mathcal{D}[\gamma'](t)}$ in the close region. Altogether we thus see from the last two lines of \eqref{eq:dtsigma} that \begin{equation*} \begin{split} \Vert \mathfrak{1}_{\mathcal{B}}\partial_t\sigma\Vert_{L^\infty_{\vartheta,{\bf a}}}&\lesssim\varepsilon_0^3 t^{-21/20}. \end{split} \end{equation*} Upon mollification it follows that $\mathfrak{1}_{\mathcal{B}}\sigma$ converges in $L^\infty$ to a limit $\gamma_\infty$ as $t\to\infty$. We thus have that \begin{equation} \norm{\gamma(\vartheta+\ln(t)\left[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)\right],{\bf a},t)-\gamma_\infty(\vartheta,{\bf a})}_{L^\infty_{\vartheta,{\bf a}}}\leq \norm{\sigma\mathfrak{1}_{\mathcal{B}^c}}_{L^\infty_{\vartheta,{\bf a}}}+\norm{\sigma\mathfrak{1}_{\mathcal{B}}-\gamma_\infty}_{L^\infty_{\vartheta,{\bf a}}}\lesssim \varepsilon_0\ip{t}^{-21/20}, \end{equation} where outside of the bulk we used \eqref{eq:nonbulk-bds} and \eqref{eq:gl_mom_bds}. Convergence in $L^2$ then follows from interpolation with the conserved $L^1$ norm of $\gamma$. \end{proof} Part $(2)$ now follows with relative ease: \begin{proof}[Proof of Part $(2)$] It remains to establish the expression for the modified trajectories \eqref{eq:mu_mod_traj} and the convergence of $\mu$ \eqref{eq:mu_mod_scat} along them; the rest of Part $(2)$ follows directly from Part $(1)$.
By construction, from \eqref{NewNLUnknown} and \eqref{NewVariables} we have that \begin{equation} \gamma(\vartheta,{\bf a},t)=\nu({\bf X}(\vartheta+t{\bf a},{\bf a}),{\bf V}(\vartheta+t{\bf a},{\bf a}),t)=\mu({\bf X}(\vartheta+t{\bf a},{\bf a})+\mathcal{X}(t),{\bf V}(\vartheta+t{\bf a},{\bf a})+\mathcal{V}_\infty,t). \end{equation} Hence $\mu(t)$ converges along the trajectories \begin{equation} \begin{aligned} {\bf Y}({\bf x},{\bf v},t)&={\bf X}(\vartheta+\ln(t)[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)]+t{\bf a},{\bf a})+\mathcal{X}(t)\\ &=\left(at-\frac{1}{2}\frac{q}{a^2}\ln(ta^3/q)+\ln(t)[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)]\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)\\ &\qquad+ \mathcal{V}_\infty t+\mathcal{Q}\mathcal{E}^\infty(0)\ln(t)+O(1),\\ {\bf W}({\bf x},{\bf v},t)&={\bf V}(\vartheta+\ln(t)[Q\mathcal{E}^\infty({\bf a})+\mathcal{Q}\mathcal{E}^\infty(0)]+t{\bf a},{\bf a})+\mathcal{V}_\infty\\ &=a\left(1-\frac{q}{2ta^3}\right)\cdot \frac{q^2}{4a^2L^2+q^2}\left(\frac{2}{q}{\bf R}+\frac{4a}{q^2}{\bf L}\times{\bf R}\right)+\mathcal{V}_\infty+O(\ln(t)t^{-2}), \end{aligned} \end{equation} where we have used Remark \ref{rem:XVexpansion}. \end{proof} \subsection*{Acknowledgments} B.\ Pausader is supported in part by NSF grant DMS-1700282 and by a Simons fellowship. K.\ Widmayer gratefully acknowledges support of the SNSF through grant PCEFP2\_203059. J.\ Yang was supported by NSF grant DMS-1929284 while in residence at ICERM (Fall 2021-Spring 2022). This work was partly carried out while the authors were participating in the ICERM program on ``Hamiltonian methods in dispersive and wave equations'' and they gratefully acknowledge the hospitality of the institute. B.\ Pausader also thanks P.\ G\'erard, P.\ Rapha\"{e}l and M.\ Rosenzweig for stimulating and informative conversations on the Vlasov-Poisson equation and its symplectic structure.
\appendix \section{Auxiliary results}\label{sec:appdx_trans} \begin{lemma}\label{lem:trans_full} Under the assumptions of Proposition \ref{prop:global_derivs} (where in particular $\ip{t}\abs{\mathcal{W}(t)}\lesssim \varepsilon^2$) and with outgoing (resp.\ incoming) asymptotic actions, on the close region $\Omega_t^{cl}$ there holds that for some $C>0$ \begin{equation}\label{EquivXiXi'App} C^{-1}\xi'\leq\xi\leq C\xi', \end{equation} and for any scalar function $\zeta$ we have \begin{equation}\label{TransitionMapBoundsApp} \begin{aligned} \mathfrak{w}_{\xi'}\aabs{\{\xi',\zeta\}}&\lesssim \mathfrak{w}_{\xi}\aabs{\{\xi,\zeta\}}+\varepsilon^2(\langle t\rangle^{-1/2}+\aabs{\widetilde{\X}}^{-1})\mathcal{D}[\zeta](t),\\ \mathfrak{w}_{{\bf L}'}\aabs{\{{\bf L}',\zeta\}}&\lesssim \mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}+\varepsilon^2\mathcal{D}[\zeta](t),\\ \aabs{\{{\bf u}',\zeta\}}&\lesssim \xi^{-3}\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}+(1+\varepsilon^2\xi^{-1}+\varepsilon^2\aabs{\widetilde{\X}}^{-1})\mathcal{D}[\zeta](t),\\ \mathfrak{w}_{\eta'}\aabs{\{\eta',\zeta\}}&\lesssim\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}+(1+\xi)\ip{t}\aabs{\widetilde{\X}}^{-1}\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\varepsilon^2(1+\ip{t}\aabs{\widetilde{\X}}^{-1})\mathcal{D}[\zeta](t). \end{aligned} \end{equation} The analogous estimates hold with the roles of the primed and unprimed variables interchanged. 
\end{lemma} \begin{remark}\label{rem:transition_bds} We highlight two consequences of these bounds: \begin{enumerate} \item For the transition from close to far (resp.\ far to close), on $\Omega_t^{cl}\cap\Omega_t^{far}$ we have that $\ip{t}\leq\aabs{\widetilde{\X}}\leq 10\ip{t}$ and thus the bounds \eqref{eq:deriv-deriv-PB} from Lemma \ref{lem:trans_short} follow, i.e.\ we have that \begin{equation} \mathcal{D}[\gamma'](t)\lesssim (\xi +\xi^{-3})\mathcal{D}[\gamma](t),\qquad \mathcal{D}[\gamma](t)\lesssim (\xi +\xi^{-3})\mathcal{D}[\gamma'](t). \end{equation} \item On $\Omega_t^{cl}$ we have the following control of Poisson brackets of $\gamma$ in terms of those of $\gamma'$: \begin{equation}\label{eq:PBcl_byfar} \mathfrak{1}_{\Omega_t^{cl}}\mathcal{D}[\gamma](t)\lesssim (\xi+\xi^{-3})(1+\ip{t}\aabs{\widetilde{\X}}^{-1})\mathcal{D}[\gamma'](t). \end{equation} By \eqref{eq:bulk-bds}, in the bulk this improves to \begin{equation}\label{eq:PBcl_byfar_bulk} \mathfrak{1}_{\Omega_t^{cl}\cap\mathcal{B}}\mathcal{D}[\gamma](t)\lesssim (\xi^2+\xi^{-3})\mathcal{D}[\gamma'](t). 
\end{equation} \end{enumerate} \end{remark} \begin{proof}[Proof of Lemma \ref{lem:trans_full}] We will use repeatedly the following bounds which follow from \eqref{eq:XX1X3}, \eqref{PBX1}-\eqref{PBX1'} and \eqref{eq:betterdl}, together with the bound $\vert\widetilde{\X}\vert\lesssim\langle t\rangle$ on $\Omega_t^{cl}$ (with outgoing resp.\ incoming asymptotic actions): \begin{equation}\label{PBXZetaApp} \begin{aligned} \aabs{\{\widetilde{\V},\zeta\}}&\lesssim\frac{q}{\xi^2}[1+\frac{t\xi}{\aabs{\widetilde{\X}}^2}]\aabs{\{\xi,\zeta\}}+\frac{\xi^3}{2q\aabs{\widetilde{\X}}^2}\aabs{\{\eta,\zeta\}}+\aabs{\widetilde{\V}}\aabs{\{{\bf u},\zeta\}}+\frac{\xi^2}{q\aabs{\widetilde{\X}}^2}\aabs{\{{\bf L},\zeta\}}\\ &\lesssim q\ip{\xi}^{-1}(1+\frac{t\xi}{\aabs{\widetilde{\X}}^2})\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\frac{\xi\ip{\xi}}{q\aabs{\widetilde{\X}}^2}\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\frac{q}{\xi}\aabs{\{{\bf u},\zeta\}}+\frac{\xi\ip{\xi}}{q\aabs{\widetilde{\X}}^2}\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}},\\ \aabs{\{\widetilde{\X},\zeta\}}&\lesssim \xi^{-1}[\aabs{\widetilde{\X}}+\frac{tq}{\xi}]\aabs{\{\xi,\zeta\}}+\frac{\xi^2}{q}\aabs{\{\eta,\zeta\}}+\aabs{\widetilde{\X}}\aabs{\{{\bf u},\zeta\}}+\frac{\xi}{q}\aabs{\{{\bf L},\zeta\}}\\ &\lesssim \ip{t}\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\frac{\ip{\xi}}{q}\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\aabs{\widetilde{\X}}\aabs{\{{\bf u},\zeta\}}+\frac{\ip{\xi}}{q}\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}. \end{aligned} \end{equation} On $\Omega_t^{cl}$ there holds that $\aabs{\widetilde{\X}}\lesssim\ip{t}$, and thus $\xi\lesssim\ip{t}^{1/2}$. It then follows for $\xi'\neq 0$ that \begin{equation}\label{eq:xiquot} \frac{\xi}{\xi'}=\frac{\xi}{q}a'=\frac{\xi}{q}(a'-a)+1,\qquad \frac{\xi}{q}\abs{a'-a}\leq 4\frac{\xi}{q}\abs{\mathcal{W}(t)}\lesssim \frac{\varepsilon}{q\ip{t}^{1/2}}\qquad \Rightarrow \qquad \frac{1}{2}\xi'\leq \xi\leq 2\xi', \end{equation} which gives \eqref{EquivXiXi'App}. 
\begin{enumerate}[wide] \item In $\xi$: Note that since $\{a,\zeta\}=\frac{1}{2a}\{a^2,\zeta\}$, we have that \begin{equation} \{\xi,\zeta\}=-\frac{\xi^2}{q}\{a,\zeta\}=-\frac{\xi^3}{2q^2}\{a^2,\zeta\} \end{equation} and thus by \eqref{eq:diffa2} \begin{equation*} \{\xi',\zeta\}=\frac{(\xi')^3}{2q^2}\{a^2-(a')^2,\zeta\}-\frac{(\xi')^3}{2q^2}\{a^2,\zeta\}=\frac{(\xi')^3}{q^2}\mathcal{W}(t)\cdot\{\widetilde{\V},\zeta\}+(\frac{\xi'}{\xi})^3\{\xi,\zeta\}. \end{equation*} Hence \begin{equation}\label{eq:xixi'PB} \begin{aligned} \mathfrak{w}_{\xi'}\aabs{\{\xi',\zeta\}}&\lesssim\frac{\xi'\ip{\xi'}}{q^2}\abs{\mathcal{W}(t)}\aabs{\{\widetilde{\V},\zeta\}}+\frac{\xi'\ip{\xi'}}{\xi\ip{\xi}}\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}\\ &\lesssim \left(\xi'\frac{\ip{\xi'}}{\ip{\xi}}q^{-1}\Big(1+\frac{t\xi}{\aabs{\widetilde{\X}}^2}\Big)\abs{\mathcal{W}(t)}+\frac{\xi'\ip{\xi'}}{\xi\ip{\xi}}\right)\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}\\ &\qquad +\frac{\xi'\ip{\xi'}}{q^2}\abs{\mathcal{W}(t)}\left(\frac{\xi\ip{\xi}}{q\aabs{\widetilde{\X}}^2}\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\frac{q}{\xi}\aabs{\{{\bf u},\zeta\}}+\frac{\xi\ip{\xi}}{q\aabs{\widetilde{\X}}^2}\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}\right). \end{aligned} \end{equation} By \eqref{eq:xiquot} we then have \begin{equation}\label{PBXIprimeZetaApp} \mathfrak{w}_{\xi'}\aabs{\{\xi',\zeta\}}\lesssim (1+\frac{\varepsilon^2}{\aabs{\widetilde{\X}}})\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\varepsilon^2\frac{1}{\aabs{\widetilde{\X}}}\left(\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}\right)+\varepsilon^2\ip{t}^{-1/2}\aabs{\{{\bf u},\zeta\}}. \end{equation} This gives the first inequality in \eqref{TransitionMapBoundsApp}. \item In ${\bf L}$: Since ${\bf L}'-{\bf L}=-\widetilde{\X}\times\mathcal{W}=:-\delta{\bf L}$, we have that \begin{equation} \{{\bf L}',\zeta\}-\{{\bf L},\zeta\}=\mathcal{W}(t)\times\{\widetilde{\X},\zeta\}. 
\end{equation} Hence \begin{equation} \aabs{\{{\bf L}',\zeta\}}\lesssim \aabs{\{{\bf L},\zeta\}}+\abs{\mathcal{W}(t)}\aabs{\{\widetilde{\X},\zeta\}} \end{equation} and thus \begin{equation} \begin{aligned} \mathfrak{w}_{{\bf L}'}\aabs{\{{\bf L}',\zeta\}}&\lesssim \left(\frac{\mathfrak{w}_{{\bf L}'}}{\mathfrak{w}_{\bf L}}+\abs{\mathcal{W}(t)}\frac{\ip{\xi}\mathfrak{w}_{{\bf L}'}}{q}\right)\mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}\\ &\qquad +\abs{\mathcal{W}(t)}\mathfrak{w}_{{\bf L}'}\left(\langle t\rangle\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\frac{\ip{\xi}}{q}\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\aabs{\widetilde{\X}}\aabs{\{{\bf u},\zeta\}}\right), \end{aligned} \end{equation} and hence \begin{equation} \mathfrak{w}_{{\bf L}'}\aabs{\{{\bf L}',\zeta\}}\lesssim \mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}+\varepsilon^2\left(\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\aabs{\{{\bf u},\zeta\}}\right), \end{equation} where we used that $\mathfrak{w}_{{\bf L}'}\mathfrak{w}_{{\bf L}}^{-1}\lesssim 1$ by \eqref{eq:xiquot}. This gives the second inequality in \eqref{TransitionMapBoundsApp}. \item In ${\bf u}$: Recall that ${\bf R}=\frac{q}{2}\frac{{\bf X}}{\abs{{\bf X}}}+{\bf V}\times{\bf L}=\frac{q}{2}{\bf u}+a{\bf u}\times{\bf L}$ and thus \begin{equation} \{{\bf R},\zeta\}=\frac{q}{2}\{{\bf u},\zeta\}-q\{\xi^{-1}{\bf L}\times{\bf u},\zeta\}=\frac{q}{2}\{{\bf u},\zeta\}+q\xi^{-2}({\bf L}\times{\bf u})\{\xi,\zeta\}-q\xi^{-1}[{\bf L}\times\{{\bf u},\zeta\}-{\bf u}\times\{{\bf L},\zeta\}].
\end{equation} It follows that \begin{equation} {\bf L}\times\{{\bf R},\zeta\}=\frac{q}{2}{\bf L}\times\{{\bf u},\zeta\}-q\lambda^2\xi^{-2}{\bf u}\{\xi,\zeta\}+q\xi^{-1}[\lambda^2\{{\bf u},\zeta\}+{\bf L}({\bf u}\cdot\{{\bf L},\zeta\})+{\bf u}({\bf L}\cdot\{{\bf L},\zeta\})], \end{equation} and hence \begin{equation} \begin{aligned} \{{\bf R},\zeta\}+2\xi^{-1}{\bf L}\times\{{\bf R},\zeta\}&=q\xi^{-2}({\bf L}\times{\bf u})\{\xi,\zeta\}+q\xi^{-1}{\bf u}\times\{{\bf L},\zeta\}-2\xi^{-1}q\lambda^2\xi^{-2}{\bf u}\{\xi,\zeta\}\\ &\quad +2q\xi^{-2}[{\bf L}({\bf u}\cdot\{{\bf L},\zeta\})+{\bf u}({\bf L}\cdot\{{\bf L},\zeta\})]+\Big(\frac{q}{2}+2q\xi^{-2}\lambda^2\Big)\{{\bf u},\zeta\}, \end{aligned} \end{equation} i.e.\ \begin{equation} \begin{aligned} \frac{q}{2}(1+4\kappa^2)\{{\bf u},\zeta\}&=\{{\bf R},\zeta\}+2\xi^{-1}{\bf L}\times\{{\bf R},\zeta\}-q\xi^{-2}({\bf L}\times{\bf u})\{\xi,\zeta\}-q\xi^{-1}{\bf u}\times\{{\bf L},\zeta\}\\ &\quad +2q\kappa^2{\bf u}\xi^{-1}\{\xi,\zeta\}-2q\xi^{-2}[{\bf L}({\bf u}\cdot\{{\bf L},\zeta\})+{\bf u}({\bf L}\cdot\{{\bf L},\zeta\})]\\ &=\{{\bf R},\zeta\}+2\kappa{\bf l}\times\{{\bf R},\zeta\}+q\xi^{-1}[2\kappa^2{\bf u}-\kappa({\bf l}\times{\bf u})]\{\xi,\zeta\}\\ &\quad -q\xi^{-1}[{\bf u}\times\{{\bf L},\zeta\}+2\kappa({\bf l}({\bf u}\cdot\{{\bf L},\zeta\})+{\bf u}({\bf l}\cdot\{{\bf L},\zeta\}))]. \end{aligned} \end{equation} Applying this to $({\bf u}^\prime,{\bf R}^\prime,\xi^\prime,\kappa^\prime,{\bf L}^\prime)$, we get \begin{equation} \begin{aligned} \aabs{\{{\bf u}',\zeta\}}&\lesssim \ip{\kappa'}^{-1}\aabs{\{{\bf R'},\zeta\}}+(\xi')^{-1}\aabs{\{\xi',\zeta\}}+\ip{\kappa'}^{-1}(\xi')^{-1}\aabs{\{{\bf L}',\zeta\}}. 
\end{aligned} \end{equation} \begin{itemize} \item For the third term we obtain \begin{equation} \ip{\kappa'}^{-1}(\xi')^{-1}\aabs{\{{\bf L}',\zeta\}}\lesssim \ip{\kappa'}^{-1}(\xi')^{-1}\mathfrak{w}_{{\bf L}}^{-1}\cdot\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}+\ip{\kappa'}^{-1}(\xi')^{-1}\abs{\mathcal{W}(t)}\aabs{\{\widetilde{\X},\zeta\}} \end{equation} with, using \eqref{EquivXiXi'App}, \begin{equation} (\xi')^{-1}\mathfrak{w}_{{\bf L}}^{-1}\cdot\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}\lesssim \ip{\xi}\xi^{-2}\cdot\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}} \end{equation} and, using \eqref{PBXZetaApp}, \begin{equation} \begin{aligned} (\xi')^{-1}\abs{\mathcal{W}(t)}\aabs{\{\widetilde{\X},\zeta\}}&\lesssim\abs{\mathcal{W}(t)}\xi^{-1}\left(\ip{t}\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\frac{\ip{\xi}}{q}\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\ip{t}\aabs{\{{\bf u},\zeta\}}+\frac{\ip{\xi}}{q}\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}\right)\\ &\lesssim \varepsilon^2\cdot\xi^{-1}\cdot\left(\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\ip{t}^{-1/2}\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\aabs{\{{\bf u},\zeta\}}+\ip{t}^{-1/2}\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}\right). \end{aligned} \end{equation} \item For the second term we obtain using \eqref{PBXIprimeZetaApp}, \begin{equation} (\xi')^{-1}\aabs{\{\xi',\zeta\}}\lesssim \mathfrak{w}_{\xi'}\aabs{\{\xi',\zeta\}}\lesssim \mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\varepsilon^2\langle t\rangle^{-\frac{1}{2}}\left(\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\aabs{\{{\bf u},\zeta\}}+\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}\right). \end{equation} \item The first term is more involved. 
With ${\bf R}'={\bf R}-\mathcal{W}\times{\bf L}-{\bf V}\times\delta{\bf L}+\mathcal{W}\times\delta{\bf L}$ we obtain that \begin{equation} \begin{aligned} \ip{\kappa'}^{-1}\aabs{\{{\bf R'},\zeta\}}&\lesssim \ip{\kappa'}^{-1}\aabs{\{\mathcal{W}\times{\bf L}+{\bf V}\times\delta{\bf L}-\mathcal{W}\times\delta{\bf L},\zeta\}}+\frac{\ip{\kappa}}{\ip{\kappa'}}\ip{\kappa}^{-1}\aabs{\{{\bf R},\zeta\}}\\ &\lesssim \abs{\mathcal{W}(t)}\ip{\kappa'}^{-1}\left(\aabs{\{{\bf L},\zeta\}}+(q\xi^{-1}+\abs{\mathcal{W}(t)})\aabs{\{\widetilde{\X},\zeta\}}+\aabs{\widetilde{\X}}\aabs{\{\widetilde{\V},\zeta\}}\right)+\frac{\ip{\kappa}}{\ip{\kappa'}}\ip{\kappa}^{-1}\aabs{\{{\bf R},\zeta\}}. \end{aligned} \end{equation} By Lemma \ref{lem:prime_bds} \begin{equation} \abs{\kappa-\kappa'}\leq \xi^{-1}\abs{\lambda-\lambda'}+\lambda'\frac{\abs{\xi'-\xi}}{\xi\xi^\prime}\lesssim \abs{\mathcal{W}(t)}\left[\xi^{-1}\ip{t}+\xi^\prime\kappa'\right], \end{equation} and thus, using also that $\xi\lesssim_q \langle t\rangle$, \begin{equation} \frac{\ip{\kappa}}{\ip{\kappa'}}\lesssim 1+\xi^{-1}\ip{t}\aabs{\mathcal{W}(t)}\lesssim 1+\varepsilon^2\xi^{-1}. \end{equation} Next we use that \begin{equation} \aabs{\{{\bf R},\zeta\}}\lesssim q\left[\ip{\kappa}\aabs{\{{\bf u},\zeta\}}+\xi^{-1}\kappa\aabs{\{\xi,\zeta\}}+\xi^{-1}\aabs{\{{\bf L},\zeta\}}\right] \end{equation} to conclude that \begin{equation} \begin{aligned} \ip{\kappa'}^{-1}\aabs{\{{\bf R'},\zeta\}}&\lesssim_q \left(1+\varepsilon^2\xi^{-1}+\frac{\varepsilon^2}{\aabs{\widetilde{\X}}}\right)\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\varepsilon^2\left(\langle t\rangle^{-\frac{1}{2}}(1+\xi^{-1})+\frac{1}{\aabs{\widetilde{\X}}}\right)\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}\\ &\quad +\left(1+\varepsilon^2\xi^{-1}\right)\aabs{\{{\bf u},\zeta\}} +\left(1+\xi^{-3}+\frac{\varepsilon^2}{\aabs{\widetilde{\X}}}\right)\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}. 
\end{aligned} \end{equation} \end{itemize} Therefore, \begin{equation} \begin{aligned} \aabs{\{{\bf u}',\zeta\}}&\lesssim (1+\varepsilon^2\xi^{-1}+\varepsilon^2\aabs{\widetilde{\X}}^{-1})\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\varepsilon^2\left((1+\xi^{-1})\langle t\rangle^{-\frac{1}{2}}+\aabs{\widetilde{\X}}^{-1}\right)\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}\\ &\quad+(1+\varepsilon^2\xi^{-1})\aabs{\{{\bf u},\zeta\}}+(1+\xi^{-3}+\varepsilon^2\aabs{\widetilde{\X}}^{-1})\mathfrak{w}_{\bf L}\aabs{\{{\bf L},\zeta\}}, \end{aligned} \end{equation} which gives the third bound in \eqref{TransitionMapBoundsApp}. \item In $\eta$: By \eqref{eq:diffeta} there holds that \begin{equation} \begin{aligned} \eta'+\sigma\circ\Phi_t^{-1}\circ\mathcal{M}_t^{-1}&=(\eta+\sigma\circ\Phi_t^{-1})-\delta A\\ \delta A&:=t\frac{(a')^3-a^3}{q}+\frac{a-a'}{q}(\widetilde{\X}\cdot\widetilde{\V})+\frac{a'}{q}(\widetilde{\X}\cdot\mathcal{W}(t)). \end{aligned} \end{equation} Note that since $\sigma\circ\Phi_t^{-1}\circ\mathcal{M}_t^{-1}=\sigma(\eta'+tq^2(\xi')^{-3},\kappa')$ we have that \begin{equation} \begin{aligned} \{\sigma\circ\Phi_t^{-1}\circ\mathcal{M}_t^{-1},\zeta\}&=\partial_\eta\sigma(\eta'+tq^2(\xi')^{-3},\kappa')(\{\eta',\zeta\}-3tq^2(\xi')^{-4}\{\xi',\zeta\})\\ &\quad+\partial_\kappa\sigma(\eta'+tq^2(\xi')^{-3},\kappa')\{\kappa',\zeta\}, \end{aligned} \end{equation} and similarly for $\sigma\circ\Phi_t^{-1}$, where by \eqref{eq:deriv_rho}, \eqref{BoundsOnSigma}-\eqref{ImprovedBoundsOnSigma} and the bound $\rho\gtrsim \vert\widetilde{\X}\vert\cdot q\xi^{-2}$, we have that \begin{equation} \abs{\partial_\eta\sigma}\leq \frac{1}{2\rho}\leq\frac{1}{2},\qquad \abs{\partial_\eta\sigma(\eta,\kappa)}\lesssim\frac{q}{a^2\aabs{{\bf X}}},\qquad\abs{\partial_\kappa\sigma}\lesssim \min\{\kappa\rho^{-2},\ip{\kappa}^{-1}\}. 
\end{equation} Hence \begin{equation} \begin{aligned} \frac{1}{2}\aabs{\{\eta',\zeta\}}&\leq [1+\partial_\eta\sigma(\eta'+tq^2(\xi')^{-3},\kappa')]\cdot \aabs{\{\eta',\zeta\}}\\ &\lesssim \aabs{\partial_\eta\sigma\circ\Phi_t^{-1}\circ\mathcal{M}_t^{-1}}\cdot tq^2(\xi')^{-4}\aabs{\{\xi',\zeta\}}+\aabs{\partial_\kappa\sigma\circ\Phi_t^{-1}\circ\mathcal{M}_t^{-1}}\aabs{\{\kappa',\zeta\}}\\ &\quad +\aabs{\{\eta+\sigma\circ\Phi_t^{-1},\zeta\}}+\aabs{\{\delta A,\zeta\}}. \end{aligned} \end{equation} Hence with $\xi\sim\xi'$ we have the bounds \begin{equation} \mathfrak{w}_\eta\cdot\aabs{\partial_\eta\sigma\circ\Phi_t^{-1}\circ\mathcal{M}_t^{-1}}\cdot tq^2(\xi')^{-4}\aabs{\{\xi',\zeta\}}\lesssim \frac{q}{(1+\xi)^2}t\aabs{\partial_\eta\widetilde{\sigma}}\cdot \mathfrak{w}_{\xi'}\aabs{\{\xi',\zeta\}}\lesssim \frac{t}{\aabs{\widetilde{\X}}}\left(\mathfrak{w}_\xi\aabs{\{\xi',\zeta\}}+\varepsilon^2\mathcal{D}[\zeta]\right), \end{equation} and \begin{equation} \begin{aligned} \mathfrak{w}_\eta\cdot\aabs{\partial_\kappa\sigma\circ\Phi_t^{-1}\circ\mathcal{M}_t^{-1}}\aabs{\{\kappa',\zeta\}}&\lesssim\frac{\xi^2}{1+\xi}\cdot\ip{\kappa'}^{-1}\xi^{-1}\left(\aabs{\{\lambda',\zeta\}}+\kappa'\aabs{\{\xi',\zeta\}}\right)\\ &\lesssim \mathfrak{w}_{{\bf L}}\aabs{\{{\bf L}',\zeta\}}+\frac{\xi^3}{(1+\xi)^2}\mathfrak{w}_\xi\aabs{\{\xi',\zeta\}}\\ &\lesssim \xi\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}+\varepsilon^2(1+\xi\aabs{\widetilde{\X}}^{-1})\mathcal{D}[\zeta](t), \end{aligned} \end{equation} where we used \eqref{eq:xixi'PB} to absorb one extra factor of $\xi$. Similarly, \begin{equation} \mathfrak{w}_\eta\cdot\aabs{\{\eta+\sigma\circ\Phi_t^{-1},\zeta\}}\lesssim \mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}+(\xi+t\aabs{\widetilde{\X}}^{-1})\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}. 
\end{equation} Finally, we observe that with $\{a^3,\zeta\}=\frac{3}{2}a\{a^2,\zeta\}$ and using \eqref{eq:diffa2}, it follows that \begin{equation} \begin{aligned} \{(a')^3-a^3,\zeta\}&=\frac{3}{2}a'\{(a')^2-a^2,\zeta\}+\frac{3}{2}(a'-a)\{a^2,\zeta\}\\ &=-3a'\mathcal{W}(t)\cdot\{\widetilde{\V},\zeta\}-3(a'-a)\frac{q^2}{\xi^3}\{\xi,\zeta\}, \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \{a'-a,\zeta\}&=\frac{\xi'}{2q}\{(a')^2-a^2,\zeta\}+\frac{\xi'-\xi}{2q}\{a^2,\zeta\}\\ &=-\frac{\xi'}{q}\mathcal{W}(t)\cdot\{\widetilde{\V},\zeta\}-(\xi'-\xi)\frac{q}{\xi^3}\{\xi,\zeta\}, \end{aligned} \end{equation} so that, using Lemma \ref{lem:prime_bds}, \begin{equation} \begin{aligned} \aabs{\{\delta A,\zeta\}}&\lesssim_q t\abs{\mathcal{W}(t)}\left(\xi^{-1}\aabs{\{\widetilde{\V},\zeta\}}+\xi^{-3}\aabs{\{\xi,\zeta\}}\right)+\xi\abs{\mathcal{W}(t)}\aabs{\widetilde{\X}}\aabs{\widetilde{\V}}\left(\aabs{\{\widetilde{\V},\zeta\}}+\xi^{-2}\aabs{\{\xi,\zeta\}}\right)\\ &\quad +\abs{\mathcal{W}(t)}\left(\aabs{\widetilde{\X}}\aabs{\{\widetilde{\V},\zeta\}}+\aabs{\widetilde{\V}}\aabs{\{\widetilde{\X},\zeta\}}\right)+\abs{\mathcal{W}(t)}\left(\aabs{\widetilde{\X}}(\xi')^{-2}\aabs{\{\xi',\zeta\}}+(\xi')^{-1}\aabs{\{\widetilde{\X},\zeta\}}\right).
\end{aligned} \end{equation} From this and \eqref{PBXZetaApp}, we deduce that \begin{equation} \begin{aligned} \mathfrak{w}_\eta\aabs{\{\delta A,\zeta\}}&\lesssim \varepsilon^2 \mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\xi^2\ip{\xi}^{-2}\aabs{\mathcal{W}(t)}\aabs{\widetilde{\X}}\mathfrak{w}_\xi\aabs{\{\xi',\zeta\}}\\ &\quad+\abs{\mathcal{W}(t)}(\aabs{\widetilde{\X}}\xi^{2}\ip{\xi}^{-1}+t\xi\ip{\xi}^{-1})\aabs{\{\widetilde{\V},\zeta\}}+\varepsilon^2\ip{t}^{-1}\frac{\xi}{\ip{\xi}}\aabs{\{\widetilde{\X},\zeta\}}\\ &\lesssim \varepsilon^2\left[(1+t\aabs{\widetilde{\X}}^{-1})\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\mathfrak{w}_{\bf u}\aabs{\{{\bf u},\zeta\}}+(1+\aabs{\widetilde{\X}}^{-1})\left(\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}\right)\right]. \end{aligned} \end{equation} Altogether we obtain that \begin{equation} \mathfrak{w}_{\eta'}\aabs{\{\eta',\zeta\}}\lesssim\mathfrak{w}_\eta\aabs{\{\eta,\zeta\}}+\mathfrak{w}_{{\bf L}}\aabs{\{{\bf L},\zeta\}}+(1+\xi)\ip{t}\aabs{\widetilde{\X}}^{-1}\mathfrak{w}_\xi\aabs{\{\xi,\zeta\}}+\varepsilon^2(1+\ip{t}\aabs{\widetilde{\X}}^{-1})\mathcal{D}[\zeta](t), \end{equation} which gives the last inequality in \eqref{TransitionMapBoundsApp}. \end{enumerate} \end{proof} \end{document}
\begin{document} \setlength{\baselineskip}{21pt} \title{Discontinuous Homomorphisms of $C(X)$ with $2^{\aleph_0}>\aleph_2$} \author{Bob A. Dumas\\ Department of Philosophy\\ University of Washington\\ Seattle, Washington 98195} \date{November 11, 2016} \maketitle \begin{abstract} Assume that $M$ is a c.t.m. of $ZFC+CH$ containing a simplified $(\omega_1,2)$-morass, $P\in M$ is the poset adding $\aleph_3$ generic reals and $G$ is $P$-generic over $M$. In $M$ we construct a function between sets of terms in the forcing language, that interpreted in $M[G]$ is an $\mathbb R$-linear order-preserving monomorphism from the finite elements of an ultrapower of the reals, over a non-principal ultrafilter on $\omega$, into the Esterle algebra of formal power series. Therefore it is consistent that $2^{\aleph_0}=\aleph_3$ and, for any infinite compact Hausdorff space $X$, there exists a discontinuous homomorphism of $C(X)$, the algebra of continuous real-valued functions on $X$. \end{abstract} \baselineskip = 18pt \setcounter{section}{0} \section{Introduction} This paper addresses a problem in the theory of Banach Algebras concerning the existence of discontinuous homomorphisms of $C(X)$, the algebra of continuous real-valued functions with domain $X$, where $X$ is an infinite compact Hausdorff space. In [\ref{Johnson}], B. Johnson proved that there is a discontinuous homomorphism of $C(X)$ provided that there is a nontrivial submultiplicative norm on the finite elements of an ultrapower of $\mathbb R$ over $\omega$. In [\ref{Esterle1}], J. Esterle constructs an algebra of formal power series, $\mathcal{E}$, and shows in [\ref{Esterle2}] that the infinitesimal elements of $\mathcal{E}$ admit a nontrivial submultiplicative norm. By results of Esterle in [\ref{Esterle3}], it is known that $\mathcal{E}$ is an $\eta_1$-ordering of cardinality $2^{\aleph_0}$. 
Furthermore, $\mathcal{E}$ is a totally ordered field by a result of Hahn in 1907 [\ref{Hahn}], and is real-closed by a result of Maclane [\ref{Maclane}]. It is a theorem of P. Erd\"{o}s, L. Gillman and M. Henriksen in [\ref{Erdos}] that any pair of $\eta_1$-ordered real-closed fields of cardinality $\aleph_1$ are isomorphic as ordered fields. In fact, it is shown using a back-and-forth argument that any order-preserving field isomorphism between countable subsets of $\eta_1$-ordered real-closed fields may be extended to an order-isomorphism. It is a standard result of model theory that for any non-principal ultrafilter $U$ on $\omega$, ${\mathbb R}^{\omega}/U$ is an $\aleph_1$-saturated real-closed field (and hence an $\eta_1$-ordering). By a result of Johnson [\ref{Johnson}], between any pair of $\eta_1$-ordered real-closed fields of cardinality $\aleph_1$ there is an $\mathbb R$-linear order-preserving field isomorphism (hereafter $\mathbb R$-isomorphism). This implies, in a model of the continuum hypothesis (CH), that there is an $\mathbb R$-linear order-preserving monomorphism (hereafter $\mathbb R$-monomorphism) from the finite elements of ${\mathbb R}^{\omega}/U$ to $\mathcal{E}$, and hence in models of ZFC+CH there exists a discontinuous homomorphism of $C(X)$. The proof that in a model of ZFC+CH there exists a discontinuous homomorphism of $C(X)$ is due independently to Dales [\ref{Dales1}] and Esterle [\ref{Esterle2}]. Shortly thereafter R. Solovay found a model of ZFC+$\neg$CH in which all homomorphisms of $C(X)$ are continuous. Later, in his Ph.D. thesis, W.H. Woodin constructed a model of ZFC+Martin's Axiom in which all homomorphisms of $C(X)$ are continuous. This naturally gave rise to the question of whether there is a model of set theory in which CH fails and there is a discontinuous homomorphism of $C(X)$. 
Woodin subsequently showed that in the Cohen extension of a model of ZFC+CH by generic reals indexed by $\omega_2$, there is a discontinuous homomorphism of $C(X)$ [\ref{Woodin}]. Woodin shows that in this model the gaps in $\mathcal{E}$ that must be witnessed in a classical back-and-forth construction are always countable. He observes that this construction may not be extended to a Cohen extension by more than $\aleph_2$ generic reals. He suggests the plausibility of using morasses to construct an $\mathbb R$-monomorphism from the finite elements of an ultrapower of the reals to the Esterle algebra in generic extensions with more than $\aleph_2$ generic reals. Woodin's argument does not extend to higher powers of the continuum and leaves open the question of whether there exists a discontinuous homomorphism of $C(X)$ in models of set theory in which $2^{\aleph_0}>\aleph_2$. In this paper we show that the existence of a simplified $(\omega_1,2)$-morass in a model of $ZFC + CH$ is sufficient for the existence of a discontinuous homomorphism of $C(X)$ in a model in which $2^{\aleph_0}=\aleph_3$. We show that in the Cohen extension adding $\aleph_2$ generic reals to a model of $ZFC + CH$ containing a simplified $(\omega_1,1)$-morass, there is a level, morass-commutative term function that, interpreted in the Cohen extension, is an $\mathbb R$-monomorphism of the finite elements of an ultrapower of $\mathbb R$ over $\omega$ into the Esterle Algebra. This is achieved with a transfinite construction of length $\omega_1$, utilizing the morass functions from the gap-one morass to complete the construction of size $\aleph_2$ by commutativity with morass maps. 
Using the techniques of this argument, we construct a term function with a transfinite construction of length $\omega_1$ and utilize morass-commutativity with the embeddings of a gap-2 morass to complete the construction of an $\mathbb R$-monomorphism from the finite elements of a standard ultrapower of $\mathbb R$ over $\omega$ to the Esterle Algebra in the Cohen extension adding $\aleph_3$ generic reals. The technical obstacles to such a construction may be reduced to conditions we call morass-extendability. This paper is dependent on the results of [\ref{Dumas}] and [\ref{Dumas2}], in which term functions are constructed that are forced to be order-preserving functions. The arguments here are very similar, with the additional requirement that the functions are also ring homomorphisms. \section{Preliminaries} In our initial construction we use a simplified $(\omega_1,1)$-morass. We construct a function on terms in the forcing language for adding $\aleph_2$ generic reals that is forced in all generic extensions to be an $\mathbb R$-monomorphism from the finite elements of ${\mathbb R}^{\omega}/U$, where $U$ is a standard non-principal ultrafilter (see Definition 6.14 [\ref{Dumas}]) in the generic extension, into the Esterle Algebra, $\mathcal{E}$. In some sense we follow the classical route to such constructions - extension by transcendental elements in an inductive construction of length $\omega_1$. We will require commutativity with morass maps to construct a function on a domain of cardinality $\aleph_2$ making only $\aleph_1$ many explicit commitments. However with each commitment of the construction, there are uncountably many future commitments implied by commutativity with morass maps. In [\ref{Velleman}] D. Velleman defines a simplified $(\omega_1,1)$-morass. 
\begin{definition}$($Simplified $(\omega_1,1)$-morass$)$\label{thm2.1} A simplified $(\omega_1,1)$-morass is a structure \[ \mathcal{M}=\langle (\theta_\alpha \mid \alpha\leq \omega_1),(\mathcal{F}_{\alpha \beta}\mid \alpha<\beta\leq \omega_1)\rangle \] that satisfies the following conditions:\\ (P0) (a) $\theta_0=1$, $\theta_{\omega_1}=\omega_2$, $(\forall \alpha<\omega_1)\ 0<\theta_\alpha<\omega_1$.\\ \ (b) $\mathcal{F}_{\alpha \beta}$ is a set of order-preserving functions $f:\theta_\alpha \to \theta_\beta$.\\ (P1) $\mid \mathcal{F}_{\alpha \beta} \mid \leq \omega$ for all $\alpha<\beta<\omega_1$.\\ (P2) If $\alpha<\beta<\gamma$, then $\mathcal{F}_{\alpha \gamma}=\{ f\circ g\mid f\in \mathcal{F}_{\beta \gamma}$, $g\in \mathcal{F}_{\alpha \beta} \}$.\\ (P3) If $\alpha <\omega_1$, then $\mathcal{F}_{\alpha (\alpha+1)}=\{ \textnormal{id}\upharpoonright \theta_{\alpha} , f_{\alpha} \}$ where $f_{\alpha}$ satisfies: \[ (\exists \delta_{\alpha}<\theta_\alpha)\ f_\alpha \upharpoonright \delta_{\alpha} = \textnormal{id}\upharpoonright \delta_{\alpha} \ \textnormal{ and } \ f_\alpha(\delta_{\alpha})\geq \theta_\alpha. \] (P4) If $\alpha \leq \omega_1$ is a limit ordinal, $\beta_1, \beta_2<\alpha$, $f_1\in \mathcal{F}_{\beta_1 \alpha}$ and $f_2\in \mathcal{F}_{\beta_2 \alpha}$, then there is $\gamma<\alpha$, $\gamma>\beta_1, \beta_2$, and there is $f_1'\in \mathcal{F}_{\beta_1 \gamma}$, $f_2'\in \mathcal{F}_{\beta_2 \gamma}$, $g\in \mathcal{F}_{\gamma \alpha}$ such that $f_1=g\circ f_1'$ and $f_2=g\circ f_2'$.\\ (P5) For all $\alpha>0$, $\theta_{\alpha}=\bigcup \{ f[\theta_{\beta}] \mid \beta<\alpha$, $f\in \mathcal{F}_{\beta \alpha} \}$. \end{definition} Simplified gap-1 morasses, as well as higher gap simplified morasses, are known to exist in $L$. We will construct, by an inductive argument of length $\omega_1$, a function between sets of terms in the forcing language adding $\aleph_2$ generic reals.
We interpret the morass functions on ordinals as functions between terms in the forcing language and require that the set of terms under construction satisfy certain commutativity constraints with the morass functions. It is implicit that any commitment to an ordered pair of terms in the construction is \emph{de facto} a commitment to uncountably many ordered pairs in mutually generic extensions. In [\ref{Dumas}] we worked explicitly with terms in the forcing language. We wish to simplify the details of the construction by working with objects in a forcing extension. We use the notions of term complexity and strict level, from [\ref{Dumas}], and apply them to objects in a forcing extension. \begin{notation}$($P(A)$)$\label{thm4.7} If $A$ is a set of ordinals, we let $P(A)$ be the poset adding generic reals indexed by the ordinals of $A$. That is, \[ P(A):= Fn(A\times \omega,2), \] the finite partial functions from $A\times \omega$ to $2$. \end{notation} Let $M$ be a c.t.m. of ZFC and $\beta$ be an ordinal; then $P(\beta)=Fn(\beta \times \omega,2)$ is the poset adding generic reals indexed by $\beta$. Suppose $S\subseteq \beta$. Let $\tau \in M^{P(S)}$ be a discerning term in the forcing language adding generic reals indexed by $S$. Then $\tau$ has strict level $S$ provided that, for any proper subset $A$ of $S$ and any term $\sigma$ in the forcing language adding generic reals indexed by $A$, $\Vdash \tau \neq \sigma$. Not all terms have strict levels. Alternatively, we consider $P(\beta)$ as the product forcing $P(A)\times P(S\setminus A) \times P(\beta \setminus S)$. Suppose $G(A)$ is $P(A)$-generic over $M$, $G$ is $P(S\setminus A)$-generic over $M[G(A)]$, and $H$ is $P(\beta \setminus S)$-generic over $M[G(A),G]$. Then if $\tau \in M[G(A),G,H]$ has strict level $S$, $\tau \notin M[G(A)]$.
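As an illustrative aside, the combinatorics of conditions in $P(A)=Fn(A\times\omega,2)$ (finite partial functions ordered by reverse inclusion, with compatibility meaning agreement on the common domain) can be sketched in Python; the function names below are ours, not notation from the paper.

```python
# Toy model of the poset P(A) = Fn(A x omega, 2): a condition is a finite
# partial function from A x omega to {0, 1}, represented as a dict with
# keys (alpha, n).  Names (compatible, meet) are illustrative only.

def compatible(p, q):
    """Two conditions are compatible iff they agree on their common domain."""
    return all(q[k] == v for k, v in p.items() if k in q)

def meet(p, q):
    """The weakest common extension of two compatible conditions."""
    assert compatible(p, q)
    r = dict(p)
    r.update(q)
    return r

# p decides bits 0 and 1 of the generic real at index alpha = 0:
p = {(0, 0): 1, (0, 1): 0}
q = {(0, 1): 0, (2, 5): 1}   # agrees with p where the domains overlap
r = {(0, 1): 1}              # disagrees with p at (0, 1)

print(compatible(p, q))  # True
print(compatible(p, r))  # False
print(meet(p, q))
```

The poset order is reverse inclusion: `meet(p, q)` extends both conditions and is their weakest common extension.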
So an object in a forcing extension has strict level $S$ just in case there is a term for it in the forcing language with strict level $S$, and if $\tau$ is a term of strict level $S$, then in any generic extension the value of $\tau$ has strict level $S$. Consequently in our construction we pass freely between objects of strict level $S$ in a generic extension and terms of strict level $S$ in the forcing language. Many of the constraints required for commutativity with morass maps are expressed in terms of the strict level of objects in a forcing extension (or correspondingly, terms in the forcing language). For instance, in [\ref{Dumas}] we define a term function to be level if the strict level of any term in the domain equals the strict level of its image under the function. Such maps will commute with morass maps in the manner required by our construction. \section{Constructing an $\mathbb R$-monomorphism on a real-closed field} We wish to construct a set of terms in the forcing language for adding $\aleph_2$ generic reals that is forced in all generic extensions to be an $\mathbb R$-monomorphism on the finite elements of an ultrapower of $\mathbb R$. It is a result of B. Johnson [\ref{Johnson}] that $\eta_1$-ordered real-closed fields with cardinality $\aleph_1$ are $\mathbb R$-isomorphic in models of ZFC+CH. This result strengthens the classical result that $\aleph_1$-saturated real-closed fields of cardinality $\aleph_1$ are isomorphic. It is conceivable that in a naive back-and-forth construction of an order-isomorphism between $\eta_1$-ordered real-closed fields, a choice is made for an image (or pre-image) of the function under construction that precludes satisfaction of $\mathbb R$-linearity. \begin{definition}$($Full real-closed field$)$\label{thm3.1} A real-closed field $D$ is full iff for every finite element, $r+\delta$, where $r\in \mathbb R$ and $\delta$ is infinitesimal, $r\in D$. \end{definition} We will need to extend two results due to B.
Johnson [\ref{Johnson}] to meet the requirement of $\mathbb R$-linearity in the context of constructing term functions using a morass. \begin{lemma}$(B. Johnson)$\label{thm3.2} Assume that $D$ and $I$ are full real-closed subfields of $\eta_1$-ordered real-closed fields $D^*$ and $I^*$ (resp.), $\phi:D\to I$ is an $\mathbb R$-monomorphism, and $r\in \mathbb R$. Then there is an extension of $\phi$, $\phi^*$, that is an $\mathbb R$-monomorphism of the real closure of the field generated by $D$ and $r$, $F(D,r)$, onto the real closure of the field generated by $I$ and $r$, $F(I,r)$. Furthermore $F(D,r)$ (and consequently, $F(I,r)$) is full. \end{lemma} \begin{lemma}$(B. Johnson)$\label{thm3.3} Let $D$, $D^*$, $I$, $I^*$ and $\phi$ be as in Lemma \ref{thm3.2}, $x\in D^*$ and assume that the real closure of the field generated by $D$ and $x$, $F(D,x)$, is full. Let $y\in I^*$ be such that \[ (\forall d\in D)(d<x \iff \phi(d)<y). \] Then there is an $\mathbb R$-monomorphism extending $\phi$, $\phi^*:F(D,x)\to I^*$, such that $\phi^*(x)=y$. \end{lemma} \begin{definition}$($Archimedean valuation$)$\label{thm3.4} If $x$ and $y$ are non-zero elements of a real-closed field, they have the same Archimedean valuation, $x \sim y$, provided that there are $m, n \in \mathbb{N}$ such that \[ \mid x\mid <n\mid y\mid \] and \[ \mid y\mid<m\mid x\mid . \] If $\mid x\mid <\mid y\mid$ and $x \nsim y$, then $x$ has Archimedean valuation greater than $y$, $x \succ y$. \end{definition} Archimedean valuation is an equivalence relation on the non-zero elements of a real-closed field (RCF). The non-zero real numbers have the same valuation. Elements with the same valuation as a real number are said to have real valuation. In a nonstandard real-closed field, elements with valuation greater than a real valuation are infinitesimal. The finite elements of a real-closed field are the infinitesimal elements and those with real valuation.
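As an illustrative aside, Archimedean valuation is easy to model on monomials $c\,x^a$ with exponents in $\mathbb Z$ (a toy stand-in for the exponent group used in the next section): two monomials share a valuation exactly when their exponents agree, positive exponents are infinitesimal, and negative exponents are infinite. A minimal Python sketch, with names of our own choosing:

```python
from fractions import Fraction

# Monomials c * eps^a, where eps is a fixed positive infinitesimal and the
# exponent group is Z.  A monomial is a pair (a, c) with c != 0.  A larger
# exponent means a greater Archimedean valuation (smaller magnitude).
# All names here are illustrative, not the paper's notation.

def same_valuation(m1, m2):
    (a1, c1), (a2, c2) = m1, m2
    assert c1 != 0 and c2 != 0
    return a1 == a2          # bounded multiples of each other iff exponents agree

def classify(m):
    a, c = m
    if a > 0:
        return "infinitesimal"   # valuation greater than every real valuation
    if a < 0:
        return "infinite"        # valuation less than every real valuation
    return "real"

x = (0, Fraction(3))      # the real number 3
y = (0, Fraction(7, 2))   # 7/2: same (real) valuation as x
z = (1, Fraction(5))      # 5 * eps: infinitesimal

print(same_valuation(x, y))  # True
print(classify(z))           # infinitesimal
```

The finite monomials in this toy model are exactly those with exponent $\geq 0$, matching the description of finite elements above.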
\section{The Esterle algebra} We define the Esterle algebra [\ref{Esterle2}] and review some basic properties. \begin{definition} $(S_{\omega_1})$\label{thm4.1} $S_{\omega_1}$ is the lexicographic linear-ordering with domain $\{s:\omega_1 \to 2 \mid s$ has countable support and the support of $s$ has a largest element $\}$. \end{definition} \begin{definition} $(G_{\omega_1})$\label{thm4.2} $G_{\omega_1}$ is the ordered group with domain $\{ g:S_{\omega_1} \to \mathbb R \mid g$ has countable well-ordered support$ \}$, lexicographic ordering, and group operation pointwise addition. \end{definition} We define an ordered algebra of formal power series, $\mathcal{E}$. The universe of $\mathcal{E}$ is the set of formal power series, $\sum_{\lambda <\gamma} \alpha_{\lambda} x^{a_{\lambda}}$, where: \begin{enumerate} \item $\gamma<\omega_1$. \item $(\forall \lambda<\gamma)\, \alpha_{\lambda} \in \mathbb R$. \item $\{a_{\lambda}\mid \lambda<\gamma\}$ is a countable well-ordered subset of $G_{\omega_1}$ and $\lambda_1<\lambda_2<\gamma \Rightarrow a_{\lambda_1}<a_{\lambda_2}$. \end{enumerate} The ordered algebra, $\mathcal{E}$, is isomorphic to the set of functions, with countable well-ordered support, from $G_{\omega_1}$ to $\mathbb R$. The lexicographic order linearly orders $\mathcal{E}$. Addition is pointwise and multiplication is defined as follows.\\ Suppose $a=\sum_{\lambda<\gamma_1} \alpha_{\lambda} x^{a_{\lambda}}$ and $b= \sum_{\kappa<\gamma_2} \beta_{\kappa}x^{b_{\kappa}}$ are members of $\mathcal{E}$. Let \[ C= \{ c\mid (\exists \lambda<\gamma_1) (\exists \kappa<\gamma_2) \, \, c=a_{\lambda}+b_{\kappa} \}. \] Then \[ a\cdot b= \sum_{c\in C}((\sum_{a_{\lambda}+b_{\kappa}=c} \alpha_{\lambda}\cdot \beta_{\kappa}) \, x^c ). \] \begin{definition}$($Esterle algebra, $\mathcal{E})$\label{thm4.3} The Esterle algebra, $\mathcal{E}$, is $\{ f:G_{\omega_1} \to \mathbb R \mid f$ has countable well-ordered support$\}$.
$\mathcal{E}$ is lexicographically ordered, with pointwise addition, and multiplication defined above. \end{definition} $\mathbb R$ may be embedded in $\mathcal{E}$ by $\alpha \longmapsto \alpha x^e$, where $e$ is the group identity in $G_{\omega_1}$. Exponents in $G_{\omega_1}$ larger than $e$ (called positive exponents) correspond to infinitesimal Archimedean valuations, and those smaller than $e$ (called negative exponents) correspond to infinite valuations. The finite elements of $\mathcal{E}$ are those with leading exponent $\geq e$. The Archimedean valuations of the Esterle algebra are represented by the group of exponents of $\mathcal{E}$. \begin{theorem}(J. Esterle [\ref{Esterle2}]) \label{thm4.4} $\mathcal{E}$ is an $\eta_1$-ordered real-closed field. \end{theorem} A norm, $\| \, \|$, on an algebra $A$ is submultiplicative if for any $a,b \in A$, \[ \| a\cdot b\| \leq \| a\| \cdot \| b\|. \] \begin{theorem}(G. Dales [\ref{Dales1}], J. Esterle [\ref{Esterle3}]) \label{thm4.5} The set of finite elements of $\mathcal{E}$ bears a submultiplicative norm. \end{theorem} It is a standard result of model theory that if $U$ is a non-principal ultrafilter on $\omega$, the ultrapower ${\mathbb R}^{\omega}/U$ is an $\aleph_1$-saturated real-closed field. Any two $\aleph_1$-saturated, or $\eta_1$-ordered, real-closed fields with cardinality of the continuum are isomorphic in models of ZFC+CH. Hence CH implies that there is a discontinuous homomorphism of $C(X)$. \begin{theorem}(B. Johnson [\ref{Johnson}]) \label{thm4.6} (CH) If $U$ is a non-principal ultrafilter, there is an $\mathbb R$-monomorphism from the finite elements of ${\mathbb R}^{\omega}/U$ into $\mathcal{E}$. \end{theorem} We turn our attention to terms in a forcing language $M^P$ that are forced to be members of the Esterle algebra. In [\ref{Dumas}] and [\ref{Dumas2}], we found sufficient conditions for morass constructions.
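As an illustrative aside, the multiplication of formal power series defined above, and the submultiplicativity of a norm, can be checked on finite-support examples. The sketch below uses exponents in $\mathbb Q$ as a toy stand-in for $G_{\omega_1}$, and the $\ell^1$ norm of the coefficients, which is submultiplicative for this convolution product; none of the names are the paper's notation.

```python
from fractions import Fraction
from itertools import product

# A finite-support series is a dict {exponent: coefficient} with exponents
# in Q.  The product a*b collects alpha_lambda * beta_kappa at each exponent
# sum c, exactly as in the definition of multiplication above.

def mul(a, b):
    out = {}
    for (e1, c1), (e2, c2) in product(a.items(), b.items()):
        out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def l1(a):
    """l1 norm of the coefficients (illustrative choice of norm)."""
    return sum(abs(c) for c in a.values())

# a = 2*x^0 + 1*x^(1/2),  b = 3*x^(1/2) - 1*x^1
a = {Fraction(0): 2, Fraction(1, 2): 1}
b = {Fraction(1, 2): 3, Fraction(1): -1}
ab = mul(a, b)

print(ab)                       # exponents 1/2, 1, 3/2 with coefficients 6, 1, -1
print(l1(ab) <= l1(a) * l1(b))  # True: submultiplicative on this example
```

Note how the two contributions to the exponent-$1$ term ($2\cdot(-1)$ and $1\cdot 3$) are summed, as in the inner sum $\sum_{a_\lambda+b_\kappa=c}\alpha_\lambda\cdot\beta_\kappa$.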
The aggregate of these conditions was characterized as morass-definability and gap-2 morass-definability. The central theorems of the papers were that morass-definable $\eta_1$-orderings are order-isomorphic in the Cohen extension adding $\aleph_2$ generic reals of a model of $ZFC + CH$ containing a simplified $(\omega_1,1)$-morass; and gap-2 morass-definable $\eta_1$-orderings are order-isomorphic in the Cohen extension adding $\aleph_3$ generic reals of a model of $ZFC + CH$ containing a simplified $(\omega_1,2)$-morass. \begin{definition}(Level-dense) Let $T_X\in M^P$ be forced to be a linear-ordering and $X\subseteq M^P$ be a set of terms of strict level for the domain of $T_X$. $X$ is level-dense provided that for $x, y\in X$, where the support of $x$ and the support of $y$ are disjoint, and $G$, $P$-generic over $M$, if $M[G]\models x<y$ then there is $z\in X\cap M$ such that $M[G] \models x<z<y$. \end{definition} \begin{definition}(Upward level-dense) Let $T_X\in M^P$ be forced to be a linear-ordering, and $X\subseteq M^P$ be a set of terms of strict level for the domain of $T_X$. $X$ is upward level-dense provided that for every $x,y,z\in X$, in which $z$ has strict level $A\subseteq \kappa$, $p\in P$ with $p\Vdash x<z<y$ and $B\supseteq A$, there is a term of strict level $B$, $w\in X$, such that $p\Vdash x<w<y$. \end{definition} \begin{definition}$($Morass-commutative$\,)$\label{def4.8} Suppose $\morass$ is a simplified $(\omega_1,1)$-morass, $\lambda\leq\omega_1$ and $X\subseteq M^{P(\omega_1)}$. We say that $X$ is morass-commutative beneath $\lambda$ if for any $\zeta<\xi\leq\lambda$, $f\in \mathcal{F}_{\zeta \xi}$ and $x\in X\cap M^{P_{\zeta}}$, $f(x)\in X$. We say that $X$ is morass-commutative if $X$ is morass-commutative beneath $\omega_1$. \end{definition} \begin{definition}$($Morass-definable$\, )$\label{thm4.8} Let $\langle X,<\rangle \in M[G]$ be a linear ordering.
$X$ is morass-definable if there is a set of terms $T\subseteq M^P$ such that \begin{enumerate} \item $T$ is a morass-commutative set of terms of strict level, and $val_G(T)=X$ \item $T$ is level-dense and upward level-dense \item Every term of $T$ has countable support (that is, any term in $T$ is a term in $M^{P(A)}$, for some $A$ adding generic reals indexed by a countable subset of $\omega_2$) \end{enumerate} \end{definition} Let $E\subseteq M^P$ be the set of terms of strict level for elements in the Esterle algebra in the forcing language of the poset $P$. It is routine to check that $E$ is morass-commutative and that every element of the Esterle algebra is the interpretation of a term with countable support. \begin{lemma} The Esterle algebra is level-dense. \end{lemma} Proof: Let $a, b\in M^P$ be discerning terms for elements of the Esterle algebra bearing disjoint supports. Let $G$ be $P$-generic over $M$ and $M[G]\models a<b$. We wish to show that $a$ and $b$ are separated in $M[G]$ by an element of $\mathcal{E}\cap M$. We work in $M[G]$. Let $\gamma_1$ and $\gamma_2$ be countable ordinals and \[ a=\sum_{\lambda<\gamma_1} \alpha_{\lambda} x^{a_\lambda} \] and \[ b=\sum_{\lambda<\gamma_2} \beta_{\lambda} x^{b_\lambda}. \] If $a$ and $b$ are equal on a partial sum, then that partial sum is in $M$, so subtracting the largest common partial sum of $a$ and $b$, we may assume that $a$ and $b$ differ on the first term of the sums, and \[ \alpha_0 x^{a_0} \neq \beta_0 x^{b_0}. \] If $a_0=b_0$, then $a_0 \in M$ and there is $q\in \mathbb{Q}$ such that \[ \alpha_0<q<\beta_0. \] Then $q x^{a_0}\in M$ and \[ a<q x^{a_0}<b. \] Hence we assume that $a$ and $b$ have distinct Archimedean valuations. We consider the case $a_0<b_0$. Then $\alpha_0<0$. It is sufficient to prove that there is an element of $\mathcal{E} \cap M$ that has valuation between $a_0$ and $b_0$. Let \[ a_0=f:S_{\omega_1} \to \mathbb R \] and \[ b_0=g:S_{\omega_1} \to \mathbb R.
\] If $a_0$ and $b_0$ are equal on an initial segment of their supports, then this initial segment is in the ground model. We can therefore assume that $a_0$ and $b_0$ either have distinct least members, or have the same least member of their supports, $s\in M$, and \[ f(s)<g(s). \] In the latter case there is $q\in \mathbb{Q}$ such that \[ f(s)<q<g(s). \] Then $\{ (s,q) \} \in M$ and \[ a_0< \{ (s,q) \}<b_0. \] Hence we have left to consider the case in which $s_{a_0}$ is the least member of the support of $a_0$, $s_{b_0}$ is the least member of the support of $b_0$ and $s_{a_0}\neq s_{b_0}$. If either $s_{a_0}$ or $s_{b_0}$ is in $M$ we can find a member of $G_{\omega_1} \cap M$ that is a valuation between $a_0$ and $b_0$. So we may assume that neither $s_{a_0}$ nor $s_{b_0}$ is in $M$. As we shall see, it is sufficient to show that between any distinct elements of $S_{\omega_1}$ from mutually generic extensions, there is a member of $S_{\omega_1}$ in the ground model. We assume without loss of generality that $s_{b_0}<s_{a_0}$ (the case $s_{a_0}<s_{b_0}$ is altogether similar). Treating $s_{b_0}$ and $s_{a_0}$ as countable subsets of $\omega_1$, let $\mu$ be the least element of $s_{b_0}$ that is not a member of $s_{a_0}$. Let $\Delta=s_{a_0} \cap \mu$. Then $\Delta=s_{b_0}\cap \mu \in M$. We note that $b_0(\mu)>0$, otherwise $b_0<a_0$, contrary to assumption. Let \[ s_{c_0}=\Delta \cup \{ \mu \}. \] Then $s_{c_0}\in M$ and \[ s_{b_0}<s_{c_0}<s_{a_0} .\] Let $c_0\in M$ be defined so that, for $\rho<\mu$, \[ b_0(\rho)=c_0(\rho) \] and \[ 0<c_0(\mu)<b_0(\mu). \] Then $c_0 \in M$ and \[ a_0<c_0<b_0. \] Recall that in this case $\alpha_0<0$. Let $q\in \mathbb{Q}$ with $q<0$. Then $q x^{c_0} \in M$ and \[ a<q x^{c_0}<b. \] Therefore $\mathcal{E}$ is level-dense. {}{ $\Box$} \vskip 5pt \par \begin{lemma} The Esterle algebra is upward level-dense. \end{lemma} Proof: Let $A\subseteq B$ be countable subsets of $\omega_1$.
Suppose $x, y, z\in M^P$, with $x<z<y$, and $z$ has strict level $A$. It is sufficient to show that there is an element $w\in \mathcal{E}$ of strict level $B$ such that \[ x<w<y. \] We may assume without loss of generality that $z\in M$. If $x$ and $y$ have an identical partial sum, then $z$ must share that partial sum, and it is therefore in $M$. We may subtract the partial sum from all three formal power series and pass to $x$ and $y$ that disagree on the first term of the formal power series. If $x$ and $y$ have the same Archimedean valuation, $a$, then the first term of $z$ has valuation $a$. Let $\alpha\in \mathbb R$ be of strict level $B$ and lie between the initial coefficients of $x$ and $y$. Let $w= \alpha x^a$. Then $w$ has strict level $B$ and \[ x<w<y. \] So we assume that $x$ and $y$ have distinct valuations. If $x$ and $z$ have the same valuation, $a$, and the initial coefficient of $x$ is negative, let $\alpha \in \mathbb R$ be negative, greater than the initial coefficient of $x$, and of strict level $B$. If the initial coefficient of $x$ is positive, let $\alpha$ be positive, greater than the initial coefficient of $x$, and of strict level $B$. In either case, let $w=\alpha x^a$. Then $w$ has strict level $B$ and \[ x<w<y. \] The cases for $z$ and $y$ having the same valuation are similar. If the valuation of $z$ is distinct from the valuations of $x$ and $y$, let $\alpha \in \mathbb R$ be positive and have strict level $B$. Let $w=\alpha \cdot z$. Then $w$ has strict level $B$ and \[ x<w<y. \] Therefore $\mathcal{E}$ is upward level-dense. {}{ $\Box$} \vskip 5pt \par \begin{theorem} The Esterle algebra is morass-definable. \end{theorem} \section{Extendable functions} In order to extend an order-monomorphism by commutativity with a splitting map we must satisfy both algebraic conditions and order constraints.
In the next section we show that the morass-commutative extension of an $\mathbb R$-monomorphism, satisfying certain technical constraints (extendability), may be extended to an $\mathbb R$-monomorphism satisfying those same constraints. The technical constraints are those required for an inductive construction along the vertices of a simplified morass. We state the central technical condition that permits the inductive construction of the following sections. We restrict our attention to functions from terms for a standard ultrapower of $\mathbb R$ to terms for elements of the Esterle algebra. \begin{definition} $($Extendable Function$)$ \label{thm5.2} Let $M$ be a c.t.m. of ZFC, $P$ be the poset adding generic reals indexed by a countable ordinal and $G$ be $P$-generic over $M$. The term function, $\phi:X\to Y$, is extendable provided that the following are satisfied: \begin{enumerate} \item $X$ is a set of terms of strict level for a subring of a standard ultrapower of $\mathbb R$ (over $\omega$) in $M[G]$ that has countable transcendence degree over $\mathbb R$ \item $Y$ is a set of terms of strict level for elements of the Esterle algebra that is closed under partial sums and contains coefficients \item $\phi:X\to Y$ is a level term function that is forced to be an $\mathbb R$-isomorphism (with respect to $\mathbb R \cap X$). \end{enumerate} \end{definition} We interpret a splitting map on a set of countable ordinals as a function on terms in the forcing language as in [\ref{Dumas}]. If $\nu$ is a vertex of a morass, $\theta_{\nu}$ is the ordinal associated with $\nu$, $X\cap M^{P(\theta_{\nu})}$ is morass-commutative beneath $\nu$ and $\sigma\in \mathcal{F}_{\nu \nu+1}$ is the splitting function on $\theta_{\nu}$, then $X\cup \sigma[X]$ is morass-commutative beneath $\nu+1$. We will show that an extendable function may be commutatively extended by a splitting function to an extendable function.
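As an illustrative aside, the action of a splitting map with splitting point $\delta=0$ on forcing conditions can be sketched concretely: $\sigma$ relabels the indices of a condition in $Fn(\theta\times\omega,2)$, and since $\sigma[\theta]\cap\theta=\emptyset$, a condition and its split image have disjoint domains and are therefore compatible. A toy Python sketch (all names and the particular choice of $\sigma$ are ours):

```python
# Toy splitting map with splitting point delta = 0: take theta = 5 and
# sigma(i) = 10 + i, so sigma[theta] = {10,...,14} is disjoint from theta.
# sigma acts on a condition by relabeling the first coordinate of its keys.

theta = 5

def sigma(i):
    assert 0 <= i < theta
    return 10 + i

def apply_split(p):
    """Image of a condition p in Fn(theta x omega, 2) under sigma."""
    return {(sigma(alpha), n): v for (alpha, n), v in p.items()}

def compatible(p, q):
    return all(q[k] == v for k, v in p.items() if k in q)

p = {(0, 0): 1, (3, 2): 0}
sp = apply_split(p)

print(sp)                         # {(10, 0): 1, (13, 2): 0}
print(set(p).isdisjoint(sp))      # True: disjoint domains
print(compatible(p, sp))          # True: p and its split image are compatible
```

Disjointness of the domains is what makes $p$ and $\sigma(p)$ describe mutually generic information, mirroring the way split terms land in mutually generic extensions.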
\section{Commutative extensions of term functions} In the inductive construction of the following sections we will need technical lemmas of two types: those ensuring that commutativity with morass maps may be used to extend extendable functions to extendable functions, and those allowing the extension of the domain of an extendable function by a specified element to an extendable function. Throughout this section we assume: \begin{enumerate} \item $\phi:X\to Y$ is an extendable function. \item $\sigma$ is a splitting map with splitting point $\delta=0$. That is, $\theta<\omega_1$, $\sigma:\theta \to \omega_1$ and $\sigma[\theta]\cap \theta =\emptyset$. \item $P$ is the poset adding generic reals indexed by $\theta$. \item $G$ is $P$-generic over $M$. \item $H$ is $\sigma[P]$-generic over $M[G]$. \end{enumerate} We wish to show that the morass-commutative extension of $\phi$ to the ring generated by $X \cup \sigma[X]$ is extendable. \subsection{Splitting maps and algebraic independence} The central result of this subsection states, roughly, that a subset of a field in a generic extension that is algebraically independent (AI) over the restriction of the field to the ground model, will be AI over the restriction of an extension field to a mutually generic extension. It will follow that the union of an AI subset of a field in a generic extension with its morass ``split'' in a mutually generic extension will be AI over the restriction of the field to the ground model. \begin{lemma}\label{thm6.1} Let $P$ be the poset adding generic reals indexed by $\omega_1$ and $G$ be $P$-generic over $M$. Let $F$ be a morass-definable real-closed field in $M[G]$. Let $\theta<\omega_1$, $P(\theta)$ be the poset adding generic reals indexed by $\theta$, $G(\theta)$ be the $P(\theta)$-generic factor of $G$, and $\sigma$ be a splitting map on $\theta$ (with splitting point $\delta=0$).
If $\chi=\{ x_1,\ldots,x_n\} \subseteq (F\cap M[G])$ is linearly independent (LI) over $M\cap F$ and $H$ is $\sigma[P]$-generic over $M[G]$, then $\chi$ is LI over $M[H]\cap F$. \end{lemma} Proof: Let \[ \mathbf{x}=(x_1, \ldots, x_n)\in F^n\cap M[G]. \] Suppose that there is a non-zero vector $\mathbf{y}\in F^n\cap M[H]$ which is orthogonal to $\mathbf{x}$. Without loss of generality we may assume that the components of $\mathbf{y}$ are LI over $M\cap F$. We consider terms for $\mathbf{x}$ and $\mathbf{y}$, $x\in M^P$ and $y\in M^{\sigma[P]}$ respectively. Let $p\in P$, $q\in \sigma[P]$, be such that \[ p\cdot q \Vdash x \cdot y=0. \] We work below $p\cdot q$ and assume that $x \cdot y=0$ in all $P(\theta \cup \sigma[\theta])$-generic extensions. Let $\langle \sigma_i \mid i\in \omega \rangle$ be a sequence of splitting functions such that \begin{enumerate} \item $\sigma_0=\sigma$. \item For any $i\in \omega$, $\sigma_i$ has splitting point $\delta=0$. \item For any $i\in \omega$, the range of $\sigma_i$, $\theta_i$, is contained in $\omega_1$. \item If $i\neq j$, then the range of $\sigma_i$ is disjoint from the range of $\sigma_j$. \end{enumerate} Let $P_i$ be the poset adding generic reals indexed by $\theta_i$ and $Q_1$ be the poset adding generic reals indexed by $\bigcup_{i\leq n} \theta_i$. Let $q_1 \in Q_1$ be a condition that forces the span of the terms $\{ \sigma_i(y) \mid i \leq n \}$ to have maximal possible dimension, $m\leq n$, and let $\tau$ be a term for the row-reduced echelon form of the matrix with rows $\sigma_1(y), \ldots, \sigma_n(y)$. We consider $q_1$ as a condition in $\Pi_{i=1}^n P_i$, $p_1\cdots p_n$, in which $p_i \leq q$ for all $i\leq n$. Hence $p\cdot q_1$ forces that $x$ is in the null space of $\tau$. For $0<i\leq n$, let \[ \sigma_i^*=\sigma_{n+i} \circ \sigma_i^{-1}. \] Then \[ \sigma_i^*[\theta_i]=\theta_{i+n}.
\] Furthermore, through the definitions of [\ref{Dumas}], $\sigma_i^*$ is naturally interpreted as a function from $P_i$ to $P_{i+n}$, as well as a function from $M^{P_i}$ to $M^{P_{i+n}}$. Let $\sigma^*=\bigcup_{i=1}^n \sigma_i^*$, so that \[ \sigma^*:\bigcup_{i=1}^n \theta_i \to \bigcup_{i=1}^n \theta_{i+n}. \] If $Q_2$ is the poset adding generic reals indexed by $\bigcup_{i=1}^n \theta_{i+n}$ then $\sigma^*:Q_1 \to Q_2$. Furthermore, since $F$ is morass-definable, $\sigma^*(\tau)$ is forced in all $Q_2$-generic extensions to be a row-reduced matrix with dimension $m$. Let $H_1$ be $Q_1$-generic over $M$, with $q_1\in H_1$, and $H_2$ be $Q_2$-generic over $M[H_1]$, with $\sigma^*(q_1)\in H_2$. Let $A_1$ be the value of $\tau$ in $M[H_1]$ and $A_2$ be the value of $\sigma^*(\tau)$ in $M[H_2]$. Then $A_1$ and $A_2$ are row-reduced matrices with dimension $m$. Furthermore, since $m$ is the maximum possible rank for $\tau$ in any generic extension, \[ Span(A_1)=Span(A_2). \] The row-reduced echelon form is canonical, so \[ A_1=A_2. \] However, $M[H_1]$ and $M[H_2]$ are mutually generic extensions, with $A_1\in M[H_1]$ and $A_2\in M[H_2]$. Therefore \[ A_1=A_2\in M. \] In $M[G,H_1,H_2]$, $x$ is in the null-space of $A_1\in M$. This contradicts the assumption that the components of $x$ are LI over $M\cap F$. {}{ $\Box$} \vskip 5pt \par \begin{lemma}\label{thm6.2} Let $F$, $G$, $H$, $\chi$ satisfy the hypotheses of Lemma \ref{thm6.1}. Assume $n\in \mathbb N$, $\bar{x}=(x_1,\ldots,x_n)\in F^n\cap M[G]$, $\bar{y}=(y_1,\ldots,y_n)\in F^n\cap M[H]$ and \[ \sum_{i=1}^n x_i\cdot y_i = 0. \] Then $\bar{y}$ is in the span of vectors in $F^n\cap M$, all of which are orthogonal to $\bar{x}$. \end{lemma} Proof: Let \[ \bar{x}=(x_1,\ldots,x_n) \] and \[ \bar{y}=(y_1,\ldots,y_n). \] As in the previous argument, we work beneath a condition that forces $\bar{x} \perp \bar{y}$. 
Let $\sigma_1,\ldots,\sigma_m$ be splitting functions with pairwise disjoint ranges and $\sigma_1(\bar{x}),\ldots,\sigma_m(\bar{x})$ be a maximal LI set of vectors in mutually generic extensions. It is a consequence of this maximal linear independence that in any mutually generic extension of $M$, the interpretation of $\bar{x}$ is in the span of $\sigma_1(\bar{x}),\ldots, \sigma_m(\bar{x})$. For $i\leq m$, let \[ P_i=\sigma_i[P] \] and \[ P^*=\Pi_{i=1}^m P_i. \] Let $A_1$ be the rank $m$ matrix with row vectors $\sigma_1(\bar{x}),\ldots,\sigma_m(\bar{x})$ as the first $m$ rows, and the zero vector for the remaining rows. The nullity of $A_1$ is $n-m$. Let $B_1$ be the row-reduced echelon form of the basis matrix of the null space of $A_1$. We note that, in a generic extension including generic reals indexed by $\sigma[P]$, $\bar{y}$ is in the null space of $A_1$ and hence in the span of the rows of $B_1$. Let $\tau_{A} \in M^{P^*}$ be a term for $A_1$, and $\tau_{B}$ be a term for $B_1$. We work beneath a condition of $P^*$ that forces that $\tau_{B}$ is the row-reduced echelon form of the basis matrix of the null space of $\tau_{A}$. Let $G^*$ be $P^*$-generic and $H^*$ be $P^*$-generic over $M[G^*][H]$. As in the previous Lemma, if $A_2$ [resp. $B_2$] is the interpretation of $\tau_A$ [resp. $\tau_B$] in $M[H^*]$, then $A_2$ is a matrix of rank $m$ and, in $M[H^*][H]$, $\bar{y}$ is in the null space of $A_2$. Additionally $B_2$ is the row-reduced echelon form of the basis matrix for the null space of $A_2$. Every row of $A_2$ is in the span of the rows of $A_1$, and both have dimension $m$, so the row spans of $A_1$ and $A_2$ are identical. Therefore \[ B_1=B_2. \] However $B_1\in M[G^*]$ and $B_2\in M[H^*]$, with $G^*$ and $H^*$ mutually generic. Hence \[ B_1 \in M. \] Therefore $\bar{y}$ is in the span of the rows of $B_1$, vectors in $F^n\cap M$, each of which is orthogonal to $\bar{x}$. 
{}{ $\Box$} \vskip 5pt \par \begin{corollary}\label{thm6.3} Let $F$, $G$ and $H$ satisfy the hypotheses of Lemma \ref{thm6.1}. If $\chi \subseteq M[G] \cap F$ is algebraically independent (AI) over $M\cap F$, then $\chi$ is AI over $M[H]\cap F$. \end{corollary} Proof: Let $\chi^*$ be the multiplicative semi-group generated by the elements of $\chi$. Then $\chi^*$ is LI over $M\cap F$. By Lemma \ref{thm6.1}, $\chi^*$ is LI over $M[H]\cap F$. Therefore $\chi$ is AI over $M[H]\cap F$. {}{ $\Box$} \vskip 5pt \par \begin{corollary}\label{thm6.4} If $\chi$ is AI over $M\cap F$, then $\chi \cup \sigma[\chi]$ is AI over $M\cap F$. \end{corollary} Proof: Let $G$ be $P$-generic over $M$, and $H$ be $\sigma[P]$-generic over $M[G]$. $F$ is morass-commutative, so $\sigma[\chi]$ is AI over $M$. Suppose there is a nontrivial linear combination (over $M\cap F$) of distinct elements of the semigroup generated by $\chi \cup \sigma[\chi]$ that equals 0. By Corollary \ref{thm6.3}, $\chi$ is AI over $M[H]\cap F$, so there must be a nontrivial linear combination (over $M\cap F$) of elements of the semigroup generated by $\sigma[\chi]$ that equals 0. However $F$ is morass-commutative and $\sigma[\chi]$ is AI over $M\cap F$, a contradiction. {}{ $\Box$} \vskip 5pt \par \begin{lemma}\label{thm6.6} Let $X$ be a subring of a standard ultrapower of $\mathbb R$, ${\mathbb R}^{\omega}/U$. Let $X^*$ be the ring generated by $X\cup \sigma[X]$, where $\sigma$ is a splitting function. If $\phi:X\to \mathcal{E}$ is an extendable $\mathbb R$-monomorphism, then there is an $\mathbb R$-monomorphism, $\phi^*:X^* \to \mathcal{E}$, extending $\phi$ and $\sigma(\phi)$. \end{lemma} Proof: An element of $X^*$ may be expressed as $\sum_{i=1}^n x_i\cdot \sigma(y_i)$, for some $n\in \mathbb N$, and $x_1,\ldots,x_n,y_1,\ldots,y_n \in X$. Let $S$ be the set of expressions of this form. We define a function $\psi:S\to \mathcal{E}$ where \[ \psi(\sum_{i=1}^n x_i\cdot \sigma(y_i)) = \sum_{i=1}^n \phi(x_i)\cdot \sigma(\phi(y_i)). 
\] Let $\iota:S \to X^*$ be the natural quotient map from the expressions of $S$ to $X^*$. The kernel of $\iota$ is the set of expressions of $S$ that sum to $0$ in $X^*$. $\psi$ defines a ring homomorphism on $X^*$ if and only if for any $s$ in the kernel of $\iota$, \[ \psi(s)=0. \] For $i\leq n$ let $z_i=\sigma(y_i)$ and $s=\sum_{i=1}^n x_i\cdot z_i$ be in the kernel of $\iota$. Then in $X^*$, \[ \sum_{i=1}^n x_i\cdot z_i=0. \] Let \[ \bar{x}=(x_1,\ldots,x_n)\in X^n \cap M[G], \] \[ \bar{y}=(y_1,\ldots,y_n) \in X^n\cap M[G] \] and \[ \bar{z}=(z_1,\ldots,z_n) \in \sigma[X]^n \cap M[H]. \] By Lemma \ref{thm6.2}, $\bar{z}$ is in the span of elements of $X^n\cap M$ that are orthogonal to $\bar{x}$. Let $\{ b_1,\ldots,b_m\}$ be an LI set of vectors of $X^n\cap M$ orthogonal to $\bar{x}$ that contains $\bar{z}$ in its span. Let $\langle\cdot,\cdot \rangle$ be the dot product and $(\alpha_1,\ldots,\alpha_m)\in \sigma[X]^m\cap M[H]$ be such that \[ \sum_{i=1}^m \alpha_i\cdot b_i=\bar{z}. \] Let $\bar{\phi}:X^n\to \mathcal{E}^n$ be defined by \[ \bar{\phi}(s_1,\ldots,s_n)=(\phi(s_1),\ldots,\phi(s_n)). \] Recall that for a splitting map $\sigma$, $\sigma(\phi):\sigma[X]\to \mathcal{E}$ is defined so that $\sigma$ and $\phi$ commute. Then \[ \psi(\langle \bar{x},\bar{z}\rangle) = \langle \bar{\phi}(\bar{x}),\sigma(\bar{\phi})(\sum_{i=1}^m \alpha_i \cdot b_i) \rangle = \sum_{i=1}^m \sigma(\alpha_i)\cdot \langle \bar{\phi}(\bar{x}), \sigma(\bar{\phi})(b_i) \rangle. \] However $b_i\in X^n\cap M$ for all $i\leq m$, so \[ \sigma(\bar{\phi})(b_i)=\bar{\phi}(b_i). \] Hence \[ \sum_{i=1}^m \sigma(\alpha_i)\cdot \langle \bar{\phi}(\bar{x}), \sigma(\bar{\phi}) (b_i) \rangle = \sum_{i=1}^m \sigma(\alpha_i)\cdot \langle \bar{\phi}(\bar{x}),\bar{\phi}(b_i) \rangle = \sum_{i=1}^m \sigma(\alpha_i)\cdot \phi(\langle \bar{x},b_i\rangle). \] However, for all $i\leq m$, $b_i\perp \bar{x}$. 
So for all $i\leq m$, \[ \phi(\langle \bar{x},b_i\rangle ) = 0 \] and $\psi(\langle \bar{x},\bar{z} \rangle)=0$. Therefore $\psi$ defines a ring homomorphism on $X^*$, $\phi^*$, that extends $\phi \cup \sigma(\phi)$ to $X^*$. Assume that $\bar{x}, \bar{y}\in M[G]\cap X^n$, $\bar{z}=\sigma(\bar{y})$ and \[ \langle \bar{x},\bar{z} \rangle \neq 0. \] Then there are \[ x_1',\ldots,x_m' \in X \] and \[ z_1',\ldots,z_m' \in \sigma[X] \] such that $\{ x_1',\ldots,x_m'\}$ is LI over $M\cap X$ and \[ \sum_{i=1}^m x_i'\cdot z_i'=\sum_{i=1}^n x_i\cdot z_i \neq 0. \] Then $\{ \phi(x_1'),\ldots,\phi(x_m')\}$ is LI over $\phi[X] \cap M$. By Lemma \ref{thm6.1}, $\{ \phi(x_1'),\ldots,\phi(x_m')\}$ is LI over $\mathcal{E} \cap M[H]$. Therefore \[ \sum_{i=1}^n \phi^*(x_i\cdot z_i)=\sum_{i=1}^m \phi^*(x_i'\cdot z_i')\neq 0. \] Thus $\phi^*$ is a monomorphism. Let $r\in \mathbb R \cap X^*$. Then $r$ is an element of the ring generated by the reals of $\bar{X} \cup \sigma[\bar{X}]$. Hence, for all $r\in \mathbb R\cap X^*$, \[ \phi^*(r)=r \] and $\phi^*$ is $\mathbb R$-linear. We show that $\phi^*$ is order-preserving: if $x_1,\ldots,x_n,y_1,\ldots,y_n \in X$, then $\sum_{i=1}^n x_i\cdot \sigma(y_i)>0$ iff $\sum_{i=1}^n \phi^*(x_i)\cdot \phi^*(\sigma(y_i))>0$. If $n=1$, then the sign of $x_1\cdot \sigma(y_1)$ is the sign of the product of the leading coefficients of $x_1$ and $\sigma(y_1)$, which are preserved by $\phi$ and $\sigma(\phi)$ respectively. So \[ x_1\cdot \sigma(y_1)>0 \iff \phi(x_1)\cdot \sigma(\phi(y_1))>0. \] Assume that $n \geq 2$, and \[ \sum_{i=1}^n x_i\cdot \sigma(y_i)>0. \] Let \[ y=\phi^*(\sum_{i=1}^n x_i\cdot \sigma(y_i))=\sum_{\lambda<\gamma} \alpha_{\lambda}x^{a_{\lambda}}. \] By assumption $\phi[X]$ is closed under partial sums, so there are elements\\ $u_1,\ldots, u_j,u_{j+1},\ldots, u_k,v_1,\ldots,v_k \in Y$ such that \begin{enumerate} \item $y=\sum_{i=1}^k u_i\cdot \sigma(v_i)$. 
\item For $i\leq j$, every term in the power series expansion of $u_i\cdot \sigma(v_i)$ has power (valuation) less than $a_0$. \item For $j<i\leq k$, every term of the power series expansion of $u_i\cdot \sigma(v_i)$ has power (valuation) at least $a_0$. \end{enumerate} Every term of $y$ expressed as a power series has valuation no less than $a_0$, therefore \[ \sum_{i=1}^j u_i\cdot \sigma(v_i)=0 \] and \[ y=\sum_{i=j+1}^k u_i\cdot \sigma(v_i). \] If $j<i\leq k$, then $s_i=u_i\cdot \sigma(v_i)$ has valuation no less than $a_0$ in $G_{\omega_1}$. If $s_i$ has valuation greater than $a_0$, then $s_i$ has Archimedean valuation greater than that of $y$. Let $S$ be the set of indices of the $u_i\cdot \sigma(v_i)$ with valuation $a_0$ and $T$ be the set of indices of the $u_i\cdot \sigma(v_i)$ with valuation greater than $a_0$. $S$ is nonempty and every element of $T$ has Archimedean valuation greater than every element of $S$. Since $\phi$ is extendable, there are $b_0\in M[G]\cap G_{\omega_1}$ and $c_0\in M[H]\cap G_{\omega_1}$ such that $x^{b_0} \in \phi[X]$, $x^{c_0}\in \sigma[\phi[X]]$, $b_0, c_0\geq e$ (in $G_{\omega_1}$) and \[ a_0=b_0+c_0. \] Let $u,v\in X$ be such that \[ \phi(u)=x^{b_0} \] and \[ \sigma(\phi(v))=x^{c_0}. \] We let $\bar{X}$ be the field generated by $X$ and $\psi:\bar{X} \to \mathcal{E}$ be the unique order-preserving field-monomorphism extending $\phi$. We have previously observed that $\phi^*$ is a ring monomorphism and $\psi \cup \sigma(\psi)$ is an order-preserving injection. So \[ \phi^*(\sum_{i=1}^n x_i\cdot \sigma(y_i))=\phi^*(\sum_{i=j+1}^k u_i\cdot \sigma(v_i)) \] and \[ \psi(\sum_{i=j+1}^k (u_i/u)\cdot \sigma(v_i/v))=\alpha_0 + \sum_{0<\lambda<\gamma} \alpha_{\lambda}x^{a_\lambda-a_0}. \] The real number $\alpha_0$ is in the domain and range of $\phi^*$, $\phi^*$ is $\mathbb R$-linear and $\phi^*\subseteq \psi$, so \[ \phi^*(\alpha_0)=\psi(\alpha_0)=\alpha_0. \] Let $z=\sum_{0<\lambda<\gamma} \alpha_{\lambda}x^{a_{\lambda}-a_0}$. 
The range of $\phi^*$ is closed under partial sums, so $\sum_{0<\lambda<\gamma} \alpha_{\lambda}x^{a_{\lambda}}$ is in the range of $\phi^*$, and $x^{a_0}=\phi^*(u\cdot \sigma(v))$ is in the range of $\phi^*$. Thus $z\in \psi[\bar{X}]$. Every term of $z$ is infinitesimal. Therefore $\psi^{-1}(z)$ is infinitesimal and $\sum_{i=1}^n x_i\cdot \sigma(y_i)$, $\sum_{i=1}^n (x_i/u)\cdot \sigma (y_i/v)$ and $\alpha_0$ are each greater than $0$. Therefore $\phi^*$ is extendable. {}{ $\Box$} \vskip 5pt \par \begin{lemma} \label{thm6.7} Let \begin{enumerate} \item $\bar{\nu}<\nu \leq \omega_1$ \item $D$ be a subring of a standard ultrapower of $\mathbb R$ over $\omega$ \item $\phi:D\to \mathcal{E} \in M[G_{\bar{\nu}}]$ be an extendable $\mathbb R$-monomorphism on $D$ \item $D^*=\bigcup_{\sigma \in \mathcal{F}_{\bar{\nu}, \nu}} \sigma[D]$. \end{enumerate} Then there is a unique extendable $\mathbb R$-monomorphism $\phi^*$ on the ring generated by $D^*$ which, for any $\sigma \in \mathcal{F}_{\bar{\nu} \nu}$, extends $\sigma[\phi]$. \end{lemma} Proof: If $\nu=\bar{\nu}+1$, then the result follows from Lemma \ref{thm6.6}. If there is no limit ordinal $\lambda$ with $\bar{\nu}<\lambda \leq \nu$, then there is $n\in \omega$ such that \[ \nu=\bar{\nu}+n. \] By Lemma \ref{thm6.6}, for any extendable $\phi:D\to \mathcal{E}$ and splitting function $\sigma$, the ring monomorphism on the ring generated by $D\cup \sigma[D]$ extending $\phi \cup \sigma[\phi]$ is extendable. By $n$ iterated applications of Lemma \ref{thm6.6}, there is a unique extendable $\mathbb R$-monomorphic extension of $\phi$, $\phi^*\supset \bigcup_{\sigma \in \mathcal{F}_{\bar{\nu} \nu}} \sigma[\phi]$, to the ring generated by $\bigcup_{\sigma \in \mathcal{F}_{\bar{\nu} \nu}} \sigma[D]$. So assume there is a limit ordinal $\lambda$ with $\bar{\nu}<\lambda \leq \nu$, and let $\lambda$ be the least such limit ordinal. Let \[ D_{\lambda}=\bigcup_{\sigma \in \mathcal{F}_{\bar{\nu}, \lambda}} \sigma[D]. 
\] Let $D^*_{\lambda}$ be the ring generated by $D_{\lambda}$. We show that there is an extendable $\mathbb R$-monomorphism of $D^*_{\lambda}$ which, for any $\sigma \in \mathcal{F}_{\bar{\nu} \lambda}$, extends $\sigma[\phi]$. Let $F$ be a finitely generated subring of $D^*_{\lambda}$. Let $\{ d_1,\ldots,d_n\}$ generate $F$, $\sigma_1, \ldots,\sigma_n \in \mathcal{F}_{\bar{\nu},\lambda}$ and for all $i\leq n$, $c_i\in D$ be such that \[ d_i=\sigma_i(c_i). \] By condition P4 in the definition of the simplified morass, there is $m\in \omega$, $g\in \mathcal{F}_{\bar{\nu}+m, \lambda}$ and $f_1,\ldots,f_n \in \mathcal{F}_{\bar{\nu}, \bar{\nu}+m}$ such that, for $i\leq n$, \[ \sigma_i=g\circ f_i. \] For each $j<m$, let $h_j$ be the splitting function of $\mathcal{F}_{\bar{\nu}+j, \bar{\nu}+j+1}$. By Lemma \ref{thm6.6}, $\phi \cup h_0[\phi]$ may be extended to an extendable $\mathbb R$-monomorphism. Furthermore this ring monomorphism may be extended successively by the splitting functions $h_1$ through $h_{m-1}$. Let $\psi$ be the function on $\bigcup_{\sigma \in \mathcal{F}_{\bar{\nu}, \bar{\nu}+m}} \sigma[D]$ resulting after the $m$ splits. Then $\psi$ is extendable and $f_i(c_i)$ is in the domain of $\psi$ for all $i\leq n$. Therefore $g(\psi)$ is an $\mathbb R$-linear order monomorphism and is the restriction of $\phi^*_{\lambda}$, defined below, to a ring containing $F$. Thus \[ \phi^*_{\lambda}=\bigcup_{\sigma \in \mathcal{F}_{\bar{\nu} \lambda}} \sigma[\phi] \] is a well-defined extendable $\mathbb R$-monomorphism of $D^*_{\lambda}$. By an inductive argument on $\nu$, invoking condition P2, and the results above at limits and Lemma \ref{thm6.6} at successor ordinals, it is straightforward to show that $\bigcup_{\sigma \in \mathcal{F}_{\lambda, \nu}} \sigma[\phi^*_{\lambda}]$ has a unique extension to an extendable $\mathbb R$-monomorphism of the ring generated by $\bigcup_{\sigma \in \mathcal{F}_{\lambda, \nu}} \sigma[D^*_{\lambda}]$. 
{}{ $\Box$} \vskip 5pt \par \subsection{Extensions by a specified element} Because ordered subrings of real-closed fields have unique extensions to real-closed subfields, we will be able to restrict our attention to extending subrings by algebraically independent elements. If $X$ is a subring of a real-closed field and $Z$ is a subset of that real-closed field, we let $X[Z]$ be the subring of the real-closed field generated by $X\cup Z$. We continue with conventions similar to those outlined earlier in the section. Specifically we assume: \begin{enumerate} \item $M$ is a c.t.m. of ZFC \item $P$ is the poset adding generic reals indexed by $\theta$ \item $G$ is $P$-generic over $M$ \item $F$ is the ring of finite elements of a standard ultrapower of $\mathbb R$ over $\omega$ in $M[G]$ \item $X\subset F$ \item $Y\subset \mathcal{E}$ \item $\phi:X\to Y$ is extendable. \end{enumerate} We wish to prove analogues of Johnson's theorems that extendable functions may be extended by a specified element. Throughout the arguments of this section, we will commonly use $x$ to represent an element of $F$, and also to represent the variable in the power series representations of members of $\mathcal{E}$. The context will make clear which use is intended. \begin{lemma}\label{thm6.8} Suppose $\phi:X\to Y$ is an extendable function, $r\in \mathbb R$ and $r$ is transcendental over $X$. Then there is an extendable function that extends $\phi$, $\psi:X[r] \to Y[r]$. \end{lemma} Proof: Let $\phi^*$ be the $\mathbb R$-monomorphism extending $\phi$ to the real closure of $X$. Then $\phi^*\upharpoonright_{\mathbb R\cap X}$ is $\mathbb R$-linear. The real closure of $X$ and the real closure of $Y$ contain precisely the same real numbers and are full. By Lemma \ref{thm3.2}, there is an $\mathbb R$-monomorphic extension of $\phi^*$, $\psi^*$, to the real closure of the field generated by $X[r]$. Let $\psi=\psi^*\upharpoonright_{X[r]}$. 
Then $\psi$ is extendable. {}{ $\Box$} \vskip 5pt \par \begin{lemma}\label{thm6.9} Suppose $\phi:X\to Y$ is an extendable function, $x\in F$ has strict level, and is transcendental over $X$. Then there is $\bar{X}\supseteq X[x]$ and extendable $\psi:\bar{X} \to \mathcal{E}$ that extends $\phi$. \end{lemma} Proof: If $x\in \mathbb R$, then apply Lemma \ref{thm6.8}. Assume $x\notin \mathbb R$. We may assume that $x$ is infinitesimal. Let $(l,u)$ be the gap formed by $x$ in $X$, $L=\phi[l]$ and $U=\phi[u]$. The Esterle algebra is an $\eta_1$-ordering, so there is $y\in \mathcal{E}$ that witnesses the gap $(L,U)$. By Johnson's Lemma \ref{thm3.3}, $y$ may be chosen so that the real closure of the field extending $Y[y]$ is full. Although the existence of such an element can be used to advantage, the element $y$ may fail to have some of the properties we require for a morass construction. Let $\mu\subseteq \omega_1$ be the strict level of $x$. We seek a candidate in $\mathcal{E}$ for the image of $x$ with strict level $\mu$ that is transcendental over $Y$. The Esterle algebra is level dense and upward level dense, so by Lemma 4.5 of [\ref{Dumas}] there is $y\in \mathcal{E}$, with strict level $\mu$, such that for all $z\in X$, \[ x<z \iff y<\phi(z). \] It may happen that $Y[y]$ is not closed under partial sums.\\ \\ Case 1: There is a largest partial sum of $y$ that is a member of $Y$.\\ \\ We include in this case the possibility that the first term of $y$ is a monomial not in $Y$, in which case the largest such partial sum is $0$. Let $t$ be the largest partial sum of $y$ that is also a member of $Y$ and $s=\phi^{-1}(t)\in X$. We shift our attention to the gap formed by $x-s$ in $X$. Then for all $z\in X$ \[ x-s<z \iff y-t<\phi(z). \] We note that $x-s$ is transcendental over $X$. Either $x-s$ has the same Archimedean valuation as a member of $X$, or it has a distinct valuation. Assume $x-s$ has the same Archimedean valuation as a member of $X$. 
Let $\alpha x^a$ be the leading term of $y-t$. Then $x^a\in Y$ and $\alpha \notin Y$. The coefficient $\alpha$ has strict level contained in $\mu$. Let $(L^*,U^*)$ be the cut formed in $\mathbb R \cap X$ by $\alpha$. Then there is a real number with strict level $\mu$ that witnesses the gap $(L^*,U^*)$. So we may assume that $\alpha$ has strict level $\mu$. By Lemma \ref{thm6.8} there is an extendable term function, $\psi \supset \phi$, $\psi:X[\alpha]\to Y[\alpha]$, with $\psi(\alpha \phi^{-1}(x^a))=\alpha x^a$. We now assume that $x-s$ has a valuation distinct from the valuations of members of $X$. We also note that the strict levels of $s$ and $t$ are equal and contained in $\mu$. If the strict level of $s$ is a proper subset of $\mu$, then $x-s$ and $y-t$ have level $\mu$. If the strict level of $x-s$ is contained in $\mu$, then there is an element $y^*\in \mathcal{E}$ with a valuation distinct from the valuations of $Y$ having strict level equal to the strict level of $x-s$. We need consider only the case in which $x$ has valuation distinct from the valuations of $X$. Let $y\in \mathcal{E}$ witness the gap $(L,U)$, and let $y_0=\alpha x^a$ be the leading term of $y$. If $\alpha>0$, then $x^a$ witnesses the gap $(L,U)$. We assume without loss of generality that $y_0=x^a$. The strict level of $y_0$ equals the strict level of $a\in G_{\omega_1}$. It is straightforward to see that there is $b\in G_{\omega_1}$ such that \begin{enumerate} \item $b$ is positive. \item Any element of the support of $b$ is greater than any element of the support of any exponent occurring in any power series of $Y$. \item $b$ has strict level $\mu$. \end{enumerate} Let $S$ be the union of the supports of all exponents occurring among the elements of $Y$. The exponent $b\in G_{\omega_1}$ is greater than $0$ but less than any positive exponent in $Y$. Furthermore, $b$ is greater than the constant $0$ function, the additive identity of $G_{\omega_1}$ (and the valuation of the standard reals in $\mathcal{E}$). 
Since $b$ is less than any positive valuation occurring in $Y$, $x^b$ is greater than any positive infinitesimal of $Y$. Additionally, \begin{enumerate} \item $x^{a+b}$ witnesses the gap $(L,U)$. \item $x^{a+b}$ is transcendental over $Y$. \item $x^{a+b}$ has strict level $\mu$. \item The ring generated by $Y\cup \{ x^{a+b}\}$ in $\mathcal{E}$ is closed under partial sums and coefficients. \end{enumerate} Let $\psi:X[x]\to Y[x^{a+b}]$ be the unique $\mathbb R$-monomorphism extending $\phi$ such that \[ \psi(x)=x^{a+b}. \] Then $\psi$ is extendable.\\ \\ Case 2: There is no largest partial sum of $y$ that is a member of $Y$.\\ \\ Let $D$ be the countable well-ordered sequence of exponents of $y$. Let $D'$ be the smallest initial segment of $D$ such that $y\upharpoonright_{D'}$ is not a member of $Y$. Let $y'=y\upharpoonright_{D'}$. Then $y'$ witnesses the gap $(L,U)$. The strict level of $y'$ is a subset of $\mu$, and is possibly a proper subset of $\mu$. If the strict level of $y'$ equals $\mu$, let $\psi:X[x]\to Y[y']$ be the unique $\mathbb R$-monomorphism extending $\phi$ such that \[ \psi(x)=y'. \] Otherwise, let $a\in G_{\omega_1}$ be an exponent of $\mathcal{E}$ that has strict level $\mu$ and is greater than all exponents occurring in $Y$. Let \[ y^*=y'+x^{a}. \] Then $y^*$ witnesses the gap $(L,U)$ and has strict level $\mu$. Let $\psi^*:X[x]\to Y[y^*]$ be the unique $\mathbb R$-monomorphism extending $\phi$ such that \[ \psi^*(x)=y^*. \] Then $\psi^*$ satisfies the conditions for an extendable function, except for closure under partial sums. In particular $y'$ and $x^a$ are not in the range of $\psi^*$. Let $(L^*,U^*)$ be the gap formed by $x^a$ in $Y[y^*]$. We observe that $x^a$ is infinitesimal with respect to every member of $Y[y^*]$. Since $F$ is level dense and upward level dense, there is a positive element of $F$, $\epsilon$, with strict level $\mu$ that is infinitesimal with respect to all elements of $X[x]$. 
Therefore there is an $\mathbb R$-monomorphism $\psi:X[x,\epsilon]\to Y[y^*, x^a]$ extending $\psi^*$ and such that \[ \psi(\epsilon)=x^a. \] We note that $y'\in Y[y^*,x^a]$ and \[ Y[y^*, x^a]=Y[y', x^a]. \] Then $Y[y', x^a]$ is closed under coefficients and partial sums, so $\psi$ is extendable. {}{ $\Box$} \vskip 5pt \par These results permit a simplification of the construction. Given an extendable $\mathbb R$-monomorphism, $\phi:D\to \mathcal{E}$, we may extend $\phi$ to an $\mathbb R$-monomorphism of $D[\mathbb R]$, and then extend by algebraically independent infinitesimals. \section{An $\mathbb R$-monomorphism from the finite elements of ${\mathbb R}^{\omega}/U$ into the Esterle algebra.} In the Cohen extension adding $\aleph_2$ generic reals, we construct a level $\mathbb R$-monomorphism, $\phi$, from the finite elements of a morass-definable ultrapower of $\mathbb R$ into $\mathcal{E}$. This construction differs significantly from the construction of Woodin [\ref{Woodin}]. The construction of Woodin relies on the fact that in the Cohen extension of $L$ by $\aleph_2$-generic reals, any cut of $\mathcal{E}$ has a countable, cofinal subcut. As Woodin observes, this argument is not generalizable to models of ZFC with higher powers of the continuum. Our construction yields a monomorphism that is level, and therefore respects the ``complexity'' (with respect to the index of Cohen reals) of the elements of the ultrapower and the Esterle algebra. Consequently, for any $S\subseteq \omega_2$, $\phi \cap M[G(S)]$ is an $\mathbb R$-monomorphism of the ring of finite elements of ${\mathbb R}^{\omega}/U \cap M[G(S)]$ to $\mathcal{E} \cap M[G(S)]$. \begin{theorem}\label{thm7.1} Suppose $M$ is a c.t.m. of ZFC+CH containing a simplified $(\omega_1,1)$-morass and $P$ is the poset adding generic reals indexed by ordinals less than $\omega_2$. 
Let $G$ be $P$-generic over $M$, $F\in M[G]$ be the ring of finite elements of an ultrapower of $\mathbb R$ over a standard ultrafilter on $\omega$, and $\mathcal{E} \in M[G]$ be the Esterle algebra computed in $M[G]$. Then there is a level $\mathbb R$-monomorphism, $\phi:F\to \mathcal{E}$. \end{theorem} Proof: For each $\nu<\omega_1$, let $G_{\nu}$ be the factor of $G$ adding generic reals indexed by $\theta_{\nu}$ (the ordinal associated with the vertex $\nu$ in the morass), and, for $S\subseteq \omega_1$, $G(S)$ be the factor of $G$ adding generic reals indexed by $S$. Let \[ F_{\nu}=F\cap M[G_{\nu}], \] \[ F(S)=F\cap M[G(S)] \] and $X$ be a maximal algebraically independent set of positive morass generators of $F$. Then $F$ is composed of the finite elements of the real closure of the field generated by the morass-closure of $X$. Let $\langle x_{\alpha} \mid \alpha<\omega_1 \rangle$ be a well-ordering of $X$. Let $\langle \nu_{\alpha} \mid \alpha<\omega_1 \rangle$ be a weakly ascending transfinite sequence of countable ordinals such that $x_{\alpha} \in M[G_{\nu_{\alpha}}]$. We will construct an ascending sequence of functions (ordered by inclusion), $\langle \phi_{\alpha}:D_{\alpha} \to \mathcal{E} \mid \alpha<\omega_1 \rangle$, such that for all $\alpha<\omega_1$ \begin{enumerate} \item $x_\alpha \in D_{\alpha}$ \item $\phi_{\alpha}:D_{\alpha}\to \mathcal{E}$ is extendable \item If $x\in D_{\alpha}$, $\nu \leq \nu_{\alpha}$, the morass generator of $x$ is $x^*\in M[G_{\nu}]$ and $\sigma \in \mathcal{F}_{\nu \nu_{\alpha}}$, then $\sigma(x^*)\in D_{\alpha}$. \end{enumerate} At each stage of the construction, $\alpha<\omega_1$, the domain of $\phi_{\alpha}$ extends the domain of $\bigcup_{\beta<\alpha, f\in \mathcal{F}_{\nu_{\beta} \nu_{\alpha}}} f(\phi_{\beta})$, so that it includes all morass descendants in $M[G_{\nu_{\alpha}}]$, and $\phi_{\alpha}$ is extendable. Case: $\alpha=0$. 
Let $\langle x_{0,n} \mid n\in \omega \rangle \subseteq M[G_{\nu_{0}}]$ be a well-ordering of the morass-descendants of $x_0$ in $M[G_{\nu_{0}}]$. We construct a sequence of $\mathbb R$-monomorphisms, $\langle \phi_{0,n}:D_{0,n}\to \mathcal{E} \mid n\in \omega \rangle$ such that for all $n\in \omega$, \begin{enumerate} \item $D_{0,0}=\mathbb{Q}[x_{0,0}]$ \item $D_{0,n}[x_{0, n+1}]\subseteq D_{0, n+1}$ \item $\phi_{0,n}\in M[G_{\nu_0}]$ \item For all $m<n$, $\phi_{0,m}\subseteq \phi_{0,n}$. \item $\phi_{0,n}$ is extendable. \end{enumerate} Let $z=x_{0,0}$, and $\mu$ be the strict level of $z$. We may assume that $z$ is positive. If $z\in \mathbb R$, let $D_{0,0}=\mathbb{Q}[z]$ and $\phi_{0,0}:D_{0,0}\to \mathcal{E}$ be the identity restricted to $D_{0,0}$. If $z$ is infinitesimal, let $a\in G_{\omega_1}$ be positive and have strict level $\mu$, and \[ y=x^a \in \mathcal{E}. \] Let $D_{0,0}=\mathbb{Q}[z]$ and $\phi_{0,0}:D_{0,0} \to \mathcal{E}$ be the $\mathbb R$-linear ring monomorphism such that \[ \phi_{0,0}(z)=y. \] If $z=r+\epsilon$, where $\epsilon$ is infinitesimal and positive, let $a\in G_{\omega_1}$ have the same strict level as $\epsilon$. Then let \[ y=x^a \] and $D_{0,0}=\mathbb{Q}[r,\epsilon]$. Let $\phi_{0,0}:D_{0,0}\to \mathcal{E}$ be the ring monomorphism extending the identity on $\mathbb{Q}[r]$ such that \[ \phi_{0,0}(\epsilon)=y. \] If $\epsilon<0$, let $y=-x^a$ and define $D_{0,0}$ and $\phi_{0,0}$ analogously. Let $N\in \omega$ and assume that $\langle \phi_{0,n} \mid n\leq N \rangle$ satisfies conditions 1--5 above (below $N+1$). Then $\phi_{0,N}$ is extendable and Lemmas \ref{thm6.8} and \ref{thm6.9} apply. Let $z=x_{0,N+1}$. Since $z$ is the image of a morass-generator with strict level, $z$ is transcendental over $D_{0,N}$. If $z\in \mathbb R$, let \[ D_{0,N+1}=D_{0,N}[z] \] and \[ \phi^*=\phi_{0,N} \cup \{ (z,z) \}. \] By Lemma \ref{thm6.8}, there is an $\mathbb R$-monomorphic extension of $\phi^*$, $\phi_{0,N+1}:D_{0,N+1}\to \mathcal{E}$. 
The subring of $\mathcal{E}$ generated by a set closed under partial sums and coefficients is also closed under partial sums and coefficients, so $\phi_{0,N+1}$ is extendable. If $z$ is non-standard, let $R^*$ be the set of reals contained in the smallest full real closure of $D_{0,N}\cup \{ z\}$. By Lemma \ref{thm6.8} there is an extendable level $\mathbb R$-monomorphism extending $\phi_{0,N}$, $\phi^*:D_{0,N}[R^*]\to \mathcal{E}$. Let \[ D_{0,N+1}=D_{0,N}[R^*,z]. \] Then $D_{0,N+1}\in M[G_{\nu_0}]$. By Lemma \ref{thm6.9} there is an extendable extension of $\phi^*$, \[ \phi_{0,N+1}:D_{0,N+1}\to \mathcal{E}. \] Let \[ D_0=\bigcup_{n\in \omega} D_{0,n} \] and \[ \phi_0=\bigcup_{n\in \omega} \phi_{0,n}. \] Then the morass descendants of $x_0$ (in $M[G_{\nu_0}]$) are elements of the domain of $\phi_0$, and $\phi_0$ is extendable. Assume $\alpha$ is a successor ordinal, and let $\alpha=\bar{\alpha}+1$. Assume that $\langle D_{\beta} \mid \beta < \alpha \rangle$ and $\langle \phi_{\beta} \mid \beta<\alpha \rangle$ have been defined so that for all $\gamma<\beta<\alpha$ \begin{enumerate} \item $x_\gamma\in D_{\gamma}$ \item $D_{\gamma}\subseteq D_{\beta}$ is closed under morass-maps below $\beta$ \item $\phi_{\gamma} \subseteq \phi_{\beta}$ \item $\phi_{\beta}:D_{\beta} \to \mathcal{E}$ is an extendable $\mathbb R$-monomorphism. \end{enumerate} If $\nu_{\alpha}=\nu_{\bar{\alpha}}$, then we may argue as in the previous case. Let $\langle x_{\alpha,n} \mid n<\omega \rangle$ be an enumeration of the morass descendants of $x_{\alpha}$ in $M[G_{\nu_{\alpha}}]$. We may extend $D_{\bar{\alpha}}$ to $D_{\alpha}$ containing the morass descendants of $x_{\alpha}$, and $\phi_{\bar{\alpha}}$ to $\phi_{\alpha}:D_{\alpha} \to \mathcal{E}$ so that for all $\gamma<\beta \leq \alpha$ \begin{enumerate} \item $D_{\gamma} \subseteq D_{\beta}$. \item $\phi_{\gamma}\subseteq \phi_{\beta}$. \item $\phi_{\alpha}$ is extendable. 
\end{enumerate} If $\nu_{\bar{\alpha}}<\nu_{\alpha}$, then by Lemma \ref{thm6.7} there is an extendable $\mathbb R$-monomorphism, $\phi^*$, extending $\bigcup_{\sigma \in \mathcal{F}_{\nu_{\bar{\alpha}} \nu_{\alpha}}} \sigma[\phi_{\bar{\alpha}}]$ to the ring generated by $\bigcup_{\sigma \in \mathcal{F}_{\nu_{\bar{\alpha}} \nu_{\alpha}}} \sigma[D_{\bar{\alpha}}]$, $D^*$. Let $D'$ be the ring generated by $D^*$ and the morass descendants of $x_{\alpha}$ in $M[G_{\nu_{\alpha}}]$. Let $\mathbb R^*$ be the real numbers of the smallest full extension of the real closure of $D'$. Then the real closure of $D'[\mathbb R^*]$ is full. Let \[ D_{\alpha}=D'[\mathbb R^*]. \] By the preceding case, there is an extendable $\mathbb R$-monomorphism $\phi_{\alpha}:D_{\alpha} \to \mathcal{E}$ with $\phi_{\alpha}\supseteq \phi^*$. \\ \\ Finally, assume $\alpha<\omega_1$ is a limit ordinal. \\ \\ If there is $\beta<\alpha$ such that $\nu_{\alpha}=\nu_{\beta}$ then we may proceed as in the case $\alpha=0$ to define $D_{\alpha}$ and $\phi_{\alpha}$.\\ \\ So we assume that $\nu_{\beta}<\nu_{\alpha}$ for all $\beta<\alpha$. Let \[ \lambda=\bigcup_{\beta<\alpha} \nu_{\beta}. \] If $\lambda=\nu_{\alpha}$, then let \[ D^*=\bigcup_{\beta<\alpha}( \bigcup_{\sigma \in \mathcal{F}_{\nu_\beta \nu_\alpha}} \sigma[D_{\beta}]). \] Let $D$ be a finitely generated subring of $D^*$ with generators $\{ d_1,\ldots,d_n\}$. Then there is $\beta<\alpha$, \[ C=\{ c_1,\ldots,c_n \}\subseteq D_{\beta} \] and, for $i\leq n$, morass functions $\sigma_i \in \mathcal{F}_{\nu_{\beta} \nu_{\alpha}}$ such that \[ \sigma_i(c_i)=d_i. \] By condition P4 of Definition \ref{thm2.1}, there is $\nu_{\beta} \leq \gamma<\lambda$, $f_1, \ldots, f_n \in \mathcal{F}_{\nu_{\beta} \gamma}$ and $g\in \mathcal{F}_{\gamma \lambda}$ such that for all $i\leq n$, \[ \sigma_i(c_i)=g\circ f_i(c_i)=d_i. 
\] By Lemma \ref{thm6.7} there is an extendable $\mathbb R$-monomorphism $\phi^*$ on the ring generated by $\bigcup_{\sigma \in \mathcal{F}_{\nu_{\beta} \lambda}} \sigma[D_{\beta}]$ extending $\bigcup_{\sigma \in \mathcal{F}_{\nu_{\beta} \lambda}} \sigma[\phi_{\beta}]$. It follows that there is an extendable $\mathbb R$-monomorphism on the ring generated by $\bigcup_{\beta<\alpha}(\bigcup_{\sigma \in \mathcal{F}_{\nu_{\beta} \nu_{\alpha}}} \sigma[D_{\beta}])$ extending $\bigcup_{\beta<\alpha}(\bigcup_{\sigma \in \mathcal{F}_{\nu_{\beta} \nu_{\alpha}}} \sigma[\phi_{\beta}])$. We may then proceed as in earlier cases to define an extendable $\mathbb R$-monomorphism $\phi_{\alpha}:D_{\alpha} \to \mathcal{E}$, with $D_{\alpha}$ containing the morass descendants of $x_{\alpha}$ in $M[G_{\nu_{\alpha}}]$. Finally, assume that \[ \lim_{\beta<\alpha} \nu_{\beta}=\lambda<\nu_{\alpha}. \] By the previous argument, there is an extendable $\phi^*:\bigcup_{\beta<\alpha}(\bigcup_{\sigma \in \mathcal{F}_{\nu_{\beta} \lambda}} \sigma[D_{\beta}]) \to \mathcal{E}$ extending the morass images in $M[G_{\lambda}]$ of the $\phi_{\beta}$. By Lemma \ref{thm6.7} there is an extendable $\mathbb R$-monomorphism $\phi'_{\alpha}:D'_{\alpha} \to \mathcal{E}$ such that \[ D'_{\alpha} \supseteq \bigcup_{\beta<\alpha}(\bigcup_{\sigma \in \mathcal{F}_{\nu_{\beta} \nu_{\alpha}}} \sigma[D_{\beta}]) \] and \[ \phi'_{\alpha} \supseteq \bigcup_{\beta<\alpha}(\bigcup_{\sigma \in \mathcal{F}_{\nu_{\beta} \nu_{\alpha}}} \sigma[\phi_{\beta}]). \] We proceed as in earlier cases to extend $\phi'_{\alpha}$ to an extendable $\mathbb R$-monomorphism, $\phi_{\alpha}:D_{\alpha} \to \mathcal{E}$, where $D_{\alpha}$ extends $D'_{\alpha}$ and contains the morass descendants of $x_{\alpha}$ in $M[G_{\nu_{\alpha}}]$. Let $D^*$ be the ring in $M[G]$ generated by $\bigcup_{\alpha<\omega_1}(\bigcup_{\sigma \in \mathcal{F}_{\nu_{\alpha} \omega_1}} \sigma[D_{\alpha}])$. 
Then, by an earlier argument, there is an $\mathbb R$-monomorphism, $\phi^*:D^*\to \mathcal{E}$, such that \[ \phi^*\supseteq \bigcup_{\alpha<\omega_1}(\bigcup_{\sigma \in \mathcal{F}_{\nu_{\alpha} \omega_1}} \sigma[\phi_{\alpha}]). \] Then $\phi^*$ is an $\mathbb R$-monomorphism on a domain that contains a transcendental basis for $F$, and extends uniquely to an $\mathbb R$-monomorphism, $\phi:F \to \mathcal{E}$. {}{ $\Box$} \vskip 5pt \par \begin{theorem} Suppose $M$ is a c.t.m. of $ZFC+CH$ containing a simplified $(\omega_1,1)$-morass and $M[G]$ is the Cohen extension adding $\aleph_2$ generic reals. Then if $X$ is an infinite compact Hausdorff space in $M[G]$, there is a discontinuous homomorphism of $C(X)$ in $M[G]$. \end{theorem} Proof: By Corollary 6.9 of [\ref{Dumas}], any non-principal ultrafilter on $\omega$ in $M$ may be extended to a standard ultrafilter in the Cohen extension adding $\aleph_2$-generic reals. If $U$ is a standard ultrafilter, then there is a level $\mathbb R$-monomorphism from the finite elements of ${\mathbb R}^{\omega}/U$ into $\mathcal{E}$. Hence the finite elements of ${\mathbb R}^{\omega}/U$ bear a non-trivial submultiplicative norm. The theorem follows from results of B. Johnson [\ref{Johnson}]. {}{ $\Box$} \vskip 5pt \par \section{Discontinuous homomorphisms of $C(X)$ in a Cohen extension adding $\aleph_3$-generic reals.} The existence of a discontinuous homomorphism of $C(X)$ when the continuum has power $\aleph_2$ has been known for a long while (Woodin [\ref{Woodin}]). Unfortunately the methods of the existing proof do not extend to higher powers of the continuum. It has been suggested that the consistency of a discontinuous homomorphism of $C(X)$ with higher powers of the continuum might be proved with higher-gap morasses. Theorem \ref{thm7.1} is a modest strengthening of Woodin's result, as the constructed $\mathbb R$-monomorphism is an extendable function (except for the cardinality of the function). 
We require a simplified $(\omega_1,2)$-morass for the next construction (Velleman [\ref{Velleman2}]). \begin{definition} $(Simplified \: \: (\kappa,2)-morass)$ \label{def3.1} The structure $\langle \overrightarrow{\varphi}, \overrightarrow{\script{G}}, \overrightarrow{\theta}, \overrightarrow{\script{F}} \rangle$ is a simplified $(\kappa,2)$-morass provided it has the following properties: \begin{enumerate} \item $\langle \overrightarrow{\varphi}, \overrightarrow{\script{G}} \rangle$ is a neat simplified $(\kappa^+,1)$-morass. \item $\forall \alpha<\beta\leq \kappa$, $\script{F}_{\alpha \beta}$ is a family of embeddings (see page 172, $[\ref{Velleman2}]$) from $\langle \langle \varphi_{\zeta}\mid \zeta<\theta_{\alpha}\rangle, \langle \script{G}_{\zeta \xi} \mid \zeta<\xi\leq \theta_{\alpha}\rangle \rangle$ to $\langle \langle \varphi_{\zeta}\mid \zeta<\theta_{\beta}\rangle, \langle \script{G}_{\zeta \xi} \mid \zeta<\xi\leq \theta_{\beta}\rangle \rangle$. \item $\forall \alpha<\beta<\kappa \: \: (\mid \script{F}_{\alpha \beta} \mid<\kappa)$. \item $\forall \alpha<\beta<\gamma\leq \kappa \: \: (\script{F}_{\alpha \gamma}=\{ f \circ g\mid f\in \script{F}_{\beta \gamma}, g\in \script{F}_{\alpha \beta} \} )$. Here $f\circ g$ is defined by:\\ \[ (f\circ g)_{\zeta}=f_{g(\zeta)} \circ g_{\zeta} \; \; \; \; for \; \zeta\leq \theta_{\alpha}, \] \[ (f\circ g)_{\zeta \xi}=f_{g(\zeta) g(\xi)} \circ g_{\zeta \xi} \; \; \; \; for \; \zeta<\xi\leq \theta_{\alpha}. \] \item $\forall \alpha<\kappa$, $\script{F}_{\alpha \alpha+1}$ is an amalgamation (see page 173 $[\ref{Velleman2}]$). \item If $\beta_1, \beta_2 <\alpha \leq \kappa$, $\alpha$ a limit ordinal, $f_1\in \script{F}_{\beta_1 \alpha}$ and $f_2\in \script{F}_{\beta_2 \alpha}$, then $\exists \beta (\beta_1, \beta_2 < \beta <\alpha$ and $\exists f_1'\in \script{F}_{\beta_1 \beta} \: \exists f_2'\in \script{F}_{\beta_2 \beta} \: \exists g\in \script{F}_{\beta \alpha} (f_1=g\circ f_1'$ and $f_2=g\circ f_2'))$. 
\item If $\alpha\leq \kappa$ and $\alpha$ is a limit ordinal, then: \begin{enumerate} \item $\theta_{\alpha}=\bigcup \{ f[\theta_{\beta}] \mid \beta<\alpha$, $f\in \script{F}_{\beta \alpha} \}$. \item $ \forall \zeta \leq \theta_{\alpha}$, $\varphi_{\zeta}=\bigcup \{ f_{\bar{\zeta }}[\varphi_{\bar{\zeta}}] \mid \exists \beta<\alpha (f\in \script{F}_{\beta \alpha}$, $f(\bar{\zeta})=\zeta) \}$. \item $\forall \zeta<\xi \leq \theta_{\alpha}$, $\script{G}_{\zeta \xi}=\bigcup \{ f_{\bar{\zeta} \bar{\xi}}[\script{G}_{\bar{\zeta} \bar{\xi}}] \mid \exists \beta<\alpha \: (f\in \script{F}_{\beta \alpha}$, $f(\bar{\zeta})=\zeta$, $f(\bar{\xi})=\xi ) \}$. \end{enumerate} \end{enumerate} \end{definition} \begin{theorem}\label{thm8.1} Let $M$ be a c.t.m. of $ZFC+CH$ containing a simplified $(\omega_1,2)$-morass, and $M[G]$ be a generic extension of $M$ adding $\aleph_3$ generic reals. Let $X$ be an infinite compact Hausdorff space in $M[G]$, and $C(X)$ be the algebra of continuous real-valued functions on $X$ in $M[G]$. Then there is a discontinuous homomorphism of $C(X)$ in $M[G]$. \end{theorem} Proof: Let $M$ be a c.t.m. of $ZFC + CH$ containing a simplified $(\omega_1,2)$-morass, $\langle \overrightarrow{\varphi}, \overrightarrow{\script{G}}, \overrightarrow{\theta}, \overrightarrow{\script{F}} \rangle$. Let $P$ be the poset adding generic reals indexed by $\omega_3$, and $G$ be $P$-generic over $M$. Then $\langle \overrightarrow{\varphi}, \overrightarrow{\script{G}} \rangle$ is a simplified $(\omega_2,1)$-morass, and below $\omega_1$, $\langle \overrightarrow{\varphi}, \overrightarrow{\script{G}} \rangle$ satisfies the axioms of a simplified $(\omega_1,1)$-morass. Hence the construction of Theorem \ref{thm7.1} below $\omega_1$ can be completed in $M$. 
In particular, if $U_0\in M$ is a non-principal ultrafilter in $M$, then by Corollary 6.9 of [\ref{Dumas2}], there is $\overline{U}\subseteq M^{P(\omega_1)}$, a standard term for an ultrafilter that is morass-commutative below $\omega_1$, that is forced to extend $U_0$. Furthermore, the morass-continuation of $\overline{U}$, $U$, is a standard ultrafilter that is gap-2 morass-commutative and morass-commutative. By Theorem 6.4 of [\ref{Dumas2}], ${\mathbb R}^{\omega}/U$ is a gap-2 morass-definable $\eta_1$ real-closed field. We adapt the argument of Theorem \ref{thm7.1} to the gap-2 morass construction of [\ref{Dumas2}]. Hence we construct a level term function from the finite elements of a standard ultrapower to the Esterle algebra that is closed under morass-embeddings and is forced to be an $\mathbb R$-monomorphism. For $\alpha<\omega_1$, let $X_{\alpha}=({\mathbb R}^{\omega}/U) \cap M^{P_{\alpha}}$ and $Y_{\alpha}=\mathcal{E} \cap M^{P_{\alpha}}$. We consider $X_{\alpha}$ and $Y_{\alpha}$ as the restrictions of ${\mathbb R}^{\omega}/U$ and $\mathcal{E}$, resp., to the forcing language adding generic reals indexed by $\varphi_{\theta_{\alpha}}$. In any $P$-generic extension of $M$, $M[G]$, the interpretation of $X_{\alpha}$ in $M[G]$ is the interpretation of $X_{\alpha}$ in $M[G_{\alpha}]$ where $G_{\alpha}$ is the factor of $G$ that is $P_{\alpha}$-generic over $M$. We construct a morass-commutative level term injection from $X_{\omega_1}$ to $Y_{\omega_1}$ that is forced to be an $\mathbb R$-linear order-monomorphism. The closure under embeddings, $f_{\theta_{\beta}}$ where $f\in \script{F}_{\beta \omega_1}$, of this function will be the term function we seek. Let $\{ x_{\beta} \mid \beta<\omega_1 \}\subseteq X_{\omega_1}$ be a transfinite sequence of terms of strict level for a maximal algebraically independent set of morass generators of the infinitesimal elements of $X_{\omega_1}$, such that $x_{\alpha}\in X_{\alpha}$ for all $\alpha<\omega_1$. 
We will inductively construct a transfinite sequence of morass-commutative term functions $\langle F_{\beta}:D_{\beta}\to E_{\beta} \mid \beta<\omega_1 \rangle$ that satisfies the following for all $\alpha \leq \beta<\omega_1$, \begin{enumerate} \item $D_{\beta}\subseteq X_{\theta_{\beta}}$ is a morass-commutative subring \item $E_{\beta}\subseteq Y_{\theta_{\beta}}$ is closed under partial sums and contains all coefficients \item $D_{\alpha}\subseteq D_{\beta}$ and $E_{\alpha}\subseteq E_{\beta}$ \item $x_{\beta}\in D_{\beta}$ \item $F_{\beta}$ is a level term function that is forced to be an order-preserving $\mathbb R$-monomorphism \item $f_{\theta_{\alpha}}[F_{\alpha}]\subseteq F_{\beta}$ for all $f\in \script{F}_{\alpha \beta}$. \end{enumerate} We call a sequence of term functions satisfying these conditions (beneath $\beta$) an extendable sequence. We argue by induction on $\gamma<\omega_1$. Base Case: $\gamma=0$. Let $y_0$ be a positive infinitesimal monomial of $Y_0$ having the same strict level as $x_0$, and $\mathbb R_0$ be the reals of the ground model. Let $D_0$ be the ring generated by $\mathbb R_0 \cup \{ x_0\}$, $\mathbb R_0 [x_0]$, and $E_0=\mathbb R_0 [y_0]$. We observe that $E_0$ is closed under partial sums. There is an $\mathbb R$-linear order-monomorphism, $F_0:D_0 \to E_0$, with $F_0(x_0)=y_0$. Successor Case: $\gamma=\beta+1$. Let $\langle F_{\alpha}:D_{\alpha}\to E_{\alpha} \mid \alpha \leq \beta \rangle$ be an extendable sequence satisfying conditions 1-6 above. Let $D^*$ be the ring generated by $\{ g_{\theta_{\beta}}[D_{\beta}] \mid g\in \script{G}_{\theta_{\beta} \theta_{\gamma}} \}$. Then $D^*$ is generated by the union of the images of $D_{\beta}$ under the second components of left-branching embeddings of $\script{F}_{\beta \gamma}$. Let $h$ be the right-branching embedding of $\script{F}_{\beta \gamma}$ and $D'$ be the ring generated by $D^*$ and $h_{\theta_{\beta}}[D_{\beta}]$. 
By Lemma 5.2 of [\ref{Dumas2}], $\bigcup \{ f_{\theta_{\beta}}[F_{\beta}] \mid f\in \script{F}_{\beta \gamma} \}$ is a level term injection that is forced to be an $\mathbb R$-linear order-preserving injection. The Lemma does not guarantee that $\bigcup \{ f_{\theta_{\beta}}[F_{\beta}] \mid f\in \script{F}_{\beta \gamma} \}$ extends to a homomorphism of $D'$. However, by Lemma \ref{thm6.7}, there is a unique extendable $\mathbb R$-monomorphism, $F^*:D^* \to \mathcal{E}$. Let $B$ be a transcendental basis of $D_{\beta}$ over ${\mathbb R}^{\omega}/U \cap M$ containing terms of strict level. Let $D$ be the ring generated by $B$ and $D_{\beta}\cap M$. Let $V$ be the semigroup generated by $B$. Because $\script{F}_{\beta \gamma}$ is a set of compatible embeddings, Lemma \ref{thm6.1} applies and, treating $D$ as a vector space (over the field generated by $D_{\beta} \cap M$), $V$ is a basis of elements of strict level for $D$. Therefore, if $F:D \to \mathcal{E}$ is the naturally induced $\mathbb R$-linear extension of $\bigcup_{f\in \script{F}_{\beta \gamma}} f_{\theta_{\beta}}[F_{\beta}]$, the image of $V$ under $F$ is a linearly independent subset of $\mathcal{E}$ over $E_{\beta} \cap M$. Therefore $F$ is a linear transformation and an $\mathbb R$-monomorphism of $D$. Let $F':D' \to \mathcal{E}$ be the $\mathbb R$-monomorphism induced on $D'$ by $F^*$ and $h_{\theta_{\beta}}[F_{\beta}]$. We argue along the lines of Lemma \ref{thm6.6} to show that $F'$ is forced to be order-preserving. By Lemma \ref{thm6.6}, $F'\upharpoonright_{D^*}$ is forced to be order-preserving. If $z\in D'$, there is $n\in \mathbb N$, $x_1,\ldots,x_n \in D^*$ and $y_1,\ldots,y_n \in f_{\theta_{\beta}}[D_{\beta}]$ such that $z=\sum_{i=1}^n x_i\cdot y_i$. We show that $\sum_{i=1}^n x_i\cdot y_i>0$ iff $\sum_{i=1}^n F'(x_i)\cdot F'(y_i)>0$. If $n=1$, then the sign of $x_1\cdot y_1$ is the sign of the product of the leading coefficients of $F'(x_1)$ and $F'(y_1)$. However, $F'$ is $\mathbb R$-linear, so \[ x_1\cdot y_1>0 \iff F'(x_1)\cdot F'(y_1)>0. \] Assume that $n\geq 2$, and \[ \sum_{i=1}^n x_i\cdot y_i>0. 
\] Let \[ z=F'(\sum_{i=1}^n x_i\cdot y_i)=\sum_{\lambda<\gamma} \alpha_{\lambda}x^{a_{\lambda}}. \] By assumption, the range of $F'$ is closed under partial sums, so there are elements $u_1,\ldots, u_j,u_{j+1},\ldots, u_k,v_1,\ldots,v_k \in Y$ such that \begin{enumerate} \item $z=\sum_{i=1}^k u_i\cdot v_i$ \item For $i\leq j$, every term in the power series expansion of $u_i\cdot v_i$ has power less than $a_0$ \item For $j<i\leq k$, every term of the power series expansion of $u_i\cdot v_i$ has power at least $a_0$. \end{enumerate} Every term of $z$, expressed as a power series of $\mathcal{E}$, has valuation no less than $a_0$, therefore \[ \sum_{i=1}^j u_i\cdot v_i=0 \] and \[ z=\sum_{i=j+1}^k u_i\cdot v_i. \] If $j<i\leq k$, then $s_i=u_i\cdot v_i$ has valuation no less than $a_0$ in $G_{\omega_1}$. If the leading term of $s_i$ has exponent greater than $a_0$, then $s_i$ has Archimedean valuation less than $z$. Let $S$ be the set of all terms of all $s_i$, for $j<i\leq k$, having exponent $a_0$, and $T$ be the set of all terms of all $s_i$, $j<i\leq k$, with exponent greater than $a_0$. $S$ is nonempty and every element of $T$ has Archimedean valuation less than every element of $S$. There are $b_0, c_0\in G_{\omega_1}$ such that $x^{b_0}$ is in the range of $F^*$ and $x^{c_0}$ is in the range of $f_{\theta_{\beta}}[F_{\beta}]$ and \[ a_0=b_0+c_0. \] Let $u\in D^*$ and $v\in f_{\theta_{\beta}}[D_{\beta}]$ be such that \[ F'(u)=x^{b_0} \] and \[ F'(v)=x^{c_0}. \] We observe that the ordered ring $D'$ has a unique extension to the field closure of $D'$, $\bar{D}$. Treated as a field map, the unique monomorphic extension of $F'$, $\bar{F}:\bar{D} \to \mathcal{E}$, is an $\mathbb R$-linear field monomorphism of $\bar{D}$. We have previously observed that $F'$ is an $\mathbb R$-linear monomorphism, and $F^* \cup f_{\theta_{\beta}}[F_{\beta}]$ is an order-preserving injection. 
So \[ F'(\sum_{i=1}^n x_i\cdot y_i) =F'(\sum_{i=j+1}^k u_i\cdot v_i) \] and \[ \bar{F}(\sum_{i=j+1}^k u_i/u\cdot v_i/v)=\alpha_0 + \sum_{0<\lambda<\gamma} \alpha_{\lambda}x^{(a_\lambda-a_0)}. \] The real number, $\alpha_0$, is in the domain of $\bar{F}$, so $\bar{F}(\alpha_0)=F'(\alpha_0)=\alpha_0$. The range of $F'$ is closed under partial sums, so both $\sum_{0<\lambda<\gamma} \alpha_{\lambda}x^{a_{\lambda}}$ and $x^{a_0}$ are in the range of $F'$. Thus $\sum_{0<\lambda<\gamma} \alpha_{\lambda}x^{a_{\lambda}-a_0}$ is in the range of $\bar{F}$. Let \[ \delta=\sum_{0<\lambda<\gamma} \alpha_{\lambda}x^{a_{\lambda}-a_0}. \] Every term of $\delta$ is infinitesimal (has Archimedean valuation greater than elements of $\mathbb R$). $\bar{F}$ is order-preserving on $D'$, so $\bar{F}^{-1}(\delta)$ is infinitesimal. Hence \[ \sum_{i=1}^n (x_i\cdot y_i)/(u\cdot v)=\alpha_0+\bar{F}^{-1}(\delta) \] and $\sum_{i=1}^n x_i\cdot y_i>0$ iff $\alpha_0>0$ iff $F'(\sum_{i=1}^n x_i\cdot y_i)>0$. Therefore $F':D' \to \mathcal{E}$ is an $\mathbb R$-linear order-monomorphism, in which the range of $F'$ contains all coefficients and all partial sums. Let $\hat{D}$ be the ring generated by $D'$ and $\mathbb R \cap M[G_{\theta_{\gamma}}]$. By application of Lemma \ref{thm6.8}, we may extend $F'$ to a level order-preserving $\mathbb R$-monomorphism, $\hat{F}:\hat{D} \to \mathcal{E}$. Assume that $x_{\gamma}\notin D'[\mathbb R]$. By Lemma \ref{thm6.9}, there is $D_{\gamma}\supset \hat{D}$, with $x_{\gamma}\in D_{\gamma}$, and an extension of $\hat{F}$, $F_{\gamma}:D_{\gamma} \to \mathcal{E}$, such that $F_{\gamma}$ is an extendable function. Limit Case: Suppose $\gamma$ is a limit ordinal. Let $D'=\bigcup_{\beta<\gamma} (\bigcup_{f\in \script{F}_{\beta \gamma}} \{ f_{\theta_{\beta}}[D_{\beta}] \})$. Then $D'$ is a subring of $D_{\omega_1}$ and has countable transcendence degree over $\mathbb R$. 
By condition 6 of Definition \ref{def3.1} it is straightforward to verify that $F':D' \to \mathcal{E}$ defined by $F'=\bigcup_{\beta<\gamma} (\bigcup_{f\in \script{F}_{\beta \gamma}} \{ f_{\theta_{\beta}}[F_{\beta}] \})$ is an extendable function. Let $\hat{D}$ be the ring generated by $D'$ and $\mathbb R \cap M[G_{\theta_{\gamma}}]$. Since $D'$ is full, we may apply Lemma \ref{thm6.8} to extend $F'$ to an $\mathbb R$-linear order-monomorphism of $\hat{D}$. Let $D_{\gamma}=\hat{D}$. Then by Lemma \ref{thm6.9} there is an extension of $F'$, $F_{\gamma}:D_{\gamma}\to \mathcal{E}$ that is an extendable function. Let $F=\bigcup \{ f_{\omega_1}[F_{\omega_1}]\mid f\in \script{F}_{\omega_1 \omega_2} \}$. By Lemma 3.3 of [\ref{Dumas2}], the domain of $F$ is forced to be the finite elements of ${\mathbb R}^{\omega}/U$. {}{ $\Box$} \vskip 5pt \par \begin{corollary} Let $M$ be a c.t.m. of $ZFC+CH$ containing a simplified $(\omega_2,2)$-morass, and $M[G]$ be a generic extension of $M$ adding $\aleph_4$ generic reals. Let $X$ be an infinite compact Hausdorff space in $M[G]$. Then there is a discontinuous homomorphism of $C(X)$, the algebra of continuous real-valued functions on $X$ in $M[G]$. \end{corollary} Proof: By Woodin's argument [\ref{Woodin}], in the generic extension adding $\aleph_2$ generic reals to $L$, all cuts of $\mathcal{E}$ have countable cofinal subcuts. Let $M$ be a model of $ZFC+CH$ containing a simplified $(\omega_2,2)$-morass and $M[G]$ be the generic extension of $M$ adding $\aleph_4$ generic reals. If the cuts of $\mathcal{E}$ in $M[G]$ admit countable cofinal subcuts, then the argument for Theorem \ref{thm8.1} yields the existence of a discontinuous homomorphism of $C(X)$ in a model with $2^{\aleph_0}=\aleph_4$. {}{ $\Box$} \vskip 5pt \par \section{Pressing the continuum} The techniques of this paper, and those of [\ref{Dumas}] and [\ref{Dumas2}], depend on the construction of functions between sets of terms in the forcing language of Cohen extensions, utilizing commutativity with order-preserving injections indexing ordinals of the Cohen poset. 
Having presented the details of the construction of level term functions using simplified gap-1 and gap-2 morasses, it is relatively straightforward to see how these constructions extend to simplified higher (finite) gap morasses. A simplified $(\kappa,n+1)$-morass, for $\kappa$ a regular cardinal and integer $n$, is a family of embeddings between fake simplified $(\kappa, n)$-morass segments that satisfies properties analogous to those relating a simplified $(\omega_1,2)$-morass to embeddings between fake simplified $(\omega_1,1)$-morass segments. Central to the utility of these constructions is that the second components of the embeddings are order-preserving injections between ordinals. For a thorough treatment of simplified finite gap morasses, including an inductive definition, see Szalkai [\ref{Szalkai}]. Higher gap morasses will allow the extension of results of this paper and [\ref{Dumas2}] to Cohen extensions adding more than $\aleph_4$ generic reals. The definitions of gap-2 morass-definable $\eta_1$-orderings and $\eta_1$-ordered real-closed fields are easily generalized to gap-n morass-definable $\eta_1$-orderings and $\eta_1$-ordered real-closed fields, respectively. We state the following without proof. \begin{claim} Let $M$ be a model of $ZFC + CH$ containing a simplified $(\omega_1,n)$-morass, $P$ be the poset adding $\aleph_n$ generic reals and $G$ be $P$-generic over $M$. Then in $M[G]$: \begin{enumerate} \item Gap-n morass-definable $\eta_1$-orderings without endpoints are order-isomorphic. \item There exists a gap-n morass-definable, non-principal ultrafilter over $\omega$. \item If $U$ is a gap-n morass-definable non-principal ultrafilter over $\omega$, then ${\mathbb R}^{\omega}/U$ is a gap-n morass-definable $\eta_1$-ordered real-closed field. \item There exists an $\mathbb R$-isomorphism between gap-n morass-definable $\eta_1$-ordered real-closed fields. 
\item There exists an $\mathbb R$-monomorphism from the finite elements of a standard ultrapower of $\mathbb R$ (over $\omega$) into the Esterle algebra. \item If $X$ is an infinite compact Hausdorff space, then there exists a discontinuous homomorphism of $C(X)$, the algebra of continuous real-valued functions on $X$. \end{enumerate} \end{claim} \end{document}
\begin{definition}[Definition:Geometric Progression] The term '''geometric progression''' is used to mean one of the following: \end{definition}
Wedderburn's little theorem In mathematics, Wedderburn's little theorem states that every finite division ring is a field. In other words, for finite rings, there is no distinction between domains, division rings and fields. The Artin–Zorn theorem generalizes the theorem to alternative rings: every finite alternative division ring is a field.[1] History The original proof was given by Joseph Wedderburn in 1905,[2] who went on to prove it two other ways. Another proof was given by Leonard Eugene Dickson shortly after Wedderburn's original proof, and Dickson acknowledged Wedderburn's priority. However, as noted in (Parshall 1983), Wedderburn's first proof was incorrect – it had a gap – and his subsequent proofs appeared only after he had read Dickson's correct proof. On this basis, Parshall argues that Dickson should be credited with the first correct proof. A simplified version of the proof was later given by Ernst Witt.[2] Witt's proof is sketched below. Alternatively, the theorem is a consequence of the Skolem–Noether theorem by the following argument.[3] Let $D$ be a finite division algebra with center $k$. Let $[D:k]=n^{2}$ and $q$ denote the cardinality of $k$. Every maximal subfield of $D$ has $q^{n}$ elements; so they are isomorphic and thus are conjugate by Skolem–Noether. But a finite group (the multiplicative group of $D$ in our case) cannot be a union of conjugates of a proper subgroup; hence, $n=1$. A later "group-theoretic" proof was given by Ted Kaczynski in 1964.[4] This proof, Kaczynski's first published piece of mathematical writing, was a short, two-page note which also acknowledged the earlier historical proofs. Relationship to the Brauer group of a finite field The theorem is essentially equivalent to saying that the Brauer group of a finite field is trivial. In fact, this characterization immediately yields a proof of the theorem as follows: let k be a finite field. 
Since the Herbrand quotient vanishes by finiteness, $\operatorname {Br} (k)=H^{2}(k^{\text{al}}/k)$ coincides with $H^{1}(k^{\text{al}}/k)$, which in turn vanishes by Hilbert 90. Proof Let A be a finite domain. For each nonzero x in A, the two maps $a\mapsto ax,a\mapsto xa:A\to A$ are injective by the cancellation property, and thus, surjective by counting. It follows from the elementary group theory[5] that the nonzero elements of $A$ form a group under multiplication. Thus, $A$ is a skew-field. To prove that every finite skew-field is a field, we use strong induction on the size of the skew-field. Thus, let $A$ be a skew-field, and assume that all skew-fields that are proper subsets of $A$ are fields. Since the center $Z(A)$ of $A$ is a field, $A$ is a vector space over $Z(A)$ with finite dimension $n$. Our objective is then to show $n=1$. If $q$ is the order of $Z(A)$, then $A$ has order ${q}^{n}$. Note that because $Z(A)$ contains the distinct elements $0$ and $1$, $q>1$. For each $x$ in $A$ that is not in the center, the centralizer ${Z}_{x}$ of $x$ is clearly a skew-field and thus a field, by the induction hypothesis, and because ${Z}_{x}$ can be viewed as a vector space over $Z(A)$ and $A$ can be viewed as a vector space over ${Z}_{x}$, we have that ${Z}_{x}$ has order ${q}^{d}$ where $d$ divides $n$ and is less than $n$. Viewing ${Z(A)}^{*}$, $A^{*}$, and the ${Z}_{x}^{*}$ as groups under multiplication, we can write the class equation $q^{n}-1=q-1+\sum {q^{n}-1 \over q^{d}-1}$ where the sum is taken over the conjugacy classes not contained within ${Z(A)}^{*}$, and the $d$ are defined so that for each conjugacy class, the order of ${Z}_{x}^{*}$ for any $x$ in the class is ${q}^{d}-1$. ${q}^{n}-1$ and $q^{d}-1$ both admit polynomial factorization in terms of cyclotomic polynomials $\Phi _{f}(q).$ In the polynomial identities $x^{n}-1=\prod _{m|n}\Phi _{m}(x)$ and $x^{d}-1=\prod _{m|d}\Phi _{m}(x)$, we set $x=q$. 
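As an illustrative aside (not part of the original article; the helper name `abs_cyclotomic` and the small test ranges are my own choices), the factorization $x^{n}-1=\prod _{m|n}\Phi _{m}(x)$ and the inequality $|\Phi _{n}(q)|>q-1$ for $n>1$ can be checked numerically in Python by evaluating $\Phi_n$ as a product over primitive $n$-th roots of unity:

```python
from cmath import exp, pi
from math import gcd, isclose

def abs_cyclotomic(n, q):
    # |Phi_n(q)|: product of (q - zeta) over the primitive n-th roots of unity zeta,
    # i.e. those zeta = e^{2*pi*i*k/n} with gcd(k, n) = 1.
    prod = 1 + 0j
    for k in range(1, n + 1):
        if gcd(k, n) == 1:
            prod *= q - exp(2j * pi * k / n)
    return abs(prod)

for n in range(2, 13):
    for q in range(2, 6):
        # x^n - 1 = product over divisors m of n of Phi_m(x), evaluated at x = q
        total = 1.0
        for m in range(1, n + 1):
            if n % m == 0:
                total *= abs_cyclotomic(m, q)
        assert isclose(total, q**n - 1, rel_tol=1e-9)
        # the inequality that ultimately forces n = 1 in the proof
        assert abs_cyclotomic(n, q) > q - 1
```

The floating-point evaluation is accurate enough here because the exponents involved are small; the inequality holds term by term since $|q-\zeta |>|q-1|$ for every primitive root $\zeta \neq 1$.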
Because each $d$ is a proper divisor of $n$, $\Phi _{n}(q)$ divides both ${q}^{n}-1$ and each ${q^{n}-1 \over q^{d}-1}$, so by the above class equation $\Phi _{n}(q)$ must divide $q-1$, and therefore $|\Phi _{n}(q)|\leq q-1.$ To see that this forces $n$ to be $1$, we will show $|\Phi _{n}(q)|>q-1$ for $n>1$ using factorization over the complex numbers. In the polynomial identity $\Phi _{n}(x)=\prod (x-\zeta ),$ where $\zeta $ runs over the primitive $n$-th roots of unity, set $x$ to be $q$ and then take absolute values $|\Phi _{n}(q)|=\prod |q-\zeta |.$ For $n>1$, we see that for each primitive $n$-th root of unity $\zeta $, $|q-\zeta |>|q-1|$ because of the location of $q$, $1$, and $\zeta $ in the complex plane. Thus $|\Phi _{n}(q)|>q-1.$ Notes 1. Shult, Ernest E. (2011). Points and lines. Characterizing the classical geometries. Universitext. Berlin: Springer-Verlag. p. 123. ISBN 978-3-642-15626-7. Zbl 1213.51001. 2. Lam (2001), p. 204 3. Theorem 4.1 in Ch. IV of Milne, class field theory, http://www.jmilne.org/math/CourseNotes/cft.html 4. Kaczynski, T.J. (June–July 1964). "Another Proof of Wedderburn's Theorem". American Mathematical Monthly. 71 (6): 652–653. doi:10.2307/2312328. JSTOR 2312328. (Jstor link, requires login) 5. e.g., Exercise 1.9 in Milne, group theory, http://www.jmilne.org/math/CourseNotes/GT.pdf References • Parshall, K. H. (1983). "In pursuit of the finite division algebra theorem and beyond: Joseph H M Wedderburn, Leonard Dickson, and Oswald Veblen". Archives of International History of Science. 33: 274–99. • Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate Texts in Mathematics. Vol. 131 (2 ed.). Springer. ISBN 0-387-95183-0. External links • Proof of Wedderburn's Theorem at Planet Math • Mizar system proof: http://mizar.org/version/current/html/weddwitt.html#T38
Why are JPL using this expression to emulate Schwarzschild orbits? From the documentation "Formulation for Observed and Computed Values of Deep Space Network Data Types for Navigation", expression 4-61 on page 4-42, it can be seen that JPL uses the following expression to account for the effects of relativity under Schwarzschild conditions: $\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}(1-\frac{4GM}{rc^2}+\frac{v^2}{c^2})\hat{r}+\frac{4GM}{r^2}(\hat{r}\cdot\hat{v})\frac{v^2}{c^2}\hat{v}$ When I use this expression in an integrator it replicates the "anomalous precession of perihelion" correctly. However, in the Schwarzschild solution in Schwarzschild coordinates you should get the same orbital velocity in a circular orbit as classically. Also, the initial acceleration as you drop an object from rest should be the same as classically (I believe). This JPL expression fails to accomplish that†. Someone told me that JPL uses isotropic coordinates instead of Schwarzschild coordinates and that this could be an effect of that, but that seems strange to me. If you use the "relativistic mass" concept, which works quite well to calculate the relativistic acceleration of a charged particle under the influence of the Lorentz force, then for gravity you end up with: $\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}(\hat{r}-\frac{v^2}{c^2}(\hat{r}\cdot\hat{v})\hat{v})$ This can only generate one third of the perihelion shift, but the expression is better than the JPL expression in the sense that it reproduces correct values for the orbital velocity and the initial acceleration of an object at rest†. By cheating and inserting a factor of three: $\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}(\hat{r}-3\frac{v^2}{c^2}(\hat{r}\cdot\hat{v})\hat{v})$ you get an expression that reproduces the correct perihelion shift as well as the correct orbital velocity of an object in circular orbit and the initial acceleration of an object at rest. 
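To make the † claims concrete, here is a small Python sketch (my own illustration, not taken from the JPL documentation; the planar setup, the function names, and the $GM=c=1$ units are assumptions). It implements the first and third expressions above and checks the at-rest behaviour and the circular-orbit balance:

```python
from math import hypot, isclose, sqrt

def a_jpl(r_vec, v_vec, GM=1.0, c=1.0):
    # The first (JPL-style) expression above, for a planar orbit.
    rx, ry = r_vec
    vx, vy = v_vec
    r = hypot(rx, ry)
    v = hypot(vx, vy)
    rdotv = (rx * vx + ry * vy) / r  # \hat{r} . \bar{v}
    radial = -GM / r**2 * (1 - 4 * GM / (r * c**2) + v**2 / c**2)
    cross = 4 * GM / (r**2 * c**2) * rdotv
    return (radial * rx / r + cross * vx, radial * ry / r + cross * vy)

def a_factor3(r_vec, v_vec, GM=1.0, c=1.0):
    # The "factor of three" expression; note (v^2)(\hat{r}.\hat{v})\hat{v} = (\hat{r}.\bar{v})\bar{v}.
    rx, ry = r_vec
    vx, vy = v_vec
    r = hypot(rx, ry)
    rdotv = (rx * vx + ry * vy) / r
    return (-GM / r**2 * (rx / r - 3 / c**2 * rdotv * vx),
            -GM / r**2 * (ry / r - 3 / c**2 * rdotv * vy))

GM, c, r = 1.0, 1.0, 100.0
u = GM / (r * c**2)

# Dropped from rest: the factor-three expression reduces to Newton,
# while the JPL expression picks up the (1 - 4GM/(r c^2)) factor.
assert isclose(a_factor3((r, 0.0), (0.0, 0.0))[0], -GM / r**2)
assert isclose(a_jpl((r, 0.0), (0.0, 0.0))[0], -GM / r**2 * (1 - 4 * u))

# Circular motion (r.v = 0): balancing |a| = v^2/r against the JPL expression
# holds at the modified speed, not at the classical sqrt(GM/r).
v_circ = sqrt(GM / r * (1 - 4 * u) / (1 - u))
ax, ay = a_jpl((r, 0.0), (0.0, v_circ))
assert isclose(hypot(ax, ay), v_circ**2 / r)
```

The last assertion reproduces the modified circular-orbit speed quoted below: with $\hat{r}\cdot\bar{v}=0$ the JPL radial term balances $v^2/r$ exactly when $v^2=\frac{GM}{r}\frac{1-4GM/(rc^2)}{1-GM/(rc^2)}$.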
†The condition for circular motion is $\mathbf{v} \cdot \mathbf{r}=0$; there is no radial part of the motion. Then you set the acceleration terms that do not vanish for $\mathbf{v} \cdot \mathbf{r}=0$ equal to $v^2/r$, the centrifugal acceleration, and solve. You see that in the case of no motion, $v=0$, and the case of no radial motion, the second and third expressions above reduce to the classical Newtonian gravitational acceleration, which is also expected from the Schwarzschild solution in Schwarzschild coordinates, but the "JPL expression" does not. I would be very happy if someone from JPL could tell me why you are using the first expression above. There is a rudimentary derivation of the expression in the documentation, but it is rather high-level and not so easy to comprehend. Note that according to JPL $v = \sqrt{GM/R}$ no longer holds true for a circular orbit; instead you have†: $v=\sqrt{\frac{GM}{r}\frac{(1-4GM/(rc^2))}{(1-GM/(rc^2))}}$ Also, when dropping an object from rest, according to JPL, the acceleration goes as: $\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}(1-\frac{4GM}{rc^2})\hat{r}$ From this last expression we actually see that JPL, in all of their ephemeris calculations, actually use a small "negative inverse r cube" gravitational term, which is a bit odd.

1. Why is JPL using the first expression above and not something similar to the third?

2. What is the correct expression for the orbital velocity of a body in circular motion according to JPL?

3. What is the correct initial acceleration of an object at rest according to JPL?

I would be very happy to get some answers.

Strong field orbits

I did spend a lot of time, see old messy paper, trying to come up with some physical explanation for why, at least in the weak field limit, the third expression above should hold true, by experimenting with a "general relativistic relativistic mass" of the type $\gamma(r,v)$ instead of just $\gamma(v)$, but I did not quite succeed. 
If you insert $\gamma=\frac{1}{\sqrt{1-\frac{2GM}{rc^2}-\frac{v^2}{c^2}}}\frac{1}{\sqrt{1-\frac{2GM}{rc^2}}}$ into $\frac{d(m\gamma\bar{v})}{dt}=-\frac{GMm\gamma}{r^2}\hat{r}$ you end up with $\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}\left(\hat{r}-3\frac{v^2(\hat{r}\cdot\hat{v})\hat{v}}{c^2(1-\frac{2GM}{rc^2})} +\frac{v^4(\hat{r}\cdot\hat{v})\hat{v}}{c^4(1-\frac{2GM}{rc^2})^2}\right)$. In the strong-field limit this expression results in orbits as shown below, where the green circle represents the Schwarzschild radius and the red circle represents the radius of the "innermost stable circular orbit" located at a distance of three Schwarzschild radii. The result is similar to what is expected from GR. If you use the JPL formula in the strong-field limit you can get very strange "bouncing" effects, as shown below; this is because of the repulsive inverse r-cube term. This is not at all what is expected from GR. I realized there is a higher-order version of the JPL formula that includes an attractive inverse $r^4$ term as well as a repulsive inverse $r^5$ term. Still, I think it is very strange to simulate GR by using a repulsive $r^3$ term, and I do not really know why it is common practice to do just that.

orbital-mechanics nasa jpl-horizons jpl

Agerhell

The first equation is extensively referenced in answers to How to calculate the planets and moons beyond Newton's gravitational force? It's not just people who work at JPL who use it; use seems pretty widespread. – uhoh Mar 24 '19 at 13:27

I just thought most people can say "JPL uses this and it seems to work even though I do not exactly know why", but JPL is somehow "responsible" for their own expression. Since I could not find an email address I wrote a paper mail to Theodore Moyer, who wrote the official JPL documentation several years ago, asking basically the same questions as above. 
I got one polite response before he got tired of me, but he could not really tell me if he thought the differences I pointed out above should be there or if they are there because of approximation errors in deriving his expression. $\endgroup$ – Agerhell Mar 24 '19 at 14:40 $\begingroup$ For circular motion you start with $\bar{v}\cdot\bar{r}=0$: there is no radial part of the motion. Then you set the force equal to v^2/r, the centripetal force, and solve. You see that in the case of no motion and the case of no radial motion the second equation in the question reduces to the classical Newtonian gravitational acceleration. I did some comparisons on pages 8 to 10 in this paper: vixra.org/pdf/1303.0004v1.pdf $\endgroup$ – Agerhell Mar 25 '19 at 5:33 $\begingroup$ @uhoh Do you have access to a reasonably good integrator? It would be interesting to know how much each of the four terms in the JPL formula contributes to the perihelion precession. It seems to me that they have complemented the classical inverse square gravitational acceleration with a small inverse cube part that causes most of the precession. That is a little strange, since relativity is not supposed to be about adding an inverse cube part to the classical Newtonian acceleration. $\endgroup$ – Agerhell Mar 27 '19 at 15:12 $\begingroup$ I'm not really familiar with GR, but judging by eq. 4-60, they do indeed use isotropic coordinates, since the term depending on spatial coordinates in the expression for $ds^2$ is of the form $f(\bar{r})(dx^2+dy^2+dz^2)$. I guess that the approximation formula for $d\bar{v}/dt$ in a weak field is a result of this choice. $\endgroup$ – Litho May 27 '19 at 8:13 To understand the meaning of the formula, one needs to face its origins in general relativity. GR is not just about corrections to Newtonian expressions for gravitational forces and accelerations.
It is more profound: The key point is that there is no longer a simple relationship between space and time coordinates and physical measurements of space and time. All physical processes, including the length of rulers and the ticking of clocks, are affected by the gravitational field (metric), in such a way that the laws of physics have the same form in every coordinate system. (The Schwarzschild solution is not a law of physics, but the field equation that it satisfies is.) It takes some work to figure out what is still a well-defined physical observable, given that the choice of coordinates has so much more freedom than in Newtonian physics (or even special relativity). This complication applies even in the post-Newtonian approximation of GR. The use of isotropic coordinates is a convention that affects all coordinate-based expressions but cannot affect the physics. In particular, we cannot take for granted what coordinate-based expressions for position, velocity, and acceleration mean unless we explicitly relate them to something observable (operationally defined). The perihelion shift is observable because it is defined relative to asymptotically flat spacetime at large distances (the "fixed stars"). As another example, by integrating an equation for the propagation of light rays in any given coordinates, we could predict well-known physical measurements of light bending and time delay. But the "speed" of a body in circular orbit has no unique or natural definition once we go beyond the Newtonian limit. Speed "should be" the circumference divided by the period. Is the circumference defined by placing a measuring tape around the orbit, or placing it radially to the sun and multiplying by $2\pi$? Is the period defined by clocks riding on the body, clocks held at rest on the orbit, or clocks at infinity? Whatever physical measurement we are interested in, GR can give us the prediction (independent of what coordinates we use), but they are all different. 
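Still, the internal consistency of the two consequences derived in the question can be checked numerically. The "first expression" itself is not quoted above, so as an assumption I take the commonly cited isotropic-coordinate point-mass term $\frac{d\bar{v}}{dt}=-\frac{GM}{r^2}\hat{r}+\frac{GM}{c^2r^3}\left(\left(\frac{4GM}{r}-v^2\right)\bar{r}+4(\bar{r}\cdot\bar{v})\bar{v}\right)$; with that form, both the drop-from-rest acceleration and the circular-orbit speed stated in the question come out exactly. A minimal sketch:

```python
import numpy as np

GM = 1.32712440018e20  # GM of the Sun, m^3/s^2
c = 299792458.0        # speed of light, m/s

def accel(r_vec, v_vec):
    # Assumed isotropic-coordinate point-mass 1PN term (the question's
    # "first expression" is not reproduced here, so this is a guess at it):
    # Newtonian part plus the 1/c^2 correction.
    r = np.linalg.norm(r_vec)
    v2 = v_vec @ v_vec
    return (-GM / r**3 * r_vec
            + GM / (c**2 * r**3) * ((4 * GM / r - v2) * r_vec
                                    + 4 * (r_vec @ v_vec) * v_vec))

r = 1.496e11  # roughly 1 au, in meters
r_vec = np.array([r, 0.0, 0.0])

# Dropping from rest: should match -GM/r^2 (1 - 4GM/(r c^2)), as in the question.
a_rest = accel(r_vec, np.zeros(3))
print(a_rest[0], -GM / r**2 * (1 - 4 * GM / (r * c**2)))

# Circular orbit: with the question's speed formula the radial
# acceleration balances v^2/r to rounding error.
v = np.sqrt(GM / r * (1 - 4 * GM / (r * c**2)) / (1 - GM / (r * c**2)))
a_circ = accel(r_vec, np.array([0.0, v, 0.0]))
print(a_circ[0] + v**2 / r)

# Fractional deviation of v from the Newtonian sqrt(GM/r): of order -1e-8 at 1 au.
print(v / np.sqrt(GM / r) - 1.0)
```

So the two formulas in the question are at least mutually consistent with this assumed form; whether this form is precisely what JPL integrates is exactly what the question asks.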
"Relativistic mass" reasoning is not valid in GR. Your hypothesized formula (with the factor of 3) might conceivably result from the Schwarzschild solution in some choice of coordinates, and that is the only way it would be justified. Absent this, it would be impossible to make physical predictions, because we don't know how clocks, rulers, light, etc., behave relative to the coordinates in which the formula is written. nanoman $\begingroup$ The derivation of the "JPL Schwarzschild approximation formula" is outlined on pages 4-22 to 4-24 in the mentioned documentation. It is a bit beyond me. In ephemeris simulations I think you basically start with the model of the accelerations, then you fit your model to the revolution time. So "r" is not measured but fitted into the model. You could basically have ten different models, all prescribing different orbital velocities but the same perihelion shift. Whatever model you use to fit the initial "r" will give the orbital velocity that fits the measurements. $\endgroup$ – Agerhell Mar 27 '19 at 13:19 $\begingroup$ If you use the "JPL Schwarzschild approximation formula" instead of the classical Newtonian expression to fit the radial distance between the Earth and the Sun you end up with the Earth being a little more than 2 kilometers closer to the Sun. The difference is so small that I think it is hard to experimentally find out which model is more correct in the weak field solar system environment. Anyhow, if you have any information on the physical interpretation of the four terms in the JPL formula, please share it. It seems to me that they get the perihelion shift right by using inverse cube gravity. $\endgroup$ – Agerhell Mar 27 '19 at 13:28
\begin{document} \begin{abstract} The paper explores the birational geometry of terminal quartic 3-folds. In doing this I develop a new approach to the study of maximal singularities with positive dimensional centers. This allows one to determine the pliability of a $\mathbb{Q}$-factorial quartic with ordinary double points, and it shows the importance of $\mathbb{Q}$-factoriality in the context of birational geometry of uniruled 3-folds. \end{abstract} \maketitle \section*{Introduction} Let $X$ be a uniruled 3-fold; then $X$ is generically covered by rational curves. It is a common belief that both the biregular and the birational geometry of $X$ are somehow governed by these families of rational curves. In this paper I am interested in the birational geometry of these objects. The Minimal Model Program states that such an $X$ is birational to a Mori fiber space (Mfs). Roughly speaking, after some birational modification either $X$ can be fibered in rational surfaces or rational curves, or it becomes Fano. For a comprehensive introduction to this realm of ideas as well as for the basic definitions and results see \cite{CR} and \cite{Co2}. In the attempt to tidy up the birational geometry of 3-fold Mori fiber spaces we introduced the notion of pliability, \cite{CMp}. \begin{dfn}[Corti] If $X$ is an algebraic variety, we define the {\em pliability} of $X$ to be the set \[ \mathcal{P}(X)=\bigl\{ \text{Mfs} \; Y\to T\mid Y\;\text{is birational to} \;X \bigr\}/\text{square equivalence}. \] We say that $X$ is {\em birationally rigid} if $\mathcal{P}(X)$ consists of one element. \end{dfn} It is usually quite hard to determine the pliability of a given Mori space, and not many examples are known. The first rigorous result dates back to Iskovskikh and Manin, \cite{IM}. The main theorem of \cite{IM} states, in modern terminology, that any birational map $\chi:X\flip Y$ from a smooth quartic $X\subset\mathbb{P}^4$ to a Mori fiber space is an isomorphism.
This means that $\mathcal{P}(X)=\{X\}$ and $X$ is birationally rigid. On the other hand, consider a quartic threefold $X\subset \mathbb{P}^4$ defined by \hbox{$\det M=0$,} where $M$ is a $4\times 4$ matrix of linear forms. One can define a map {\hbox{$f:X\flip\mathbb{P}^3$}} by the assignment $P\mapsto(x_0:x_1:x_2:x_3)$, where $(x_0,x_1,x_2,x_3)$ is a solution of the system of linear equations obtained by substituting the coordinates of $P$ in $M$. For $M$ sufficiently general such a map is well defined and birational. In this case $f$ gives a rational parameterization of $X$. The singularities of $X$ correspond to points where the rank drops. It is not difficult to show that, for a general $M$, the corresponding quartic has only ordinary double points, corresponding to points where the rank is 2. Thus a general determinantal quartic threefold has only ordinary double points and is rational. From the pliability point of view this is discouraging. Minimal Model Theory requires one to look at terminal $\mathbb{Q}$-factorial 3-folds, and ordinary double points are the simplest possible terminal singularities. It would be unpleasant if a bunch of ordinary double points were to change a rigid structure into a rational variety. The point I want to stress in this paper is that the rationality of a determinantal quartic is due to the lack of $\mathbb{Q}$-factoriality and not to the presence of singularities. \begin{thm} Let $X_4\subset \mathbb{P}_{\mathbb{C}}^4$ be a $\mathbb{Q}$-factorial quartic 3-fold with only ordinary double points as singularities. Then $X$ is neither birationally equivalent to a conic bundle nor to a fibration in rational surfaces. Every birational map $\chi:X\flip Y$ to a Fano 3-fold is a self map, that is $Y\iso X$; in particular $X$ is not rational. This is to say that $X$ is birationally rigid. \label{th:C} \end{thm} \begin{rem} The case of a general quartic with one ordinary double point has been treated by Pukhlikov, \cite{Pu}.
Observe that in this case $X$ is automatically $\mathbb{Q}$-factorial. More recently Grinenko studied the case of a general quartic containing a plane. \end{rem} A variety is said to be $\mathbb{Q}$-factorial if every Weil divisor is $\mathbb{Q}$-Cartier. Such an innocent definition is quite subtle when realized on a projective variety. It depends both on the kind of singularities of $X$ and on their position. To my knowledge there are very few papers that have tried to shed some light on this question, \cite{Cl}, \cite{We}. In the case of a Fano 3-fold, $\mathbb{Q}$-factoriality is equivalent to $\dim H^2(X,\mathbb{Z})=\dim H_4(X,\mathbb{Z})$, a global topological property, invariant for diffeomorphic Fano 3-folds. A recent paper of Ciliberto and Di Gennaro, \cite{CDG}, deals with hypersurfaces with few nodes. The general behavior is that the presence of a few nodes does not break $\mathbb{Q}$-factoriality. This is not true even for slightly worse singularities, as the following example shows. \begin{exa}[Koll\'ar] Consider the linear system $\Sigma$ of quartics spanned by the following set of monomials $\{x_0^4,x_1^4, (x_4^2x_3+x_2^3)x_0,x_3^3x_1,x_4^2x_1^2\}$. Then a general quartic $X\in \Sigma$ has a unique singularity $P$ at $(0:0:0:0:1)$ and the quadratic term is a general quadric in the linear system spanned by $\{x_3x_0,x_1^2\}$, so that analytically $P\in X\sim (0\in (xy+z^2+t^l=0))$ and $P$ is a $cA_1$ point. The 3-fold $X$ is not $\mathbb{Q}$-factorial since the plane $\Pi=(x_0=x_1=0)$ is contained in $X$. The idea is that a general quartic containing a plane has 9 ordinary double points, the intersection of the two residual cubics. In the above case the two cubics intersect just in the point $P$. \end{exa} There is a slightly stronger version of Theorem \ref{th:C}.
\begin{thm}\label{th:k} Let $X_4\subset \mathbb{P}_k^4$ be a $\mathbb{Q}$-factorial quartic 3-fold with only ordinary double points as singularities over a field $k$, not necessarily algebraically closed, of characteristic 0. Then ${\mathcal P}(X)=\{ X\}$. \end{thm} If one considers non algebraically closed fields then peculiar aspects of factoriality, and of its relation with birational rigidity, appear. Theorem \ref{th:k}, and its significance in this context, were suggested by J\'anos Koll\'ar. \begin{exa} Consider the following quartic $Z$ $$(x_0^2+x_1^2)^2+(x_2^2+x_3^2)^2+x_4C=0.$$ Then $Z$ is not $\mathbb{Q}$-factorial over $\mathbb{C}$ because $(x_4=0)_{|Z}$ is a pair of quadrics, say $Q$ and $\overline{Q}$. For a general cubic $C$ the singular points of $Z$ are twelve distinct ordinary double points. The existence of the Minimal Model Program for 3-folds implies that $Z$ is birational to some Mori space $Y\neq Z$. Indeed this is the midpoint of a Sarkisov link. This can be easily seen with the unprojection method developed by Reid, \cite{un}. The equation of $Z$ is $$Q\overline{Q}+HC=0,$$ where $H=x_4$. We can introduce the two ratios $y=Q/H=-C/\overline{Q}$ and $z=\overline{Q}/H=-C/Q$. These are both of degree one and unproject $Z$ to the following complete intersections $$\begin{array}{cc}X=\left\{\begin{array}{ll} yH= & Q\\ y\overline{Q}=& -C \end{array} \right.\subset\mathbb{P}^5 & X'=\left\{\begin{array}{ll} zH= & \overline{Q}\\ zQ=& -C \end{array} \right.\subset\mathbb{P}^5 \end{array} $$ $X$ and $X'$ are projectively equivalent, thus we have a Sarkisov self link, see \cite{Co2}, \[ \xymatrix{ &Y\ar[dl]\ar[dr]\ar@{.>}[rr]&&Y\ar[dl]\ar[dr]& \\ X& &Z&&X} \] In the paper I express similar self links with the following compact notation $$\self{X}{Z_4\subset\mathbb{P}^4}$$ In particular the Weil divisor class group of $Z$ is generated by $Q$ and $\overline{Q}$. The two quadrics are conjugated under complex conjugation, so that over $\mathbb{R}$ they are not defined individually.
In particular $Z/\mathbb{R}$ is $\mathbb{Q}$-factorial, hence birationally rigid by Theorem \ref{th:k}. Observe that $X$ is not defined over $\mathbb{R}$. \end{exa} Before explaining the proof of Theorem \ref{th:C}, let me just give a brief look at the determinantal quartic from the point of view of the Sarkisov program. Let $X=(\det M=0)\subset \mathbb{P}^4$, with $M$ general. Consider a Laplace expansion of $\det M$ with respect to the $j$-th row. Then the equation of $X$ has the form $\sum_i l_iA_{ji}=0,$ where the $l_i$ are linear forms and the $A_{ji}$ are cubic forms. Then the $A_{ji}$s generate the ideal of a smooth surface $B_j^r$ of degree 6, a Bordiga surface. It is easy to see that $B_j^r$ passes through all singular points of $X$. The latter are the rank two points, where every order three minor has to vanish. Therefore $X$ is not factorial and consequently not $\mathbb{Q}$-factorial (terminal Gorenstein $\mathbb{Q}$-factorial singularities are factorial). The symmetry between rows and columns in the Laplace expansion suggests that $X$ is a midpoint of a Sarkisov link. Indeed this is the case of a well known ``determinantal'' involution of $\mathbb{P}^3$, \cite{Pe}, \[ \self{\mathbb{P}^3}{X\subset\mathbb{P}^4} \] {\em Acknowledgments} This paper found its way out of the darkness of my desktop drawer after motivating discussions with Miles Reid during a short visit to Warwick for the ``Warwick teach-in on 3-folds'' in January 2002. I am deeply indebted to Alessio Corti for advice, and much more. His frank criticism of a preliminary version helped to improve the paper. The referee, besides numerous suggestions and corrections, found a gap in the first version of Lemma \ref{le:ecubo} and suggested a patch. The actual content of Lemma \ref{le:nikos} was communicated to me by Nikos Tziolas.
It is a pleasure to thank J\'anos Koll\'ar for the suggestions and comments he gave me during a very pleasant stay in Napoli for the ``Current Geometry'' Conference 2002. \section{Maximal singularities and the main theorem} I start by rephrasing Theorem \ref{th:C} in the following form. \begin{thm}\label{th:main} Let $X_4\subset \mathbb{P}_{\mathbb{C}}^4$ be a $\mathbb{Q}$-factorial quartic 3-fold with only ordinary double points as singularities. Then any birational map $\chi:X\flip V$ to a Mori space $V/T$ is a self map, i.e. $V\iso X$. \end{thm} To prove Theorem \ref{th:main} I use the maximal singularities method combined with the Sarkisov program, as described in \cite{Co2}, \cite{CPR} and \cite{CM}. I rely on those papers for the very basic definitions, like Mori fiber spaces, weighted projective spaces, the Sarkisov program and links, and for philosophical background. Here I quickly recall what is needed. \begin{dfn}[degree of $\chi$] \label{dfn:deg} Suppose that $X$ is a Fano 3-fold with the property that $A=-K_X$ generates the Weil divisor class group: $\WCl X=\mathbb{Z}\cdot A$ (this holds in our case under the $\mathbb{Q}$-factoriality assumption). Let $\chi\colon X\dasharrow V$ be a birational map to a given Mori fiber space $V\to T$, and fix a very ample linear system $\mathcal{H}_V$ on $V$; write $\mathcal{H}=\mathcal{H}_X$ for the birational transform $\chi^{-1}_*(\mathcal{H}_V)$. The {\em degree} of $\chi$, relative to the given $V$ and $\mathcal{H}_V$, is the natural number $n=\deg\chi$ defined by $\mathcal{H}=nA$, or equivalently $K_X+(1/n)\mathcal{H}=0$. \end{dfn} \begin{dfn}[untwisting] \label{dfn:un} Let $\chi\colon X\dasharrow V$ be a birational map as above, and $f\colon X\dasharrow X'$ a Sarkisov link. We say that $f$ {\em untwists} $\chi$ if $\chi'=\chi\circ f^{-1} \colon X'\dasharrow V$ has degree smaller than $\chi$. \end{dfn} \begin{dfn}[maximal singularity] \label{dfn:ms} Let $X$ be a variety and $\mathcal{H}$ a movable linear system.
Suppose that $K_X+(1/n)\mathcal{H}=0$ and $K_X+(1/n)\mathcal{H}$ does not have canonical singularities. A {\em maximal singularity} is a terminal (extremal) extraction $f:Y\to X$ in the Mori category, see \cite[\S 3]{CM}, having exceptional {\em irreducible} divisor $E$ such that $f^*(K_X+c\mathcal{H})=K_Y+c\mathcal{H}_Y$, where $c<1/n$ is the canonical threshold. The image of $E$ in $X$, or the center $C(X,v_E)$ of the valuation $v_E$, is called the {\em center} of the maximal singularity. \end{dfn} \begin{rem} In this paper all maximal singularities will be either the blow up of an ordinary double point, or generically the blow up of the ideal of a curve $\Gamma\subset X$. In both cases this is the unique possible maximal singularity with these centers. This is easy for curves, while for an ordinary double point it is due to Corti, \cite[Theorem 3.10]{Co2}. \label{re:uno} \end{rem} \begin{lem}[{\cite[Lemma 4.2]{CPR}}] \label{le:untw} Let $X$, $V/T$, $\mathcal{H}$ be as before, $\chi: X \dasharrow V$ a birational map. If $E\subset Z \to X$ is a maximal singularity, any link $X \dasharrow X'$, starting with the extraction $Z\to X$, untwists $\chi$. \end{lem} The above Lemma, together with the Sarkisov program, allows one to restrict attention to maximal singularities. To study maximal singularities there is an invariant which is very often useful: the self-intersection of the exceptional divisor. The next Lemma allows one to compute $E^3$ when the center is a smooth curve on $X$. To do this I have to determine the correction terms that are needed to make the adjunction formula work in the presence of $cA_1$ singularities along $\Gamma$. This is done using the theory of the Different developed in \cite[\S 16]{U2} and the following Lemma, kindly suggested by Nikos Tziolas. In the statement and proof of the Lemma I need a notion of singularity for pairs of a curve and a surface with $A_t$ points.
\begin{dfn} Assume that $p\in S\sim(0\in (xy+z^{t+1}=0))$, for some $t\geq 1$, and $\Gamma\subset S$ is a smooth curve through $p$. Let $\nu:U\to S$ be a minimal resolution with exceptional divisors $E_i$, with $i=1,\ldots,t$. Here I mean that the rational chain starts with $E_1$, ends with $E_t$, and for $1<i<t$ the intersection $E_i\cdot E_j$ is nonzero if and only if $j=i\pm1$. I say that $(\Gamma,S)$ is an $A^k_t$ singularity if $\Gamma_U\cdot E_k=1$ (here and all through the paper I decorate with $_T$ the strict transform of objects on a variety $T$). Observe that since $\Gamma$ is smooth then $\Gamma_U\cdot E_i=0$ for any $i\neq k$. \label{df:pair} \end{dfn} \begin{lem}[\cite{Tzp}]\label{le:nikos} Let $(0\in X)$ be a $cA_1$ singularity and $0\in\Gamma\subset X$ a smooth curve through it. Let $f:Y\to X$ be a terminal extraction with center a smooth curve $\Gamma$ and exceptional divisor $E$. Then $f$ can be obtained from the diagram $$\xymatrix{ &Z\ar[dl]_{\phi}\ar[dr]^{\psi}&\\ W\ar[dr]_{\nu}&&Y\ar[dl]^{f}\\ &X&} $$ such that \begin{itemize} \item[i)] $W$ is the blow up of $X$ along $\Gamma$. The $\nu$-exceptional divisors are a ruled surface $E$ over $\Gamma$ and $F\iso\mathbb{P}^2$ over the singular point. $Z$ is a $\mathbb{Q}$-factorialization of $E$ and $\psi$ contracts $F_Z\iso F$ to a point. \item[ii)] $S_Y=f^{-1}_*S\iso S$, where $S$ is a general section of $X$ through $\Gamma$. \item[iii)] $(\Gamma,S)$ is an $A^k_{2k-1}$ singularity. \end{itemize} \end{lem} \begin{proof} First I prove that $W$ is $cA$, \cite{Ko}. Let $S$ be the general section of $X$ through $\Gamma$. Then one can assume, \cite{Tza}, that $S$ is given by $xy-z^{n+m}=0$ and $\Gamma$ by $x-z^n=y-z^m=0$, for some $n\leq m$, equivalently $S$ by $xy+xz^n+yz^m=0$ and $\Gamma$ by $x=y=0$. Then $X$ has the form $$xy+xz^n+yz^m+tg_1(x,y,z,t)+tg_{\geq 2}(x,y,z,t)=0$$ and $\Gamma$ is $x=y=t=0$. To have a $cA_1$ singularity the quadratic term $ xy+tg_1(x,y,z,t)$ must be irreducible.
Now a straightforward explicit computation of the blow up of the maximal ideal of $\Gamma$ shows that $W$ is $cA$. Then by \cite{Tzb} it follows that $Z$, and hence $Y$, can be constructed in families. Therefore we may study the deformed equation $$xy+xz^n+yz^m+tg_1(x,y,z,t)+t^k=0$$ for $k\gg1$. The blow up computation and the irreducibility of the quadratic term yield that $W$ has isolated singularities along $E\cap F$. Therefore $Z$ is just the blow up of $E$ and hence $F_Z\iso F\iso\mathbb{P}^2$. This also proves that $F_Z$ is contracted to a point by $\psi$. To see the claim on $S_Y$ take a general member $S_W\in|-K_W|$. Then $S_W$ has $A_i$ singularities and avoids the singular points along $E\cap F$. Let $C=S_W\cap F$. Then $C$ is contracted by $\nu$ and therefore $S_Y=\nu(S_W)\iso S$. Since $W$ is smooth on the generic point of $E\cap F$, \cite[Proposition 4.6]{Tza}, it follows that $(\Gamma, S)$ is an $A^k_{2k-1}$ singularity, because in any other case $W$ would be singular at $E\cap F$. \end{proof} Next I derive the numerical result about the self-intersection from Lemma \ref{le:nikos}. \begin{lem} Let $f:Y\to X$ be a terminal extraction with center a smooth curve $\Gamma$ and exceptional divisor $E$. Assume that $X$ has only $cA_1$ points along $\Gamma$. Let $\Sigma$ be any linear system with $\Bsl\Sigma=\mathcal{I}_{\Gamma}$ and $S\in \Sigma$ a general element. Assume that $S$ is normal. Then $f_{|S_Y}:S_Y\to S$ is an isomorphism, and $$E^3=-S\cdot\Gamma-(\Gamma\cdot\Gamma)_S$$ or equivalently $$E^3=K_X\cdot\Gamma-2g(\Gamma)+2-\Diff(\Gamma,S)$$ \label{le:ecubo} \end{lem} \begin{rem} In the hypothesis of Lemma \ref{le:ecubo} one can define the different of $\Gamma$ in $X$ as $$\Diff(\Gamma,X):=K_X\cdot\Gamma-E^3-2g(\Gamma)+2$$ This suggests the possibility of extending the theory of the Different, \cite[\S 16]{U2}, to higher codimension subvarieties. \end{rem} \begin{proof} We already proved, in Lemma \ref{le:nikos}, that $S_Y\iso S$.
By hypothesis $f^*(S)=S_Y+E$, and consequently $$E^3=f^*(S)\cdot E^2-S_Y\cdot E^2$$ The projection formula yields $f^*S\cdot E^2=-S\cdot \Gamma$. By Lemma \ref{le:nikos} $E_{|S_Y}=\Gamma$, therefore $S_Y\cdot E^2= (E_{|S_Y}\cdot E_{|S_Y})_{S_Y}=(\Gamma\cdot\Gamma)_S$. Note that $K_S=(K_X+S)_{|S}$, therefore $$(\Gamma\cdot \Gamma)_S=2g(\Gamma)-2-K_X\cdot\Gamma-S\cdot\Gamma+\Diff(\Gamma,S)$$ by the adjunction formula compensated by the Different, \cite[Chapter 16]{U2}. \end{proof} I now go back to Theorem \ref{th:main}. The first task is to recognize birational maps. The geometry of $X$ suggests the existence of some birational self maps, the ``Italian'' approach, according to \cite{CPR}: \begin{itemize} \item the reflection through a singular point $p$ \item the elliptic involution associated to a line $l$ containing some singular point. \end{itemize} The general line through $p$ intersects the quartic in two more points $Q_1$ and $Q_2$. The self map suggested is $Q_1\mapsto Q_2$. A general plane containing $l$ has a smooth cubic $C$ as residual intersection with $X$. Furthermore a singularity, say $P$, provides the family of these cubics with a section, namely a common origin for the group structure. The self map suggested is $R\mapsto -R$, where $-R$ is the inverse of $R$ in the group structure on $C$ with origin $P$. Then I describe those maps in terms of Sarkisov links. \noindent After \cite{Co2}, \cite{CPR} and \cite{CM} this is now a nice and pleasant exercise. Indeed the only possibility that is not yet described in either \cite{Co2} or \cite{CM} is that of a line with three singularities along it. Assume that $l\subset X$ is a line with three distinct singular points along it. Note that this is the maximum number of singular points along a line on a quartic with isolated singularities.
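Here is the elementary count behind this last claim, included for completeness. Write $l=(x_2=x_3=x_4=0)$; since $l\subset X$ the equation of $X$ can be written as $F=Ax_2+Bx_3+Cx_4$, with $A$, $B$ and $C$ cubic forms. The partials $\partial F/\partial x_0$ and $\partial F/\partial x_1$ vanish identically along $l$, therefore $$\Sing(X)\cap l=(A_{|l}=B_{|l}=C_{|l}=0),$$ the common zero locus of three binary cubics on $l\iso\mathbb{P}^1$. If the three restrictions vanished identically then $l\subset\Sing(X)$, against the assumption of isolated singularities; hence $\Sing(X)\cap l$ consists of at most three points.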
After a coordinate change we can assume that $l=(x_2=x_3=x_4=0)$ and the equation of $X$ has the following form $$F=L(x_0^2x_1+x_1^2x_0)+Q_1x_0^2+Q_2x_1^2+Q_3x_0x_1+C_1x_0+C_2x_1+D$$ Let $f:Y\to X$ be the unique terminal extraction with center $l$ and exceptional divisor $E$. I want to understand the anticanonical ring of $Y$. Let $$H_i=(x_i=0)_{|X}$$ It is immediate that $H_{iY}\in|-K_Y|$. Since $l=(x_2=x_3=x_4=0)$ and $f$ is generically the blow up of the maximal ideal, $-K_Y$ is nef. The hyperplane section $\tilde{H}:=(L=0)_{|X}$ has multiplicity two along $l$. Therefore a general plane section of $\tilde{H}$ through $l$ has residual intersection a conic, say $C$, that, generically, intersects $l$ in two points. In particular $C_Y\cdot K_Y=0$ and $NE(Y)=\langle e,C\rangle$, where $e\subset E$ is $f$-exceptional. Note that the special hyperplane section $\tilde{H}_Y$ is covered by curves proportional to $C$. Therefore the ray spanned by $[C]$ is not small and by the two-ray game I conclude that there is no Sarkisov link starting from the extraction $f:Y\to X$. This is usually called a bad link, \cite{CPR}, \cite{Co2}. The only Sarkisov links that have center either a singular point or a line through a singular point are therefore the following: \begin{itemize} \item[$\rho_x$] for any singular point $x\in X$ \[ \self{X}{Z_6\subset\mathbb{P}(1,1,1,1,3)} \] \item[$\f^l_1$] for any line $l\subset X$ passing through one singular point \[ \self{X}{Z_{12}\subset\mathbb{P}(1,1,1,4,6)} \] \item[$\f^l_2$] for any line $l\subset X$ passing through two singular points \[ \self{X}{Z_{8}\subset\mathbb{P}(1,1,1,2,4)} \] \end{itemize} Note that to a line with more than one singularity are associated different elliptic involutions. I can choose any singular point as origin on the elliptic curves. But still, by Sarkisov theory, the maximal singularity with center the line is unique. This is because the elliptic involution, in this case, is a composition of elementary links.
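For the reader's convenience let me also recall the classical shape of the midpoint of $\rho_x$; this is well known and included only as an illustration. If $x=(0:0:0:0:1)$ the equation of $X$ takes the form $$a_2x_4^2+a_3x_4+a_4=0,$$ with $a_i$ forms of degree $i$ in $x_0,\ldots,x_3$, and the projection from $x$ is generically $2:1$ onto $\mathbb{P}^3$, with $\rho_x$ the associated Galois involution. The weight 3 variable $y=2a_2x_4+a_3$ satisfies $y^2=a_3^2-4a_2a_4$ on $X$, so that the midpoint of the link is the double cover $$Z_6=(y^2=a_3^2-4a_2a_4)\subset\mathbb{P}(1,1,1,1,3),$$ on which the involution is simply $y\mapsto -y$.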
To prove Theorem \ref{th:main} it is now enough to show that any birational map can be factored by the self maps described. It is now standard, see \cite[\S 3]{CPR}, that this is equivalent to the following. \begin{thm} Let $X_4\subset\mathbb{P}_{\mathbb{C}}^4$ be a $\mathbb{Q}$-factorial quartic 3-fold with only ordinary double points and $E$ a maximal singularity. Then either: \begin{itemize} \item[-] the center $C(X,v_E)=p$ is a singular point, or \item[-] the center $C(X,v_E)=l$ is a line through some singular point. \end{itemize} In both cases the assignment identifies the maximal singularity, hence the Sarkisov link, uniquely, see Remark~\ref{re:uno}. \label{th:center} \end{thm} The proof of Theorem \ref{th:center} is the core of the next section. \section{Exclusion} A maximal center on a Fano 3-fold is either a point or a curve. \noindent The case of smooth points can be treated with many different techniques. The main result of \cite{IM} is indeed that a smooth point is not a maximal center on a quartic. Corti, \cite{Co2}, gave an amazingly simple proof using numerical properties of linear systems on surfaces. The recent classification of Kawakita, \cite{Kw}, gives a third possible proof, based on terminal extractions, see \cite[Conjecture 4.7]{Co1}. I am therefore bound to study centers of positive dimension. I can actually prove a stronger statement. \begin{thm}\label{th:cA1} Let $X_4\subset \mathbb{P}_{\mathbb{C}}^4$ be a $\mathbb{Q}$-factorial quartic 3-fold with only $cA_1$ points. Assume that a curve $\Gamma$ is a center of maximal singularities. Then $\Gamma$ is a line through some singular point. \end{thm} \begin{rem} The Theorem is an important step in the direction of \cite[Conjecture 1.3]{CMp}. \end{rem} \begin{proof} From now on $\Gamma\subset X$ will be an irreducible curve assumed to be the center of a maximal singularity. The unique terminal extraction is then generically the blow up of the ideal of $\Gamma$ in $X$.
Therefore the linear system $\mathcal{H}\subset|\mathcal{O}(n)|$, associated to the extraction, satisfies $$\gamma=\mult_{\Gamma}\frac1n\mathcal{H}> 1.$$ We prove the theorem in several steps. \begin{description} \item[Step 1] A raw argument shows that $\deg \Gamma \leq 3$. \item[Step 2] $\Gamma$ cannot be a space curve. \item[Step 3] If $\Gamma$ is a plane curve then it is a line through some singular point. \end{description} \paragraph*{\textsc{Step 1}} Choosing general members $H_1$, $H_2$ of $\mathcal{H}$ and intersecting with a general hyperplane section $S$ we obtain \[ 4n^2=H_1\cdot H_2\cdot S>\gamma n^2\deg\Gamma. \] This implies that $\deg \Gamma \leq 3$. \paragraph*{\textsc{Step 2: space curves}} If $\Gamma$ is a space curve, then by Step~1 it must be a rational normal curve of degree 3, contained in a hyperplane $\Pi \cong \mathbb{P}^3 \subset \mathbb{P}^4$. Let $S \in |I_{\Gamma,X}(2)|$ be a general quadric vanishing on $\Gamma$, $\mathcal{L}$ the mobile part of $\mathcal{H}_{|S}$; write \[ \mathcal{O}_S(1)=\frac{1}{n}\mathcal{H}_{|S}=L+\gamma\Gamma, \] where $L=(1/n)\mathcal{L}$ is nef. Note that, because $I_\Gamma$ is cut out by quadrics, \begin{displaymath} \mult_\Gamma \mathcal{H} = \mult_\Gamma \mathcal{H}_{|S} = n \gamma > n. \end{displaymath} Let $f:Y\to X$ be the maximal singularity, with exceptional divisor $E$ and center $\Gamma$. By Lemma \ref{le:ecubo} we can compute $E^3$ by means of $\Diff(\Gamma,S)$. Assume that $(C,U)$ is an $A^k_t$ singularity (keep in mind Definition \ref{df:pair}) and let $\nu^*(C)=C_W+\sum d_i E_i$; then it is a straightforward check on the intersection matrix of an $A_t$ singularity, see for instance \cite[pg 16]{Ja}, that \begin{equation} \label{eq:akn} (t+1)d_i=\left\{\begin{array}{ll} i(t-k+1) & {\rm\ if\ } i\leq k \\ (t-i+1)k& {\rm\ if \ }i\geq k \end{array} \right. \end{equation} Incidentally observe that $\Diff(C,U)_x=C_W\cdot\sum d_iE_i=d_k$.
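For completeness, here is the check behind (\ref{eq:akn}); it is elementary and not needed in the sequel. By the projection formula $\nu^*(C)\cdot E_j=0$ for every $j$, while on the minimal resolution of an $A_t$ point $E_j^2=-2$ and $E_j\cdot E_{j\pm1}=1$, so the coefficients $d_i$ satisfy $$d_{j-1}-2d_j+d_{j+1}=0\ {\rm for\ }j\neq k,\qquad 1+d_{k-1}-2d_k+d_{k+1}=0,$$ with the convention $d_0=d_{t+1}=0$. The values in (\ref{eq:akn}) are linear in $i$ on either side of $k$, so the equations away from $k$ are clear, while at $j=k$ $$(t+1)(1+d_{k-1}-2d_k+d_{k+1})=(t+1)+(k-1)(t-k+1)-2k(t-k+1)+k(t-k)=0.$$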
I now come back to our original situation: by Lemma \ref{le:nikos} part iii), $(\Gamma,S)$ is an $A^k_{2k-1}$ singularity for some $k$, with $l/2\geq k\geq 1$. In particular the Different is \begin{equation} \label{eq:k2} \Diff(\Gamma,S)_p=k/2 \end{equation} This proves, together with Lemma \ref{le:ecubo}, that \begin{equation} \label{eq:ecubo} E^3=-3+2-\sum_{p_i\in \Sing(S)} k_i/2 \end{equation} Then we need to bound the contributions of the singularities globally. \begin{lem} In the above notation $\sum_{p_i\in \Sing(S)} k_i\leq 7$. \label{le:bound} \end{lem} \begin{proof} To prove the bound we need the following reinterpretation of the $k_i$'s, see also \cite{Tza}. By Lemma \ref{le:nikos} part iii) we can realize $p_i\in\Gamma\subset S$, inside a quadric 3-fold smooth at $p_i$, analytically as $$0\in(x=y=0)\subset (xy+yz^{k_i}+xz^{k_i}=0)\subset \mathbb{C}^3,$$ see for instance \cite[pg 13]{Ja}. Let $\mu_i:Z\to\mathbb{C}^3$ be the blow up of $(x=y=0)$, with exceptional divisor $E_Z$ and $F_i=\mu_i^{-1}(0)$. Then $S_{Z|E_Z}=k_iF_i+{\rm effective}$. Let $\nu:W\to \mathbb{P}^4$ be the blow up of $\Gamma$, with exceptional divisor $E_W$, and $F_i=\nu^{-1}(p_i)$. Since $\Bsl|\mathcal{I}_{\Gamma,\mathbb{P}^4}(2)|=\Gamma$ then $X_{W|E_W}=k_i F_i+{\rm effective}$. For any divisor $D\subset\mathbb{P}^4$ such that $D\supset\Gamma$ and $D_{|X}$ is smooth on the generic point of $\Gamma$ we have $$(D_W\cap X_W)_{|E_W}=h_i F_{i|D_W}+{\rm effective},$$ for some \begin{equation} \label{eq:quad} h_i\geq k_i \end{equation} Thus to bound the global contribution it is enough to understand the normal bundle of $\Gamma$ in some smooth divisor $D$ such that $D_{|X}$ is smooth on the generic point of $\Gamma$. Let $H=\Pi_{|X}$ be the unique hyperplane section containing $\Gamma$, and $H_{|S}=\Gamma+\Delta$. I claim that $\Delta\not\supset\Gamma$. Assume the opposite and let $H_{|S}=2\Gamma+R$. Then $\deg R=2$ and $R$ is a pair of skew lines, say $l_1$, $l_2$, secant to $\Gamma$.
Since $S$ is general, $l_i\not\subset \Bsl\mathcal{H}$ and $l_i\cap\Gamma\not\subset (\Sing(X)\cap\Gamma)$. Then we derive the impossible $$1=\mathcal{H}/n\cdot l_i\geq\gamma\Gamma\cdot l_i>\gamma.$$ We can therefore choose $D=\Pi$. It is well known, \cite{Hu}, that $N_{\Gamma/\mathbb{P}^3}\iso\mathcal{O}(5)\oplus\mathcal{O}(5)$. Let $\nu:W\to D$ be the blow up of $\Gamma$ with exceptional divisor $E_W\iso \mathbb{F}_0$. Then $X_{W|E_W}\equiv f_0+7f_1$, where $f_1$ is a fiber of $\nu$. The inequality $\sum k_i\leq \sum h_i\leq7$ is obtained. \end{proof} Consider again the hyperplane section $H=\Pi_{|X}$ and the maximal singularity $f:Y\to X$. Let $D_Y\subset Y$ be any effective irreducible divisor distinct from $E$. Then $D_Y=f^*D-\alpha E$, for some positive $\alpha\in\mathbb{Q}$ and $D\in |\mathcal{O}(d)|$. The divisor $D_Y$ is numerically equivalent to $d H_Y+(d-\alpha)E$. To conclude the step it is therefore enough to prove that the cone of effective divisors on $Y$ is generated by $H_Y$ and $E$. This is the content of the next Lemma. \begin{lem} $NE^1(Y)=\langle H_Y,E\rangle$. \label{le:cono} \end{lem} \begin{rem} This is just a rewriting of the usual exclusion trick. I prove that a linear system like $\mathcal{H}$ has to have a fixed component, in this case $H$. I hope that in this way it is easier to digest and maybe generalize. See also Remark \ref{rem:prob}. \end{rem} \begin{proof} Let $B_Y\subset Y$ be any effective irreducible $\mathbb{Q}$-divisor distinct from $E$ and $H_Y$. Then $B_Y=f^*B-\beta E$, for some positive $\beta\in\mathbb{Q}$ and $B\in |\mathcal{O}(b)|$. Actually $\beta\in \mathbb{Z}$ since $X$ has index 1 and is $\mathbb{Q}$-factorial. I have to prove that $\beta\leq b$. By Lemma \ref{le:ecubo} $\dim \Bsl|S_Y|\leq 0$, hence the cycle $B_Y\cdot H_Y\cdot S_Y$ is effective.
The following inequality is satisfied $$\begin{array}{rl} 0\leq& B_Y\cdot H_Y\cdot S_Y=(f^*(B)-\beta E)(f^*(H)-E)(f^*S-E)\\ =& 8b-3b-9\beta-\beta E^3=5b-(8-\sum k_i/2)\beta \end{array} $$ This proves the claim for $\sum k_i\leq 6$. Assume that $\sum k_i=7$. First I need to better understand this special configuration of singularities. Let $\nu:W\to \Pi$ be the blow up of $\Gamma$ with exceptional divisor $E_W\iso \mathbb{F}_0$, $g$ a ``horizontal'' ruling of $E_W$, and $f_i$ fibers of $\nu$. Then the assumption on the singularities yields $H_{W|E_W}=g+\sum_1^7 f_i$, where the $f_i$ are not necessarily distinct. Note that for each point $y\in\Gamma$ there is a quadric cone $Q^y\subset\Pi$ containing $\Gamma$ and with vertex $y$. Then $Q^y_{W|E_W}=g_y+f$. In particular for any ``horizontal'' ruling $g\subset E_W$ there exists a quadric cone $Q^g\subset \Pi$ such that $Q^g_{W}\supset g$. This proves that there exists a quadric cone, say $\tilde{Q}\subset\mathbb{P}^3$, such that $\tilde{Q}_{|H}=2\Gamma+C$, for some conic $C$. Similarly there exists a cubic surface $\tilde{M}$ such that $\tilde{M}_{|H}=2\Gamma+R$ and $\tilde{M}_{|\tilde{Q}}=2\Gamma$. Therefore the equation of $H$ can be written as $$\tilde{Q}K+\tilde{M}P=0,$$ where $K$ is a quadric and $P$ is a linear form. Assume that $\Pi=(x_4=0)$. Let $\Sigma$ be the linear system of quadrics spanned by $\{\tilde{Q},x_4x_0,\ldots,x_4x_3\}$. Fix a general element $\overline{S}\in\Sigma_{|X}$. By construction we have $H_{|\overline{S}}=2\Gamma+C$. Let $$H_{Y|E}=\Gamma_0+F,$$ where $F$ is $f$-exceptional. Then for effective, $f$-exceptional divisors $F'$ and $G$, we have $$\overline{S}_{Y|E}=\Gamma_0+F^{\prime},\ \ \overline{S}_{Y|H_Y}=\Gamma_0+C+G$$ and \begin{equation} \label{eq:rel} (\overline{S}_Y-E)\cdot H_Y=C+G-F \end{equation} \begin{claim} $F-G\equiv\mathcal{O}_E$\label{cl:caga} \end{claim} \begin{proof}[Proof of the Claim] The cycle $F$ is the $f$-exceptional part of $H_{Y}$.
The cycle $G$ is $f$-exceptional and it is contained in $\Bsl\Sigma_{Y}$, therefore $F-G$ is effective. Let $\phi:=f_{|E}:E\to\Gamma$ be the restriction morphism and $E^0=E\setminus\phi^{-1}(\Sing(X)\cap\Gamma)$. In our notation we have $\Bsl(\Sigma_{Y|E})=\Gamma_0+G$, thus we can assume that $F'= G+M$, for some divisor $M=\phi^*A$ supported on $E^0$. Let me interpret this divisor in a different way. Let $\overline{Q}\in\Sigma$ be the quadric whose restriction to $X$ is $\overline{S}$. Since $\tilde{Q}=\overline{Q}_{|\Pi}$ is a cone then $N_{\Gamma/\overline{Q}}\iso\mathcal{O}(2)\oplus\mathcal{O}(5)$. Let $\nu:W\to\overline{Q}$ be the blow up of $\Gamma$, with exceptional divisor $E_W$. Then a computation similar to that of Lemma \ref{le:bound} yields $$X_{W|E_W}\equiv \Gamma_0+\nu_{|E_W}^*\mathcal{O}(10)\ {\rm and\ }X_{W|E_W}=\Gamma_0+\sum h_i f_i+{\rm effective} $$ where the $h_i\geq k_i$ and $\nu(f_i)\in\Sing(X)$. This proves that $\deg A\leq 10-\sum k_i=3$. Taking into account a reducible quadric in $\Sigma$, we have $F'\equiv F+\phi^*\mathcal{O}(3)$. This shows that $F-G\equiv \phi^*(A-\mathcal{O}(3))$, and together with the bound on the degree of $A$ and the effectivity of $F-G$, this gives the desired $F-G\equiv\mathcal{O}_E$. \end{proof} Projection formula and equation (\ref{eq:ecubo}) at page \pageref{eq:ecubo} yield $$(\overline{S}_Y-E)\cdot H_Y\cdot E=6-2H_Y\cdot E^2=6-2(f^*H\cdot E^2-E^3)=3$$ Then by Claim \ref{cl:caga} and equation (\ref{eq:rel}) I derive \begin{equation} \label{eq:3b} E\cdot C=3 \end{equation} Note that if $C$ is reducible, then each irreducible component $C_i$ is a line. In this case the inequality $E\cdot C_i\leq 2$ is immediate. Thus we proved that for any irreducible component $C_i\subset C$ \begin{equation} \label{eq:finalmente} E\cdot C_i\geq \deg C_i \end{equation} Let us assume that $C$ is irreducible; the reducible case is similar and left to the reader. Assume that $B_{Y|H_Y}=aC+\Delta$, for some effective divisor $\Delta$, with $\Delta\not\supset C$.
The above construction gives $$(f^*(B-a\overline{S})-(\beta-2a)E)_{|H_Y}=\Delta+a(F-G)$$ This proves that $(f^*(B-a\overline{S})-(\beta-2a)E)\cdot C\geq 0$ and we conclude by equation (\ref{eq:finalmente}) that $$(b-2a)\geq (\beta-2a),$$ that is, $\beta\leq b$. \end{proof} \paragraph*{\textsc{Step 3: plane curves}} Here we assume that $\Gamma$ is a plane curve of degree $d$ (by Step~1, $d\leq 3$), other than a line passing through some singular point. Let $\Pi \subset \mathbb{P}^4$ be the plane spanned by $\Gamma$. Fix general members $S$, $S'$ of the linear system $|\mathcal{I}_{\Gamma,X}(1)|$. Here it is helpful and convenient to treat two cases, namely: \begin{description} \item[Case 3.1] $\Gamma\cap \Sing(X)=\emptyset$, and $1\leq d \leq 3$. \item[Case 3.2] $\Gamma\cap \Sing(X)\neq\emptyset$ and $2\leq d \leq 3$. \end{description} \subparagraph*{\textsc{Case 3.1}} I first deal with the easy, and well known, case of curves in the smooth locus. Let $f:Y\to X$ be the maximal singularity with center $\Gamma$, and exceptional divisor $E$. Then $Y$ is just the blow up of $\mathcal{I}_{\Gamma}$ and by Cutkosky's classification, \cite{Cu}, of terminal extractions $$E^3= K_X\cdot\Gamma-2p_a(\Gamma)+2$$ \begin{lem} $NE^1(Y)=\langle S_Y,E\rangle$ \end{lem} \begin{proof} Let $B_Y\subset Y$ be any effective irreducible $\mathbb{Q}$-divisor distinct from $E$ and $S_Y$. Then $B_Y=f^*B-\beta E$, for some positive $\beta\in\mathbb{Z}$ and $B\in |\mathcal{O}(b)|$. The claim is equivalent to proving that $\beta\leq b$. Consider a general element $ D\in |I_{\Gamma, X}(d)|$. The cycle $B_Y\cdot S_Y\cdot D_Y$ is effective, thus $$\begin{array}{rl} 0\leq& B_Y\cdot S_Y\cdot D_Y=(f^*B-\beta E)(f^*S-E)(f^*D-E)\\ =&4bd-bd-d\beta -d^2\beta-\beta E^3=3bd-d^2\beta+(2p_a(\Gamma)-2)\beta \end{array} $$ It is a simple check that for any possible pair $(d,p_a(\Gamma))$ the equation gives $\beta\leq b$. \end{proof} The Lemma finishes off Case~3.1.
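For the reader's convenience, the simple check can be spelled out. A plane curve of degree $d$ has arithmetic genus $p_a=(d-1)(d-2)/2$, so the possible pairs are $(1,0)$, $(2,0)$ and $(3,1)$:

```latex
% Substituting each pair (d, p_a(\Gamma)) into
% 0 \leq 3bd - d^2\beta + (2p_a(\Gamma)-2)\beta :
\begin{align*}
(d,p_a)=(1,0):&\quad 0\leq 3b-\beta-2\beta=3(b-\beta),\\
(d,p_a)=(2,0):&\quad 0\leq 6b-4\beta-2\beta=6(b-\beta),\\
(d,p_a)=(3,1):&\quad 0\leq 9b-9\beta=9(b-\beta).
\end{align*}
% In each case \beta \leq b, as claimed.
```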
\subparagraph*{\textsc{Case 3.2}} From now on we assume that there are singular points along $\Gamma$ and $\Gamma$ is not a line. We work with the linear system $\Sigma=|S, S'|$, even though $\Gamma$ is usually only a component of its base locus $C=S\cap S' = \bs \Sigma= X \cap \Pi$. Write \[ C = \mu \Gamma + \sum \mu_i \Gamma_i \] We are assuming that $X$ is $\mathbb{Q}$-factorial. This implies that $\Pi$ cannot be contained in $X$, and $C$ is a \emph{curve}. Assume first that the intersection $S \cdot S'$ is reduced; then $\mult_\Gamma \mathcal{H} =\mult_\Gamma \mathcal{H}_{|S}$ and $\mult_{\Gamma_i} \mathcal{H} =\mult_{\Gamma_i} \mathcal{H}_{|S}$. \label{pag:plane} We always restrict to $S$ and write $$ \begin{array}{rl} A :=(1/n)\mathcal{H}_{|S}&= L+\gamma\Gamma+\sum \gamma_i\Gamma_i \\ S_{|S}^{\prime}&= C=\Gamma+\sum \Gamma_i \end{array} $$ The technique consists in selecting a ``most favorable'' component of $C$, performing an intersection theory calculation using that $L$ is nef, and getting that $\gamma\leq 1$. This inequality contradicts the hypothesis that $\Gamma$ is the center of a maximal singularity. Indeed, keeping in mind Remark \ref{re:uno}, we have $$n<\mult_{\Gamma}\mathcal{H}=\mult_{\Gamma}\mathcal{H}_{|S}=\gamma n\leq n$$ \begin{rem} \label{rem:prob} This is similar to what I did with the twisted cubic with $\sum k_i=7$, see Lemma \ref{le:cono}. I believe that the $E^3$ approach works also in this case, but I did not check it. On the other hand each different configuration needs different calculations. For this reason I developed a unified approach with more emphasis on the intersection theory on $S$. \end{rem} Because $\Gamma$ is a center of a \emph{maximal} singularity, $\gamma \geq \gamma_1$ and $\gamma\geq\gamma_2$; hence, possibly after relabeling components of $C$, we can assume that: \[ \gamma\geq \gamma_2\geq \gamma_1. \] Consider now the effective $\mathbb{Q}$-divisor \[ (A-\gamma_1 S')_{|S}=L+(\gamma-\gamma_1) \Gamma+(\gamma_2-\gamma_1)\Gamma_2.
\] I now show that $(\Gamma\cdot\Gamma_1)_{S}\geq \deg \Gamma_1$; together with the last displayed equation this implies that $\gamma \leq 1$ and finishes the proof. The curve $\Gamma_1$ is either a line or a conic. Let $D\in|\mathcal{I}_{\Gamma_1,\mathbb{P}^4}(\deg \Gamma_1)|$ be a general element, and $D_{|S}=\Gamma_1+F$. Since $D\cap\Pi=\Gamma_1$ is a complete intersection, then $F$ intersects $\Gamma$ only at $\Sing(X)\cap\Gamma_1\cap\Gamma$. Fix a point $p\in \Sing(X)\cap\Gamma\cap\Gamma_1$. Assume that $p\in X\sim (0\in (xy+z^2+t^l=0))$. Let $f_1:X_1\to X$ be the blow up of $p\in X$ with exceptional divisor $E_1$. Then $S_{1|E_1}=C_1$ is a conic and since $\Bsl\Sigma=\Pi$ then $C_1$ is reduced. This proves that either $S_{1|E_1}$ is smooth or it has one singular point only, say $x_1$, and $C_1$ is a pair of lines. Let $f_2:X_2\to X_1$ be the blow up of $x_1$, with exceptional divisor $E_2$. If $x_1\in X_1$ is a smooth point then $E_2$ is a plane, and $\Bsl\Sigma_2$ is contained in a line. The surface $S_2$ is smooth, and $S_1$ was already non-singular. Otherwise $x_1\in X_1\sim (0\in (xy+z^2+t^{l-2}=0))$ and we simply repeat the same argument. This gives a morphism $\nu:W\to X$, with exceptional divisors $G_i$, for $i=1,\ldots,g$, such that $\nu_{|S_W}:S_W\to S$ is a minimal resolution. Moreover $S_W\cap G _i=L_i\cup R_i$ is a pair of disjoint (-2)-curves, for any $i<g$, and $S_W\cap G_g=T$ is either a (-2)-curve or a pair of (-2)-curves intersecting in a point. This proves that $p\in S$ is an $A_m$ point, with $m\leq l$. Furthermore $F$ is smooth at $p$. Number all irreducible components of the resolution $\nu_{|S_W}$ from 1 to $m=2g-\epsilon$, where $\epsilon=1,0$, according to the parity of $m$. Start with $L_1=:E_1$, then $L_i=:E_i$ and $R_i=E_{m+1-i}$ for any $i<g$. Similarly let $T=L_g\cup R_g=:E_g\cup E_{g+1}$, where $E_g\cdot E_{g+1}=1$, if it is reducible and $T=E_g$ if it is irreducible.
As our aim is to calculate an intersection product we need to understand the pairs $(\Gamma,S)$, $(\Gamma_1,S)$, and $(F,S)$. If $(\Gamma_1)_W\cap T=\emptyset$ then there exists an index $j<g$ such that $(\Gamma_1)_W\cdot E_j=1$ and $F_W\cdot E_{m+1-j}=1$. If $(\Gamma_1)_W\cap T\neq\emptyset$ and $T=L_g\cup R_g$ is reducible we labeled the components in such a way that $(\Gamma_1)_W\cdot E_g=1$ and $F_W\cdot E_{g+1}=1$. Finally for $(\Gamma_1)_W\cap T\neq\emptyset$ and $T$ irreducible then $(\Gamma_1)_W\cdot E_g=F_W\cdot E_g=1$. In any case $(\Gamma_1,S)$ is of type $A_m^j$, for some $j\leq m+1-j$, while $(F,S)$ is of type $A_m^{m+1-j}$. Let $$\nu_{|S_W}^*(\Gamma_1)=(\Gamma_1)_W+\sum r_i E_i$$ and $$\nu_{|S_W}^*(F)=F_W+\sum f_i E_i.$$ Then the $r_i$s and the $f_i$s are completely determined by equation (\ref{eq:akn}) at page \pageref{eq:akn}. The index $j$ satisfies the inequality $j\leq m+1-j$ by hypothesis. If $i\leq m+1-i$ also holds, then $m+1-j\geq i$. Thus for any index $i$ such that $i\leq m+1-i$ we have, $$ (m+1)(r_i-f_i)=\left\{\begin{array}{ll} (m+1-2j)i & {\rm\ if \ }i\leq j \\ (m+1-2i)j& {\rm\ if \ }i\geq j \end{array} \right. $$ These yield \begin{equation} \label{eq:conto}r_i\geq f_i \mbox{\rm \ for any $i\leq m+1-i$.} \end{equation} The curve $\Gamma\subset\Pi$ has at most a simple node or a simple cusp, hence \begin{equation} \label{eq:left} \Gamma\cdot E_i=0 \ \mbox{for any $i>g$ i.e.
$i>m+1-i$} \end{equation} By construction $F_W\cdot \Gamma_W=0$, therefore by the projection formula $$(\Gamma_1\cdot\Gamma-F\cdot \Gamma)_p\geq \sum_i(r_i-f_i)\Gamma_W\cdot E_i.$$ By equation (\ref{eq:left}) we can restrict the summation to indexes satisfying $i\leq m+1-i$ and equation (\ref{eq:conto}) yields $$(\Gamma_1\cdot\Gamma)_p-(F\cdot \Gamma)_p\geq 0.$$ Finally all contributions coming from singular points give $$\deg \Gamma_1 \deg\Gamma=D\cdot \Gamma=((\Gamma_1+F)\cdot\Gamma)_{S}\leq 2 (\Gamma_1\cdot\Gamma)_{S},$$ and consequently the needed bound since $\deg\Gamma\geq 2$. Next we consider the case in which $S_{|S}^{\prime}=\Gamma+2l$, where $\Gamma$ is a conic and $l$ is a line. Again $S$ is smooth at $(\Gamma\cap\Gamma_1)\setminus(\Sing(X)\cap\Gamma)$ as well as on the generic point of $l$. Indeed we are just fixing a plane, therefore we can always choose a hyperplane containing $\Pi$, and not tangent to $X$ at both $(\Gamma\cap\Gamma_1)\setminus(\Sing(X)\cap\Gamma)$ and at the generic point of $l$. Then $\mathcal{H}_{|S}=\mathcal{L}+\gamma\Gamma+\alpha l$. Consider the $\mathbb{Q}$-divisor $$(\mathcal{H}-(\alpha/2)S')_{|S}=\mathcal{L}+(\gamma-\alpha/2)\Gamma.$$ Then $$(1-(\alpha/2))\geq (\gamma-\alpha/2)\Gamma\cdot l.$$ To exclude this case we argue exactly as before that $\Gamma\cdot l\geq 1$. Keep in mind that also in this case $S$ has only isolated singularities. Therefore locally around $p$ all the calculations are the same. Finally we have to treat the double conic case. That is, assume that $S_{|S}^{\prime}=2\Gamma$. If there exists a hyperplane section $\tilde{S}$ such that $\mult_{\Gamma} \tilde{S}=2$ then for a general hyperplane section $H$ $$4=H\cdot\frac{\mathcal{H}}n\cdot\tilde{S}\geq \frac4n\mult_{\Gamma}\mathcal{H}=4\gamma.$$ We can therefore assume that the tangent space to $X$ along $\Gamma\setminus(\Gamma\cap \Sing(X))$ is not fixed.
It is immediate to observe that for any smooth point $p\in\Gamma$ the embedded tangent space contains $\Pi$. Let us fix the following notation: \begin{itemize} \item[-] $\Gamma\subset\Pi\subset\mathbb{P}^4\sim (x_0x_4+x_3^2=0)\subset (x_1=x_2=0)\subset\mathbb{P}^4$, \item[-] $x\equiv(1:0:0:0:0)\not\in \Sing(X)$, \item[-] $T_xX=(x_1=0)$, \item[-] $S=(x_1=0)_{|X}$, \item[-] $S'=(x_2=0)_{|X}$. \end{itemize} By construction $\mathcal{H}_{|S}=g\Gamma+\mathcal{L}$, where $\mathcal{L}$ is a linear system without fixed components and $g\geq \gamma$. Up to considering $2\mathcal{H}$ we can further assume that $g=2k$ is even. Since $S\cdot S'=2\Gamma$ then a general divisor $H\in\mathcal{H}$ has an equation of type $$H=(x_2^kL_1+x_1^kL_2)_{|X},$$ where $\deg L_i\geq 1$. Let $y\equiv(0:0:0:0:1)$; we can assume without loss of generality that $y\not\in \Sing(X)$, $T_yX=(x_1+x_2=0)$ and $L_1(y)\neq 0$, $L_2(y)\neq0$. The equation of $X$ is of the form $$(x_0x_4+x_3^2)^2+x_1F_1+x_2F_2=0.$$ To express $X$ at the point $y$ in a better way, we can rewrite it as follows: $$x_4^3(x_1+x_2)+x_4^2(x_0^2+x_1R_1+x_2R_2)+x_4C+D=0.$$ Let $F=(x_2^kL_1+x_1^kL_2=0)$. I claim that, due to the monomial $x_0^2x_4^2$, $$\mult_y F_{|X}\leq k+1\leq \deg F.$$ Indeed let $\nu:Y\to\mathbb{P}^4$ be the blow up of the point $y$. Let $y_i$ be the coordinates in the exceptional divisor $E_0$ of equation $(x_3=0)$ in the affine piece $y_3\neq 0$. Then $$F_Y=(y_2^kL_1^{\prime}+y_1^kL_2^{\prime}=0){\rm,\ and}\ \ X_Y=(y_1+y_2+(y_0^2+y_1R_1^{\prime}+y_2R_2^{\prime})x_3+G^{\prime}x_3^2),$$ and $$\mult_y F=k.$$ Let $\mu:W\to Y$ be the blow up of $G=X_{Y|E_0}$, with equations $x_3=y_1+y_2=0$. Let $t$ be the coordinate in the exceptional divisor $E_1$ of equation $(x_3=0)$. The polynomials $L_i$ do not vanish at $y$, hence $F_{Y|E_0}=(\alpha y_1^k+\beta y_2^k)$, for some nonzero numbers $\alpha$ and $\beta$.
Therefore $(y_1+y_2)^2$ does not divide $(\alpha y_1^k+\beta y_2^k)$, and $$\mult_{G} F_Y\leq 1.$$ If $\mult_{G} F_Y= 1$ then $$F_W=(tA_0+y_1A_1+y_2A_2+x_3 B)\ \ {\rm and}\ \ X_W=(t+y_0^2+M+Nx_3),$$ for nonzero polynomials $A_i$, and $B$, and due to the presence of the monomial $y_0^2$ the divisor $F_{W|E_1}$ does not contain $X_{W|E_1}$ and consequently $\mult_y F_{|X}\leq k+1$. This inequality concludes the proof. \end{proof} \begin{rem} I proved that a double conic is never the center of maximal singularities on any terminal $\mathbb{Q}$-factorial quartic. This relaxes the assumptions in \cite[Theorem 1.1]{CMp}. \end{rem} It remains to adapt the proof to arbitrary fields of characteristic 0. \begin{proof}[Proof of Theorem \ref{th:k}] Let again $\Gamma$ be a center of maximal singularities for the linear system $\mathcal{H}\subset|\mathcal{O}(n)|$. If $\Gamma$ is defined over $k$ then the whole proof works exactly as in the algebraically closed field case. The only observations I want to add are the following. When $\Gamma$ is a twisted cubic then $\Pi\iso\mathbb{P}^3\supset \Gamma$ is defined over $k$. Moreover $H=\Pi_{|X}$ has to be smooth on the generic point of $\Gamma$, as in the proof of Lemma \ref{le:bound}, and hence irreducible, by $\mathbb{Q}$-factoriality. When $\Gamma$ is a plane curve of degree greater than 1, the plane $\Pi\supset\Gamma$ is defined over $k$, and $\Pi\cap X$ is a curve. Assume that $\Gamma$ is not defined over $k$, and let $r=\deg[k(\Gamma):k]$. If $\Gamma=P$ is a point then $4n^2=\mathcal{H}^2\cdot\mathcal{O}(1)\geq r(\mult_P\mathcal{H})^2$. If $P$ is smooth then $\mult_P\mathcal{H}^2>4n^2$, \cite[Theorem 3.1]{Co2}, while for singular $P$, $\mult_P\mathcal{H}>n$, \cite[Theorem 3.10]{Co2}, and consequently $\mult_P\mathcal{H}^2>2n^2$, since the exceptional divisor is a quadric. This proves that when $r\geq 2$ no point can be a center of maximal singularities.
If $\Gamma$ is a curve then again by numerical reasons $$4n=\mathcal{H}\cdot\mathcal{O}(1)^2\geq r\deg\Gamma\mult_{\Gamma}\mathcal{H}>2n\deg\Gamma,$$ so that $\Gamma$ is a line and $r\leq 3$. Let $\Gamma_i$ be the conjugate lines over $k$. First observe that $\Gamma\cap\Gamma_i\neq \emptyset$. Indeed they are both centers of maximal singularities over $\overline{k}$ and we can untwist $\Gamma$ over $\overline{k}$. If $\Gamma_i$ is disjoint from $\Gamma$, the untwist is an isomorphism on the generic point of $\Gamma_i$. This is very clear from our description in terms of Sarkisov links. Then its strict transform is a curve, say $\Gamma_i^{\prime}$, of degree $g>1$. Let $\mathcal{H}'$ be the untwist of $\mathcal{H}$; then by Lemma \ref{le:untw}, $\mathcal{H}'\in|\mathcal{O}(n')|$, for some $n'<n$. But then $\mult_{\Gamma_i^{\prime}}\mathcal{H}'>n$ and this is not allowed by the proof of Theorem \ref{th:cA1}. To conclude we have to study conjugate lines intersecting in a point. Assume that $r=2$, and denote by $\Pi \subset \mathbb{P}^4$ the plane spanned by $\Gamma$ and $\Gamma_1$. Let $S$, $S'$ be general members of the linear system $|\mathcal{I}_{\Pi,X}(1)|$. Observe that $\Pi$ is defined over $k$, therefore $\Pi\cap X=(\Gamma+\Gamma_1)+ \Delta$ is a curve. By the proof of Theorem \ref{th:center}, since $X$ has only ordinary double points, all singular points of $S$ are of type $0\in(xy+z^{t+1}=0)$, with $t\leq 2$. Assume that $\Pi_{|X}\neq 2(\Gamma+\Gamma_1)$; then following the same arguments of page \pageref{pag:plane} we have to prove that for any irreducible curve $C\subset\Delta$ $$(\Gamma+\Gamma_1)\cdot C\geq \deg C.$$ Fix a point $p\in C\cap\Gamma$.
Since both $C$ and $\Gamma$ are curves contained in $\Pi$ and $p\in S\sim(0\in (xy+z^t=0))$, with $t\leq 3$, then $$(C\cdot\Gamma)_p\geq \frac12.$$ Similarly for $\Gamma_1$, so that $$C\cdot(\Gamma+\Gamma_1)\geq 2\deg C\cdot\frac12=\deg C.$$ If $\Pi\cap X=2(\Gamma+\Gamma_1)$ then, up to a projectivity, we can write the equation of $X/\overline{k}$ as $$x_0^2x_4^2+x_1F_1+x_2F_2=0.$$ Then we derive a contradiction as in the double conic case. Keep in mind that the crucial point was the presence of the monomial $x_0^2x_4^2$. Note that the two lines are centers of maximal singularities over $\overline{k}$. Here we proved that they are not centers of maximal singularities with the same associated linear system. The case $r=3$ is similar. If all the lines lie on the same $k$-plane I conclude as above. If they span a $\mathbb{P}^3$, say $\Pi$, then $\Pi$ is defined over $k$. Moreover $H=\Pi_{|X}$ has to be smooth on the generic point of the lines, as in the proof of Lemma \ref{le:bound}. Therefore the plane spanned by each pair of lines is not contained in $X$ and I conclude as before. \end{proof} \end{document}
Short-term stratospheric ozone fluctuations observed by GROMOS microwave radiometer at Bern

Lorena Moreira, Klemens Hocke & Niklaus Kämpfer

Earth, Planets and Space volume 70, Article number: 8 (2018)

The ground-based millimeter wave ozone spectrometer (GROMOS) has been continually measuring middle atmospheric ozone volume mixing ratio profiles above Bern, Switzerland (\(46.95^{\circ }\hbox {N}\), \(7.44^{\circ }\hbox {E}\), 577 m), since 1994 in the frame of NDACC. The high temporal resolution of GROMOS (30 min) allows the analysis of short-term fluctuations. The present study analyses the temporal perturbations, ranging from 1 to 8 h, observed in stratospheric ozone from June 2011 to May 2012. The short-term fluctuations of stratospheric ozone are within 5%, and GROMOS appears to have relative amplitudes stable over time smaller than 2% at 10 hPa (32 km). The strongest natural fluctuations of stratospheric ozone (about 1% at 10 hPa) above Bern occur during winter due to displacements and deformations of the polar vortex towards mid-latitudes.

Even though ozone is a minor constituent in the atmosphere, it is a component of major interest. Stratospheric ozone filters most of the UV-B radiation from the Sun and thus allows life on Earth. High-resolution observations of minor constituents in the middle atmosphere often reveal small-scale perturbations (present in the horizontal, in the vertical and in time) overlaid upon the mean background distribution (Eckermann et al. 1998). Atmospheric waves were identified as an important source of such variability (Zhu and Holton 1986; Appenzeller and Holton 1997; Eckermann et al. 1998; Calisesi et al. 2001; Fritts and Alexander 2003; Moustaoui et al. 2003; Hocke et al. 2006; Noguchi et al. 2006; Flury et al. 2009; Chane et al. 2016).
For example, the wintertime Arctic middle atmosphere is characterised by the presence of large amplitude planetary Rossby waves that often interact with the stratospheric polar vortex and trigger sudden stratospheric warming (SSW) events (Chandran et al. 2013). These events are characterised by a sudden increase in the temperature and a reversal of the zonal wind. The effect in ozone at mid-latitudes is depletion in the lower and upper stratosphere. The depletion in lower stratospheric ozone is due to transport of ozone poor air from the polar vortex, whereas the ozone depletion in the upper stratosphere is caused by the sudden increase in temperature (Flury et al. 2009). Gravity waves (GW) may also play a role in forcing of SSW when propagating into the stratosphere as a consequence of variations in the tropopause jet during instabilities in the upper troposphere (Flury et al. 2010; Yamashita et al. 2010). Mid-latitude gravity waves produce periodic fluctuations of ozone volume mixing ratio in the upper stratosphere and lower mesosphere (Hocke et al. 2006), possibly due to vertical advection of air parcels by gravity waves (Zhu and Holton 1986; Eckermann et al. 1998). Another example is horizontal mixing through the transport barriers: both small-scale structures (filaments or laminae) and large-scale structures (streamers) are thought to play a significant role in the variability of atmospheric trace gases in the middle stratosphere (Krüger et al. 2005). Atmospheric soundings at middle and high latitudes frequently disclose enhancements or depletions of lower stratospheric ozone confined to narrow vertical layers (Eckermann et al. 1998). These structures are known as filaments or laminae, and are mainly observed at the edge of the polar vortex in the lower stratosphere (tropopause–25 km), generated by planetary Rossby wave breaking (Eckermann et al. 1998; Krüger et al. 2005).
Regarding the large-scale structures, there are two types of stratospheric streamers: the tropical–subtropical streamer and the polar vortex streamer (Waugh 1993; Offermann et al. 1999; Manney et al. 2001; Eyring et al. 2003; Krüger et al. 2005). These structures transport tropical–subtropical and polar vortex air masses into mid-latitudes more frequently during Arctic winter. The breaking of planetary Rossby waves at the edge of the polar vortex (polar vortex streamers) or at the edge of the tropics (tropical–subtropical streamers) seems to be linked with the transport of air masses into mid-latitudes (Krüger et al. 2005). The observed effect in ozone is the meridional mixing of ozone (Eyring et al. 2003). The stratospheric streamers occur preferentially at higher altitudes above 20 km, in the middle stratosphere, in contrast to filaments (laminae), which occur below 25 km, in the lower stratosphere (Krüger et al. 2005). Nevertheless, a streamer can eventually develop into a filament-like structure (Waugh 1993; Krüger et al. 2005). Many of the instruments used for measuring the composition of the atmosphere make use of the spectral properties of its constituent gases (Parrish 1994). Millimeter wave radiometry is a well-established tool for the monitoring of atmospheric species. It is a passive remote sensing technique which detects radiation emitted by rotational transitions of molecules in the atmosphere. The spectral analysis of the pressure broadened lines emitted by the species under analysis permits the retrieval of the vertical profile from the lower stratosphere to the mesosphere (20–70 km) (Kämpfer 1995). The technique enables round-the-clock measurements, since it does not rely on the Sun as a source, and works under nearly all weather conditions. In addition, microwave radiometry provides high temporal resolution.
In the present study, we take advantage of this feature and we analyse the short-term fluctuations (1–4 h) of stratospheric ozone measured by the GROMOS radiometer. The aim of this analysis is to initiate a new field of study regarding the short-term stratospheric perturbations in trace constituents, ozone in our case, since we are not aware of any other studies on this topic, except for Hocke et al. (2006), which is restricted to mesospheric fluctuations. The characterisation of the short-term ozone fluctuations can lead to a better understanding of the role of atmospheric waves and nonlinear wave-wave interactions to induce perturbations in trace gas profiles. "The GROMOS radiometer" section describes the GROMOS instrument along with an overview of the retrieval method, which has been modified to enable the study of these small-scale perturbations. The "Method" section explains the method used for this purpose. The "Results and discussion" section shows the results obtained along with a discussion on the geophysical causes of short-term mid- and upper stratospheric ozone fluctuations. Finally, the "Conclusions" section offers some concluding remarks.

The GROMOS radiometer
GROMOS (ground-based millimeter wave ozone spectrometer) was constructed by the Institute of Applied Physics of the University of Bern. The instrument has been operated in Bern (\(46.95^{\circ }\hbox {N}\), \(7.44^{\circ }\hbox {E}\), 577 m) since November 1994 in the scope of the Network for the Detection of Atmospheric Composition Change (NDACC). NDACC is an international global network of more than 80 stations making high-quality measurements of atmospheric composition that began official operations in 1991 (Mazière et al. 2017). The GROMOS microwave radiometer detects the thermal emission of the pressure broadened rotational transition of ozone at 142.175 GHz. The spectrum measured by the instrument is given in terms of brightness temperature.
The brightness temperature is the physical temperature at which a perfect blackbody would emit the same power as measured by the instrument. For a review of technical details on the instrument, we refer to, for instance, Moreira et al. (2015) or Peter (1997).

Retrieval technique
The shape of the spectral line measured by the instrument contains information on the vertical distribution of the emitting molecule, because of the pressure broadening (Parrish 1994). Therefore, the vertical distribution of ozone VMR can be retrieved from the observed emission line shape by means of radiative transfer in a model atmosphere and an optimal estimation method. The atmospheric radiative transfer simulator (ARTS2) (Eriksson et al. 2011) is used as forward model to simulate atmospheric radiative transfer and calculate an ozone spectrum for the modelled atmosphere by using an a priori ozone profile. The inversion is done through the accompanying MATLAB package Qpack2 (Eriksson et al. 2005), which uses the optimal estimation method (OEM) (Rodgers 1976) to derive the best estimate of the vertical profile by combining the measured and modelled spectra. During the inversion process, a priori information is required. The a priori ozone profiles are from a monthly climatology from reanalysis data of the European Centre for Medium-range Weather Forecasts (ECMWF) up to 70 km and extended above by a monthly ozone climatology from observations close to Bern of the satellite microwave limb sounder Aura/MLS. In the present study, we use retrieval version 111, which is optimised for the retrieval of short-term ozone fluctuations, since we take into account uncertainties of the retrieved ozone resulting from the tropospheric opacity, as described later in more detail. The a priori covariance matrix of retrieval version 111 has diagonal elements of 2 ppm, and the values decay exponentially with a correlation length of 3 km for the off-diagonal elements.
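As an illustration of how this a priori information enters the retrieval, the following sketch builds an a priori covariance matrix from the recipe above (diagonal elements of 2 ppm, exponential decay with a 3 km correlation length) and applies the standard linear optimal estimation update. It is a toy stand-alone calculation, not the actual ARTS2/Qpack2 code: the altitude grid, forward-model kernels and noise level are invented for the example.

```python
import numpy as np

# Altitude grid in km (hypothetical; the real retrieval grid differs).
z = np.arange(20.0, 71.0, 2.0)
n = z.size

# A priori covariance: diagonal elements of 2 ppm, off-diagonal values
# decaying exponentially with a 3 km correlation length (version 111 recipe).
Sa = 2.0 * np.exp(-np.abs(z[:, None] - z[None, :]) / 3.0)

# Toy linear forward model y = K x: each channel is a broad weighted
# average of the profile (invented kernels, for illustration only).
m = 30
centers = np.linspace(20.0, 70.0, m)
K = np.exp(-((z[None, :] - centers[:, None]) / 10.0) ** 2)
K /= K.sum(axis=1, keepdims=True)
Se = 0.1 ** 2 * np.eye(m)          # measurement noise covariance

# Optimal estimation (Rodgers):
#   x_hat = x_a + G (y - K x_a),  G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1
x_true = 5.0 + np.sin(z / 8.0)     # "true" ozone profile, ppm
x_a = np.full(n, 5.0)              # a priori profile, ppm
y = K @ x_true                     # noise-free synthetic measurement
Se_inv = np.linalg.inv(Se)
G = np.linalg.solve(K.T @ Se_inv @ K + np.linalg.inv(Sa), K.T @ Se_inv)
x_hat = x_a + G @ (y - K @ x_a)

# Averaging kernel matrix and measurement response (row sums of A).
A = G @ K
mr = A.sum(axis=1)
```

With noise-free synthetic data the retrieval moves from the a priori towards the true profile, and the measurement response indicates where the measurement, rather than the a priori, dominates.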
Figure 1 shows the a priori (green line–left panel) and the retrieved profile (blue line–left panel) recorded at noon on 28 August 2011, obtained by the retrieval version 111. The averaging kernels (AVKs) and the area of the AVKs, the measurement response (MR), are represented in the middle panel. The AVKs are multiplied by 4 in order to be displayed along with the MR (red line–middle panel). The AVK-lines are the grey lines except for some altitude levels, which are in different colours: orange for 20 km, green for 30 km, magenta for 40 km, cyan for 50 km, black for 60 km and blue for 70 km. The a priori contribution to the retrieved profile can be estimated as 1 minus the area of the AVKs, i.e. 1 minus the measurement response (MR–middle panel). A reliable altitude range for the retrieval is considered where the MR is larger than 0.8, which corresponds to an a priori contribution smaller than 20%. The vertical resolution (cyan line–right panel) is quantified by the full width at half maximum of the AVKs. The vertical resolution of GROMOS ranges from 10 to 18 km in the stratosphere and up to 20 km in the middle mesosphere. The same panel shows the altitude peak (magenta line–right panel) of the corresponding kernels, and as can be observed from the coloured AVK-lines, the AVKs peak at their nominal altitudes for the considered vertical range. The signal from the stratosphere detected by the instrument is attenuated by the troposphere, mainly due to water vapour content. Tropospheric water vapour significantly influences the measured ozone spectrum by increasing the continuum emission and attenuating the stratospheric ozone emission line. Accordingly, it is important to correct the measured spectra for the tropospheric effect. The tropospheric correction depends upon the opacity of the troposphere.
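The two diagnostics discussed above, the measurement response and the vertical resolution, are simple functions of an AVK row. The following stand-alone sketch computes them for a synthetic Gaussian kernel row; all numbers are invented for the illustration.

```python
import math

# Altitude grid (km) and a synthetic averaging-kernel row peaking at 40 km.
z = [20 + 2 * i for i in range(26)]
z0, width = 40.0, 6.0
raw = [math.exp(-((zi - z0) / width) ** 2) for zi in z]

# Scale the row so that its area (the measurement response) is 0.9.
target_mr = 0.9
avk = [r * target_mr / sum(raw) for r in raw]

# Measurement response = area of the AVK row; the a priori contribution
# is 1 - MR, and the profile is trusted where MR > 0.8.
mr = sum(avk)
apriori_contribution = 1.0 - mr

# Vertical resolution = full width at half maximum of the AVK row.
half = max(avk) / 2.0
above = [zi for zi, a in zip(z, avk) if a >= half]
fwhm = above[-1] - above[0]          # km
```

For this kernel the measurement response is 0.9 (a priori contribution of 10%, inside the reliable range), and the full width at half maximum is read off directly from the grid points above half of the peak value.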
The transmission factor is given by
$$\begin{aligned} e^{-\tau }=\frac{T_{\rm b}(z_{0})-T_{\rm trop}}{T_{\rm b}(z_{\rm trop})-T_{\rm trop}} \end{aligned}$$ (1)
$$\begin{aligned} T_{\rm trop}=T_{\rm surface}+\Delta T \end{aligned}$$ (2)
where \(\tau\) is the tropospheric opacity, which can be calculated from the wings of the measured spectrum, about 0.5 GHz away from the ozone line centre. \(T_{\rm trop}\) is the mean tropospheric temperature (Eq. 2), estimated with the linear model of Ingold et al. (1998) from the surface air temperature (\(T_{\rm surface}\)) measured at a nearby weather station and a temperature offset \(\Delta T=-\,10.4\) K, which depends on the frequency range (142 GHz) and on the altitude (577 m a.s.l.). \(T_{\rm b}(z_{\rm trop})\) is the brightness temperature that the radiometer would measure at the tropopause level. \(T_{\rm b}(z_{0})\) is the brightness temperature measured at ground level, estimated from the off-resonance emission at the wings of the spectrum. The brightness temperature of the wings corresponds to the continuum emission of tropospheric oxygen and water vapour. The larger (smaller) the tropospheric opacity, i.e. the smaller (larger) the transmittance of the signal through the troposphere, the larger (smaller) the tropospheric correction. The correction of the tropospheric contribution consists of scaling the amplitude of the measured line spectrum as if it were measured at the tropopause level in an isothermal troposphere with the mean tropospheric temperature \(T_{\rm trop}\). However, the tropospheric correction also magnifies the noise in the spectrum. Figure 2 shows two measurements of GROMOS spectra binned in frequency, one for a clear-sky case and one for a cloudy-sky case. The GROMOS spectra are binned in time and frequency. The binning in time is 30 min.
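A minimal sketch of this scaling, inverting Eq. 1 for the tropopause-level spectrum (function and variable names are our own, not from the GROMOS processing chain):

```python
import numpy as np

def tropospheric_correction(tb_ground, t_surface, tau, delta_t=-10.4):
    """Scale a ground-level spectrum to tropopause level (Eqs. 1 and 2).

    tb_ground : measured brightness temperature spectrum [K]
    t_surface : surface air temperature [K]
    tau       : tropospheric opacity from the spectrum wings
    delta_t   : temperature offset of the Ingold et al. (1998) model [K]
    """
    t_trop = t_surface + delta_t            # mean tropospheric temperature
    transmittance = np.exp(-tau)
    # invert Eq. 1 for the brightness temperature at tropopause level
    tb_trop = (tb_ground - t_trop) / transmittance + t_trop
    return tb_trop, transmittance
```

With \(\tau = 0\) (a transparent troposphere) the spectrum is unchanged; larger opacities scale the line amplitude and, with it, the noise.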
The fast Fourier transform spectrometer (FFTS) has 32,768 channels; after the binning in frequency, the number of frequency points is 54, with higher frequency resolution in the line centre than in the line wings. In the clear-sky case, the brightness temperature \((T_{\rm b})\) is around 90 K at the peak (142.175 GHz), whereas in the cloudy-sky case the brightness temperature is considerably higher: 200 K at the peak and 192 K on the wings. Thus, when the sky is cloudy, the instrument measures higher brightness temperatures. The second row shows the spectra \((T'_{\rm b})\) of both cases after the application of the tropospheric correction. The consequence of a larger tropospheric correction is visible in the third row, which shows the error of the brightness temperature \((\Delta T'_{\rm b})\) for both cases. \(\Delta T'_{\rm b}\) in the cloudy-sky case is larger than in the clear-sky case, due to the amplification of the error by the tropospheric correction. Accordingly, the retrieval error of the measurements is larger when the tropospheric correction is larger, i.e. when the tropospheric transmittance is smaller. Normally, the GROMOS retrieval is performed with a constant error of \(\Delta T'_{\rm b}=0.8\hbox{ K}\), but retrieval version 111 of this study uses a variable error depending on the tropospheric transmission:
$$\begin{aligned} \Delta T'_{\rm b}=0.5 {\rm K} +\frac{\Delta T_{\rm b}}{e^{-\tau }} \end{aligned}$$ (3)
where the error of the measured brightness temperature, \(\Delta T_{\rm b}\), is given by the radiometer equation:
$$\begin{aligned} \Delta T_{\rm b}=\frac{ T_{\rm b}+T_{\rm rec}}{\sqrt{\Delta f \cdot t_{\rm int}}} \end{aligned}$$ (4)
The radiometer equation gives the resolution of the measured radiation, which is determined by the bandwidth of the individual spectrometer channels (\(\Delta f\)), by the integration time (\(t_{\rm int}\)) and by the total power measured by the spectrometer.
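Eqs. 3 and 4 combine into a few lines (an illustrative sketch with our own names; the example values below — a receiver noise temperature of about 2500 K, a 30.5 kHz channel and 1800 s integration — are assumptions drawn from figures quoted elsewhere in the text, not a prescribed configuration):

```python
import numpy as np

def measurement_noise(tb, t_rec, df, t_int, tau):
    """Radiometer noise (Eq. 4) and the transmission-dependent
    measurement error of retrieval version 111 (Eq. 3), all in K."""
    delta_tb = (tb + t_rec) / np.sqrt(df * t_int)   # radiometer equation
    delta_tb_prime = 0.5 + delta_tb / np.exp(-tau)  # amplified by the correction
    return delta_tb, delta_tb_prime
```

For instance, \(T_{\rm b}=90\) K, \(T_{\rm rec}=2500\) K, \(\Delta f=30.5\) kHz and \(t_{\rm int}=1800\) s give a per-channel \(\Delta T_{\rm b}\) of roughly 0.35 K, which a low transmittance then inflates through the factor \(e^{\tau}\).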
A constant error of 0.5 K is included as a systematic bias of the spectra, due to spectroscopic errors and the water vapour continuum. As shown in Fig. 2, the error of the brightness temperature (\(\Delta T_{\rm b}\)) is of the order of a few kelvin in the line centre and 0.5 K in the line wings of the spectrum. The measurement noise (\(\Delta T'_{\rm b}\)) therefore depends on frequency, through the different spectral binning, and on the tropospheric transmittance. \(\Delta T'_{\rm b}\) is larger in the ozone line centre since the binning there is over only a few channels, while the binning in the line wings is over several hundred channels or more. Thus, the thermal noise is reduced in the ozone line wings by averaging over a large number of channels. This is a more realistic approach for the retrieval than assuming a constant measurement noise, and it improves the retrieved ozone VMR in the lower stratosphere. The present study analyses the short-term fluctuations in stratospheric ozone measured by the GROMOS radiometer from June 2011 to May 2012. We selected this time interval since it covers a full year with the winter in the centre. Further, the disturbance of the northern polar vortex was relatively simple in winter 2011/2012, which was mainly characterised by a minor sudden stratospheric warming in mid-January 2012 (Chandran et al. 2013). Thus, the attribution between the polar vortex disturbance and the behaviour of the short-term ozone fluctuations above Bern is easier in this year than in other years. We use the standard deviation of 8 consecutive ozone profiles within a time window of about 4 h, calculated after removing the linear trend, as a proxy for the strength of the fluctuations. The standard deviation quantifies the amount of dispersion of a set of data around its mean.
The deviation is higher when the data are spread out from the mean, which in our case indicates stronger fluctuations. Since the sampling rate is 30 min, oscillations with periods from 1 h (the Nyquist period) to about 8 h contribute to the calculated standard deviation. An example of these standard deviations is presented in Fig. 3. The first panel of Fig. 3 shows the ozone VMR (ppm) as the blue line, the mean of 8 consecutive ozone values (red line) and the standard deviation calculated every 8 ozone values after removal of the linear trend (red area) at 10 hPa, for a time interval of nearly 2 days at the beginning of June 2011. The second panel of Fig. 3 displays the tropospheric transmittance observed for the same interval as the upper panel. This tropospheric transmittance corresponds to cloudy-sky cases, in which tropospheric corrections were performed. We cannot find a clear relation between the tropospheric transmittance (green line) and the fluctuations (red area). Later, we quantify that the uncertainty of the ozone retrieval depends only marginally on the tropospheric transmission. We conclude from Fig. 3 that GROMOS measurements are stable over time, with a standard deviation around 2% (0.15 ppm) at 10 hPa (32 km). Generally, the relative amplitudes are stable over time within 5% in the stratosphere (from 20 to 50 km altitude). In addition, Fig. 3 shows that the amplitudes of natural short-term ozone fluctuations are smaller than 2% at 10 hPa for the time interval shown, since the fluctuations also contain the influence of the random retrieval error. The resulting time series reflect both natural short-term fluctuations and some random retrieval error. The random retrieval error includes the thermal noise on the spectra due to the receiver noise, which propagates into the ozone profiles.
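The fluctuation proxy introduced above, the standard deviation of 8 consecutive, linearly detrended profiles, can be sketched as follows (illustrative names, not the authors' code):

```python
import numpy as np

def detrended_std(series, window=8):
    """Standard deviation of consecutive windows after removing a
    linear trend, as a proxy for short-term fluctuation strength."""
    series = np.asarray(series, dtype=float)
    x = np.arange(window)
    sigmas = []
    for i in range(0, len(series) - window + 1, window):
        chunk = series[i:i + window]
        slope, intercept = np.polyfit(x, chunk, 1)   # linear trend
        residual = chunk - (slope * x + intercept)
        sigmas.append(residual.std())
    return np.array(sigmas)
```

With 30-min sampling, each window spans about 4 h, so detrending removes the slow variation while periods between 1 and roughly 8 h survive in the residual.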
Unfortunately, this contribution cannot be discriminated from the natural fluctuations; nevertheless, we did not find any artificial periodicity in the temporal range of our study (from 1 up to 8 h). Since the mid-latitude stratosphere is known to be quiet during summer, it can be assumed that the observed fluctuations in summer are mainly due to the random retrieval error (thermal receiver noise). We focus our study on the 10 hPa pressure level in the middle stratosphere, where the ozone retrieval is most reliable since the ozone volume mixing ratio is high at 10 hPa and the influence of the water vapour continuum is smaller there than in the lower stratosphere at 50 hPa. The goal is to investigate whether natural short-term ozone fluctuations can exceed the 2% standard deviation level. Such disturbances are believed to occur naturally in the stratosphere, primarily through atmospheric waves propagating from the troposphere into the stratosphere during winter, when the winds are eastward at all altitude levels and the winter stratosphere is more dynamically than radiatively driven (Zhu and Holton 1986; Appenzeller and Holton 1997; Eckermann et al. 1998; Calisesi et al. 2001; Fritts and Alexander 2003; Moustaoui et al. 2003; Hocke et al. 2006; Noguchi et al. 2006; Flury et al. 2009; Chane et al. 2016). The short-term stratospheric ozone fluctuations (\(\sigma _{\rm strato}\)) can also be affected by the tropospheric correction, since the retrieval error of the measurements is larger when the tropospheric correction is larger. This contribution can be estimated; therefore, we have calculated the influence of the tropospheric correction on the retrieved profiles. To this end, the retrieval procedure takes into account that the brightness temperature error depends on the transmission factor (Eq. 3). In Fig.
4, we can observe the effect of the tropospheric transmittance on the random retrieval error of the profile, which is provided by the optimal estimation method (the smoothing error is not considered), at different pressure levels for the period from June 2011 to May 2012. The retrieval error is smaller when the tropospheric transmittance is larger. However, this effect is smaller than 0.02 ppm at 10 hPa. The green lines are the mean values of the retrieval error and of the tropospheric transmittance, and the red lines are the values of the second-degree polynomial regression between the two variables, evaluated at the tropospheric transmittance values. In order to quantify the fluctuations generated by the tropospheric correction (\(\bar{\sigma }_{\rm retrieval}\)), we performed a second-degree polynomial regression (\(\sigma _{\rm retrieval}(t)=p_{1} t^{2}+p_{2}t+p_{3}\)) between the retrieval error (\(\sigma _{\rm retrieval}\)) and the tropospheric transmittance (t). The resulting coefficients (\(p_{1}, p_{2}, p_{3}\)) are evaluated at the mean of 8 consecutive values of the tropospheric transmittance (\(\bar{t}\)) to obtain \(\bar{\sigma }_{\rm retrieval}(\bar{t})=p_{1} \bar{t}^{2}+p_{2}\bar{t}+p_{3}\). Although \(\bar{\sigma }_{\rm retrieval}\) plays essentially no role and could be neglected, we consider it in the calculation of the stratospheric fluctuations \(\left( \sigma _{\rm strato}=\sqrt{\sigma _{\rm total}^{2}-\bar{\sigma }_{\rm retrieval}^{2}}\right)\). Figure 5 shows in the first panel the total short-term stratospheric ozone relative fluctuations contained in the GROMOS data in magenta (\(\sigma _{\rm total}\)), which is covered by the stratospheric relative fluctuations (\(\sigma _{\rm strato}\)) in blue, as they are practically identical, and the relative fluctuations caused by the tropospheric opacity (\(\bar{\sigma }_{\rm retrieval}\)) in red, in per cent at 10 hPa (32 km) for the period from June 2011 to May 2012.
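The quadrature separation \(\sigma _{\rm strato}=\sqrt{\sigma _{\rm total}^{2}-\bar{\sigma }_{\rm retrieval}^{2}}\) can be sketched as follows (array shapes and names are our own assumptions: `transmittance` and `sigma_retrieval` are sampled at the 30-min profile rate, `sigma_total` once per 8-profile window):

```python
import numpy as np

def natural_fluctuations(sigma_total, sigma_retrieval, transmittance, window=8):
    """Subtract (in quadrature) the transmittance-dependent retrieval
    error from the total short-term fluctuations."""
    # 2nd-degree polynomial fit: sigma_retrieval(t) = p1 t^2 + p2 t + p3
    p = np.polyfit(transmittance, sigma_retrieval, 2)
    # evaluate at the mean transmittance of each 8-profile window
    t_bar = np.asarray(transmittance).reshape(-1, window).mean(axis=1)
    sigma_ret_bar = np.polyval(p, t_bar)
    return np.sqrt(sigma_total**2 - sigma_ret_bar**2)
```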
\(\bar{\sigma }_{\rm retrieval}\) is rather small; hence, the effect of the tropospheric transmission on the retrieval noise has only a minor influence on the observed temporal fluctuations of stratospheric ozone. Nevertheless, a random retrieval error is still included in \(\sigma _{\rm strato}\). Therefore, the blue line is the sum of the natural ozone fluctuations and the unknown influence of the random noise on the ozone time series. However, we learn from Fig. 5 that an upper limit of around 2% can be set on this contribution. In addition, there seems to be a temporal evolution of \(\sigma _{\rm strato}\), which is most likely due to natural ozone fluctuations, since the receiver noise is constant in time, usually around 2500 K. The ozone fluctuations are shown in the middle stratosphere because at this level the random retrieval error is about 2%, while the disturbed polar vortex edge often reaches mid-latitudes. Thus, we expect enhanced ozone fluctuations above Switzerland during times of a disturbed polar vortex. We can state from Fig. 5 that the strongest fluctuations occur during December and January, with an increase in \(\sigma _{\rm strato}\) of about 1%. Up to now, the magnitude of short-term ozone fluctuations was unknown. Our study finds that the relative standard deviation of short-term ozone fluctuations in the vicinity of the polar vortex is about 1%. Thus, short-term ozone fluctuations in the mid-stratosphere are quite small even at the polar vortex edge. We obtained similar results for other years as well. The second panel shows the ozone VMR in ppm at 10 hPa (32 km), allowing its behaviour to be followed during the period under assessment. The standard deviations and the ozone VMR displayed in the first and second panels of Fig. 5 have been smoothed in time by a moving average over an interval of 3–4 days. The third panel shows the geopotential height (GPH) at 10 hPa (32 km) from June 2011 to May 2012 from ECMWF reanalysis data.
The stratospheric ozone fluctuations (\(\sigma _{\rm strato}\)) are larger when the GPHs are lower above our location. These lower GPHs are associated with deformations and southward excursions of the polar vortex, which are caused by planetary Rossby wave activity (Calisesi et al. 2001). The fourth panel of Fig. 5 shows the vertical wind from ECMWF reanalysis data. The strong vertical wind oscillations during December and January occur presumably because of planetary Rossby wave breaking. In fact, Chandran et al. (2013) have reported a minor sudden stratospheric warming (SSW) in mid-January 2012. This SSW is also seen in the bottom panel, which shows an increase in the relative fluctuations of potential vorticity at 10 hPa above Bern. It is considered a minor SSW since the zonal mean wind reversal did not reach the 10 hPa (32 km) level. However, we can observe its effect in the short-term stratospheric ozone fluctuations (first panel of Fig. 5) and also in ozone at 10 hPa from ECMWF operational data (Fig. 6). Figure 6 displays plots of potential vorticity, temperature and ozone from ECMWF operational data at 10 hPa for 15 January 2012. From the potential vorticity plot, we know that the polar vortex is shifted southward and that Bern is located inside the polar vortex. We notice an increase of about 1% in the stratospheric ozone fluctuations during winter, when the polar vortex makes its incursions towards the mid-latitudes. Enhanced relative standard deviations, i.e. stronger fluctuations (Fig. 5), are found when GROMOS is measuring inside the polar vortex (Fig. 6). Thus, this minor SSW seems to be the reason for the enhancement of short-term stratospheric ozone fluctuations during January 2012. Nevertheless, the short-term stratospheric ozone fluctuations do not exceed 3% on average. We would have expected stronger amplitudes to occur at the polar vortex edge during times of breaking planetary Rossby waves.
The short-term perturbations in stratospheric ozone were investigated using data recorded by the GROMOS ground-based microwave radiometer at Bern from June 2011 to May 2012. In the present study, the retrieval takes into account the variable noise of the tropospherically corrected spectra. Accordingly, we can estimate the influence of the random retrieval error on the temporal ozone fluctuations. These ozone fluctuations depend only weakly on the tropospheric transmittance. We find that the effect of the tropospheric transmittance on the retrieval error is less than 0.02 ppm at 10 hPa. The contribution to the stratospheric fluctuations comes from the random retrieval error (about 2% at 10 hPa) and from natural short-term ozone fluctuations. We find that during times of a disturbed vortex the short-term ozone fluctuations can reach a standard deviation of about 1% superposed on the random retrieval error. This is a new result which quantifies the magnitude of short-term ozone fluctuations in the wintertime mid-stratosphere at mid-latitudes.

Fig. 1 Example of a retrieved (blue line) and an a priori (green line) ozone VMR profile, AVKs (grey and colour lines in the middle panel), the measurement response (MR) (red line in the middle panel), vertical resolution (cyan line in the right panel) and altitude peak (magenta line in the right panel) of the GROMOS retrieval version 111 for 28 August 2011 with an integration time of 30 min. The x label "altitude" of the right panel also stands for the vertical resolution.

Fig. 2 Binned spectra of a clear-sky case and a cloudy-sky case. The first row shows the brightness temperatures for both cases, whereas the second row shows the spectra corrected for the tropospheric contribution. The third row shows the brightness temperature error for both cases.

Fig. 3 Ozone VMR (ppm) is represented by the blue line with a time resolution of 30 min as a function of day and day fraction in June 2011. The mean of 8 consecutive ozone values is the red line, and the red area is the standard deviation calculated every 8 values after removing its linear trend at 10 hPa, for the time interval of nearly 2 days at the beginning of June 2011. The second panel shows the tropospheric transmittance observed for the same interval as the upper panel.

Fig. 4 Scatter plot of the tropospheric transmittance and retrieval error at different pressure levels during the time interval from June 2011 to May 2012. The green lines are the mean values of both variables, and the red lines are the values of the second-degree polynomial regression of both variables evaluated at the tropospheric transmittance values.

Fig. 5 Standard deviation of relative short-term ozone fluctuations, ozone VMR, geopotential height, vertical wind and relative fluctuation of potential vorticity at 10 hPa above Bern from June 2011 to May 2012. The standard deviations and the ozone VMR have been smoothed in time by a moving average over an interval of 3–4 days. The magenta line of \(\sigma _{\rm total}\) is just below the blue line of \(\sigma _{\rm strato}\).

Fig. 6 ECMWF plots of potential vorticity, temperature and ozone at 10 hPa for the northern hemisphere, for 15 January 2012. The black dot shows the location of Bern, Switzerland.

References

Appenzeller C, Holton JR (1997) Tracer lamination in the stratosphere: a global climatology. J Geophys Res Atmos 102(D12):13555–13569. https://doi.org/10.1029/97JD00066

Calisesi Y, Wernli H, Kämpfer N (2001) Midstratospheric ozone variability over Bern related to planetary wave activity during the winters 1994–1995 to 1998–1999. J Geophys Res Atmos 106(D8):7903–7916. https://doi.org/10.1029/2000JD900710

Chandran A, Garcia RR, Collins RL, Chang LC (2013) Secondary planetary waves in the middle and upper atmosphere following the stratospheric sudden warming event of January 2012. Geophys Res Lett 40(9):1861–1867.
https://doi.org/10.1002/grl.50373

De Mazière M, Thompson AM, Kurylo MJ, Wild J, Bernhard G, Blumenstock T, Hannigan J, Lambert J-C, Leblanc T, McGee TJ, Nedoluha G, Petropavlovskikh I, Seckmeyer G, Simon PC, Steinbrecht W, Strahan S, Sullivan JT (2017) The Network for the Detection of Atmospheric Composition Change (NDACC): history, status and perspectives. Atmos Chem Phys Discuss 2017:1–40. https://doi.org/10.5194/acp-2017-402

Eckermann SD, Gibson-Wilde DE, Bacmeister JT (1998) Gravity wave perturbations of minor constituents: a parcel advection methodology. J Atmos Sci 55(24):3521–3539. https://doi.org/10.1175/1520-0469(1998)055%3c3521:GWPOMC%3e2.0.CO;2

Eriksson P, Jiménez C, Buehler SA (2005) Qpack, a general tool for instrument simulation and retrieval work. J Quant Spectrosc Radiat Transf 91(1):47–64. https://doi.org/10.1016/j.jqsrt.2004.05.050

Eriksson P, Buehler SA, Davis CP, Emde C, Lemke O (2011) ARTS, the atmospheric radiative transfer simulator, version 2. J Quant Spectrosc Radiat Transf 112(10):1551–1558. https://doi.org/10.1016/j.jqsrt.2011.03.001

Eyring V, Dameris M, Grewe V, Langbein I, Kouker W (2003) Climatologies of subtropical mixing derived from 3D models. Atmos Chem Phys 3(4):1007–1021. https://doi.org/10.5194/acp-3-1007-2003

Flury T, Hocke K, Haefele A, Kämpfer N, Lehmann R (2009) Ozone depletion, water vapor increase, and PSC generation at midlatitudes by the 2008 major stratospheric warming. J Geophys Res Atmos. https://doi.org/10.1029/2009JD011940

Flury T, Hocke K, Kämpfer N, Wu DL (2010) Enhancements of gravity wave amplitudes at midlatitudes during sudden stratospheric warmings in 2008. Atmos Chem Phys Discuss 10:29971–29995. https://doi.org/10.5194/acpd-10-29971-2010

Fritts DC, Alexander MJ (2003) Gravity wave dynamics and effects in the middle atmosphere. Rev Geophys.
https://doi.org/10.1029/2001RG000106

Hocke K, Kämpfer N, Feist DG, Calisesi Y, Jiang JH, Chabrillat S (2006) Temporal variance of lower mesospheric ozone over Switzerland during winter 2000/2001. Geophys Res Lett. https://doi.org/10.1029/2005GL025496

Ingold T, Peter R, Kämpfer N (1998) Weighted mean tropospheric temperature and transmittance determination at millimeter-wave frequencies for ground-based applications. Radio Sci 33(4):905–918. https://doi.org/10.1029/98RS01000

Kämpfer N (1995) Microwave remote sensing of the atmosphere in Switzerland. Opt Eng 34(8):2413–2424. https://doi.org/10.1117/12.205666

Krüger K, Langematz U, Grenfell JL, Labitzke K (2005) Climatological features of stratospheric streamers in the FUB-CMAM with increased horizontal resolution. Atmos Chem Phys 5(2):547–562. https://doi.org/10.5194/acp-5-547-2005

Manney GL, Michelsen HA, Bevilacqua RM, Gunson MR, Irion FW, Livesey NJ, Oberheide J, Riese M, Russell JM, Toon GC, Zawodny JM (2001) Comparison of satellite ozone observations in coincident air masses in early November 1994. J Geophys Res Atmos 106(D9):9923–9943. https://doi.org/10.1029/2000JD900826

Ming FC, Vignelles D, Jegou F, Berthet G, Renard J-B, Gheusi F, Kuleshov Y (2016) Gravity-wave effects on tracer gases and stratospheric aerosol concentrations during the 2013 ChArMEx campaign. Atmos Chem Phys Discuss. https://doi.org/10.5194/acp-2015-889

Moreira L, Hocke K, Eckert E, von Clarmann T, Kämpfer N (2015) Trend analysis of the 20-year time series of stratospheric ozone profiles observed by the GROMOS microwave radiometer at Bern. Atmos Chem Phys 15(19):10999–11009. https://doi.org/10.5194/acp-15-10999-2015

Moustaoui M, Teitelbaum H, Valero FPJ (2003) Ozone laminae inside the Antarctic vortex produced by poleward filaments. Q J R Meteorol Soc 129(594):3121–3136.
https://doi.org/10.1256/qj.03.19

Noguchi K, Imamura T, Oyama KI, Bodeker GE (2006) A global statistical study on the origin of small-scale ozone vertical structures in the lower stratosphere. J Geophys Res Atmos. https://doi.org/10.1029/2006JD007232

Offermann D, Grossmann K-U, Barthol P, Knieling P, Riese M, Trant R (1999) Cryogenic infrared spectrometers and telescopes for the atmosphere (CRISTA) experiment and middle atmosphere variability. J Geophys Res Atmos 104(D13):16311–16325. https://doi.org/10.1029/1998JD100047

Parrish A (1994) Millimeter-wave remote sensing of ozone and trace constituents in the stratosphere. Proc IEEE 82(12):1915–1929. https://doi.org/10.1109/5.338079

Peter R (1997) The ground-based millimeter-wave ozone spectrometer GROMOS. IAP research report, University of Bern, Bern, Switzerland

Rodgers CD (1976) Retrieval of atmospheric temperature and composition from remote measurements of thermal radiation. Rev Geophys 14(4):609–624. https://doi.org/10.1029/RG014i004p00609

Waugh DW (1993) Subtropical stratospheric mixing linked to disturbances in the polar vortices. Nature 365(6446):535–537

Yamashita C, Liu HL, Chu X (2010) Gravity wave variations during the 2009 stratospheric sudden warming as revealed by ECMWF-T799 and observations. Geophys Res Lett. https://doi.org/10.1029/2010GL045437

Zhu X, Holton JR (1986) Photochemical damping of inertio-gravity waves. J Atmos Sci 43:2578–2584. https://doi.org/10.1175/1520-0469

Authors' contributions

KH performed the retrieval of the GROMOS measurements. LM carried out the data analysis and prepared the manuscript. NK is the principal investigator of the radiometry project. All authors have contributed to the interpretation of the results.

Acknowledgements

This work was supported by the Swiss National Science Foundation under Grant 200020-160048 and the MeteoSwiss GAW Project "Fundamental GAW parameters measured by microwave radiometry".
Author information

Institute of Applied Physics, University of Bern, Bern, Switzerland
Lorena Moreira, Klemens Hocke & Niklaus Kämpfer

Oeschger Centre for Climate Change Research, University of Bern, Bern, Switzerland
Klemens Hocke & Niklaus Kämpfer

Correspondence to Lorena Moreira.

Cite this article: Moreira, L., Hocke, K. & Kämpfer, N. Short-term stratospheric ozone fluctuations observed by GROMOS microwave radiometer at Bern. Earth Planets Space 70, 8 (2018). https://doi.org/10.1186/s40623-017-0774-4

Keywords: Stratospheric ozone · Atmospheric variability
GAUGE AND HIGGS BOSONS

New Heavy Bosons (${{\boldsymbol W}^{\,'}}$, ${{\boldsymbol Z}^{\,'}}$, leptoquarks, etc.), Searches for

We list here various limits on charged and neutral heavy vector bosons (other than ${{\mathit W}}$'s and ${{\mathit Z}}$'s), heavy scalar bosons (other than Higgs bosons), vector or scalar leptoquarks, and axigluons. The latest unpublished results are described in the ``${{\mathit W}^{\,'}}$ Searches'' and ``${{\mathit Z}^{\,'}}$ Searches'' reviews. For recent searches on scalar bosons which could be identified as Higgs bosons, see the listings in the Higgs boson section.

${{\mathit W}^{\,'}}$-Boson Searches

MASS LIMITS for ${{\mathit W}^{\,'}}$ (Heavy Charged Vector Boson Other Than ${{\mathit W}}$) in Hadron Collider Experiments: $> 5200$ GeV, CL=95.0%
${{\mathit W}_{{R}}}$ (Right-Handed ${{\mathit W}}$ Boson) MASS LIMITS: $>715$ GeV, CL=90.0%
Limit on ${{\mathit W}_{{L}}}-{{\mathit W}_{{R}}}$ Mixing Angle $\zeta $

${{\mathit Z}^{\,'}}$-Boson Searches

MASS LIMITS for ${{\boldsymbol Z}^{\,'}}$ (Heavy Neutral Vector Boson Other Than ${{\boldsymbol Z}}$)
Limits for ${{\mathit Z}}{}^{′}_{{\mathrm {SM}}}$: $>4.500 \times 10^{3}$ GeV, CL=95.0%
Limits for ${{\mathit Z}}_{\mathit LR}$: $>630$ GeV, CL=95.0%
Limits for ${{\mathit Z}_{{\chi}}}$: $>4.100 \times 10^{3}$ GeV, CL=95.0%
Limits for ${{\mathit Z}_{{\psi}}}$: $> 3900$ GeV, CL=95.0%
Limits for ${{\mathit Z}_{{\eta}}}$: $>3.900 \times 10^{3}$ GeV, CL=95.0%
Limits for other ${{\mathit Z}^{\,'}}$: $>2.900 \times 10^{3}$ GeV, CL=95.0%
Searches for ${{\mathit Z}^{\,'}}$ with Lepton-Flavor-Violating decays
Indirect Constraints on Kaluza-Klein Gauge Bosons

Leptoquarks

MASS LIMITS for Leptoquarks from Pair Production: $>1.050 \times 10^{3}$ GeV, CL=95.0%
MASS LIMITS for Leptoquarks from Single Production: $>1.755 \times 10^{3}$ GeV, CL=95.0%
Indirect Limits for Leptoquarks

MASS LIMITS for Diquarks: $> 6000$ GeV, CL=95.0%
MASS LIMITS for ${{\mathit g}_{{A}}}$ (axigluon) and Other Color-Octet Gauge Bosons: $> 6100$ GeV, CL=95.0%
MASS LIMITS for Color-Octet Scalar Bosons
${{\mathit X}^{0}}$ (Heavy Boson) Searches in ${{\mathit Z}}$ Decays
MASS LIMITS for a Heavy Neutral Boson Coupling to ${{\mathit e}^{+}}{{\mathit e}^{-}}$
Search for ${{\mathit X}^{0}}$ Resonance in ${{\mathit e}^{+}}{{\mathit e}^{-}}$ Collisions
Search for ${{\mathit X}^{0}}$ Resonance in ${{\mathit e}}{{\mathit p}}$ Collisions
Search for ${{\mathit X}^{0}}$ Resonance in ${{\mathit e}^{+}}$ ${{\mathit e}^{-}}$ $\rightarrow$ ${{\mathit X}^{0}}{{\mathit \gamma}}$
Search for ${{\mathit X}^{0}}$ Resonance in ${{\mathit Z}}$ $\rightarrow$ ${{\mathit f}}{{\overline{\mathit f}}}{{\mathit X}^{0}}$
Search for ${{\mathit X}^{0}}$ Resonance in ${{\mathit W}}{{\mathit X}^{0}}$ final state
Search for ${{\mathit X}^{0}}$ Resonance in Quarkonium Decays
Search for ${{\mathit X}^{0}}$ Resonance in ${{\mathit H}{(125)}}$ Decays
SciPost Physics Vol. 1 issue 1 (September – October 2016)

Quantum quenches to the attractive one-dimensional Bose gas: exact results
Lorenzo Piroli, Pasquale Calabrese, Fabian H. L. Essler
SciPost Phys. 1, 001 (2016) · published 14 September 2016

We study quantum quenches to the one-dimensional Bose gas with attractive interactions in the case when the initial state is an ideal one-dimensional Bose condensate. We focus on properties of the stationary state reached at late times after the quench. This displays a finite density of multi-particle bound states, whose rapidity distribution is determined exactly by means of the quench action method. We discuss the relevance of the multi-particle bound states for the physical properties of the system, computing in particular the stationary value of the local pair correlation function $g_2$.

Tapered amplifier laser with frequency-shifted feedback
A. Bayerle, S. Tzanova, P. Vlaar, B. Pasquiou, F. Schreck
SciPost Phys. 1, 002 (2016) · published 22 October 2016

We present a frequency-shifted feedback (FSF) laser based on a tapered amplifier. The laser operates as a coherent broadband source with up to 370 GHz spectral width and 2.3 µs coherence time. If the FSF laser is seeded by a continuous-wave laser, a frequency comb spanning the output spectrum appears in addition to the broadband emission. The laser has an output power of 280 mW and a center wavelength of 780 nm. The ease and flexibility of use of tapered amplifiers makes our FSF laser attractive for a wide range of applications, especially in metrology.

Time evolution during and after finite-time quantum quenches in the transverse-field Ising chain
Tatjana Puskarov, Dirk Schuricht

We study the time evolution in the transverse-field Ising chain subject to quantum quenches of finite duration, i.e. a continuous change in the transverse magnetic field over a finite time.
Specifically, we consider the dynamics of the total energy, one- and two-point correlation functions and Loschmidt echo during and after the quench as well as their stationary behaviour at late times. We investigate how different quench protocols affect the dynamics and identify universal properties of the relaxation.

Quantum-optical magnets with competing short- and long-range interactions: Rydberg-dressed spin lattice in an optical cavity
Jan Gelhausen, Michael Buchhold, Achim Rosch, Philipp Strack

The fields of quantum simulation with cold atoms [1] and quantum optics [2] are currently being merged. In a set of recent pathbreaking experiments with atoms in optical cavities [3,4], lattice quantum many-body systems with both a short-range interaction and a strong interaction potential of infinite range, mediated by a quantized optical light field, were realized. A theoretical modelling of these systems faces considerable complexity at the interface of: (i) spontaneous symmetry-breaking and emergent phases of interacting many-body systems with a large number of atoms $N\rightarrow\infty$, (ii) quantum optics and the dynamics of fluctuating light fields, and (iii) non-equilibrium physics of driven, open quantum systems. Here we propose what is possibly the simplest quantum-optical magnet with competing short- and long-range interactions, in which all three elements can be analyzed comprehensively: a Rydberg-dressed spin lattice [5] coherently coupled to a single photon mode. Solving a set of coupled even-odd sublattice Master equations for atomic spin and photon mean-field amplitudes, we find three key results. (R1): Superradiance and a coherent photon field can coexist with spontaneously broken magnetic translation symmetry. The latter is induced by the short-range nearest-neighbor interaction from weakly admixed Rydberg levels.
(R2): This broken even-odd sublattice symmetry leaves its imprint in the light via a novel peak in the cavity spectrum beyond the conventional polariton modes. (R3): The combined effect of atomic spontaneous emission, drive, and interactions can lead to phases with anomalous photon number oscillations. Extensions of our work include nano-photonic crystals coupled to interacting atoms and multi-mode photon dynamics in Rydberg systems.

Exactly solvable quantum few-body systems associated with the symmetries of the three-dimensional and four-dimensional icosahedra
T. Scoquart, J. J. Seaward, S. G. Jackson, M. Olshanii
The purpose of this article is to demonstrate that non-crystallographic reflection groups can be used to build new solvable quantum particle systems. We explicitly construct a one-parametric family of solvable four-body systems on a line, related to the symmetry of a regular icosahedron: in two distinct limiting cases the system is constrained to a half-line. We repeat the program for a 600-cell, a four-dimensional generalization of the regular three-dimensional icosahedron.

Dispersive hydrodynamics of nonlinear polarization waves in two-component Bose-Einstein condensates
T. Congy, A. M. Kamchatnov, N. Pavloff
We study one-dimensional mixtures of two-component Bose-Einstein condensates in the limit where the intra-species and inter-species interaction constants are very close. Near the mixing-demixing transition the polarization and the density dynamics decouple. We study the nonlinear polarization waves, show that they obey a universal (i.e., parameter-free) dynamical description, identify a new type of algebraic soliton, explicitly write simple wave solutions, and study the Gurevich-Pitaevskii problem in this context.
Role of fluctuations in the phase transitions of coupled plaquette spin models of glasses
Giulio Biroli, Charlotte Rulquin, Gilles Tarjus, Marco Tarzia
We study the role of fluctuations on the thermodynamic glassy properties of plaquette spin models, more specifically on the transition involving an overlap order parameter in the presence of an attractive coupling between different replicas of the system. We consider both short-range fluctuations associated with the local environment on Bethe lattices and long-range fluctuations that distinguish Euclidean from Bethe lattices with the same local environment. We find that the phase diagram in the temperature-coupling plane is very sensitive to the former but, at least for the $3$-dimensional (square pyramid) model, appears qualitatively or semi-quantitatively unchanged by the latter. This surprising result suggests that the mean-field theory of glasses provides a reasonable account of the glassy thermodynamics of models otherwise described in terms of the kinetically constrained motion of localized defects and taken as a paradigm for the theory of dynamic facilitation. We discuss the possible implications for the dynamical behavior.

Correlations of zero-entropy critical states in the XXZ model: integrability and Luttinger theory far from the ground state
R. Vlijm, I. S. Eliëns, J.-S. Caux
Pumping a finite energy density into a quantum system typically leads to 'melted' states characterized by exponentially-decaying correlations, as is the case for finite-temperature equilibrium situations. An important exception to this rule are states which, while being at high energy, maintain a low entropy. Such states can interestingly still display features of quantum criticality, especially in one dimension. Here, we consider high-energy states in anisotropic Heisenberg quantum spin chains obtained by splitting the ground state's magnon Fermi sea into separate pieces.
Using methods based on integrability, we provide a detailed study of static and dynamical spin-spin correlations. These carry distinctive signatures of the Fermi sea splittings, which would be observable in eventual experimental realizations. Going further, we employ a multi-component Tomonaga-Luttinger model in order to predict the asymptotics of static correlations. For this effective field theory, we fix all universal exponents from energetics, and all non-universal correlation prefactors using finite-size scaling of matrix elements. The correlations obtained directly from integrability and those emerging from the Luttinger field theory description are shown to be in extremely good correspondence, as expected, for the large-distance asymptotics, but surprisingly also for the short-distance behavior. Finally, we discuss the description of dynamical correlations from a mobile impurity model, and clarify the relation of the effective field theory parameters to the Bethe Ansatz solution.

A conformal bootstrap approach to critical percolation in two dimensions
Marco Picco, Sylvain Ribault, Raoul Santachiara
We study four-point functions of critical percolation in two dimensions, and more generally of the Potts model. We propose an exact ansatz for the spectrum: an infinite, discrete and non-diagonal combination of representations of the Virasoro algebra. Based on this ansatz, we compute four-point functions using a numerical conformal bootstrap approach. The results agree with Monte-Carlo computations of connectivities of random clusters.
Detecting a many-body mobility edge with quantum quenches
Piero Naldesi, Elisa Ercolessi, Tommaso Roscilde
The many-body localization (MBL) transition is a quantum phase transition involving highly excited eigenstates of a disordered quantum many-body Hamiltonian, which evolve from "extended/ergodic" (exhibiting extensive entanglement entropies and fluctuations) to "localized" (exhibiting area-law scaling of entanglement and fluctuations). The MBL transition can be driven by the strength of disorder in a given spectral range, or by the energy density at fixed disorder - if the system possesses a many-body mobility edge. Here we propose to explore the latter mechanism by using "quantum-quench spectroscopy", namely via quantum quenches of variable width which prepare the state of the system in a superposition of eigenstates of the Hamiltonian within a controllable spectral region. Studying numerically a chain of interacting spinless fermions in a quasi-periodic potential, we argue that this system has a many-body mobility edge; and we show that its existence translates into a clear dynamical transition in the time evolution immediately following a quench in the strength of the quasi-periodic potential, as well as a transition in the scaling properties of the quasi-stationary state at long times. Our results suggest a practical scheme for the experimental observation of many-body mobility edges using cold-atom setups.

SciPost Physics is published by the SciPost Foundation under the journal doi: 10.21468/SciPostPhys and ISSN 2542-4653. SciPost Physics has been awarded the DOAJ Seal from the Directory of Open Access Journals. All content in SciPost Physics is deposited and permanently preserved in the CLOCKSS archive.
Computer Science > Machine Learning

Title: End-to-End Optimized Arrhythmia Detection Pipeline using Machine Learning for Ultra-Edge Devices
Authors: Sideshwar J B (1), Sachin Krishan T (1), Vishal Nagarajan (1), Shanthakumar S (2), Vineeth Vijayaraghavan (2) ((1) SSN College of Engineering, Chennai, India, (2) Solarillion Foundation, Chennai, India)

Abstract: Atrial fibrillation (AF) is the most prevalent cardiac arrhythmia worldwide, with 2% of the population affected. It is associated with an increased risk of strokes, heart failure and other heart-related complications. Monitoring at-risk individuals and detecting asymptomatic AF could result in considerable public health benefits, as individuals with asymptomatic AF could take preventive measures with lifestyle changes. With the increasing affordability of wearables, personalized health care is becoming more accessible. These personalized healthcare solutions require accurate classification of bio-signals while being computationally inexpensive. By making inferences on-device, we avoid issues inherent to cloud-based systems such as latency and network connection dependency. We propose an efficient pipeline for real-time Atrial Fibrillation Detection with high accuracy that can be deployed in ultra-edge devices. The feature engineering employed in this research catered to optimizing the resource-efficient classifier used in the proposed pipeline, which was able to outperform the best performing standard ML model by $10^5\times$ in terms of memory footprint with a mere trade-off of 2% classification accuracy. We also obtain higher accuracy of approximately 6% while consuming 403$\times$ lesser memory and being 5.2$\times$ faster compared to the previous state-of-the-art (SoA) embedded implementation.
Comments: 8 pages, 9 figures, Accepted at 20th IEEE International Conference on Machine Learning and Applications (ICMLA) 2021
Subjects: Machine Learning (cs.LG); Signal Processing (eess.SP)
Cite as: arXiv:2111.11789 [cs.LG] (or arXiv:2111.11789v1 [cs.LG] for this version)
From: Sideshwar J B
[v1] Tue, 23 Nov 2021 11:06:27 GMT (1040kb,D)
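As a toy illustration of the idea behind such lightweight on-device detectors (a hedged sketch with synthetic data and an invented threshold, not the authors' pipeline, features or dataset): AF manifests as irregular beat-to-beat (RR) intervals, so even a single cheap heart-rate-variability feature can separate a regular rhythm from a highly irregular one.

```python
# Hypothetical illustration, not the paper's pipeline: AF is marked by irregular
# RR intervals, so a one-feature threshold classifier separates synthetic
# "regular" from "irregular" rhythms at negligible memory cost.
import random

def rmssd(rr):
    """Root mean square of successive RR-interval differences (a standard HRV feature)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

random.seed(0)
sinus = [0.80 + random.gauss(0, 0.01) for _ in range(200)]  # steady ~75 bpm rhythm
af    = [0.80 + random.gauss(0, 0.15) for _ in range(200)]  # highly irregular rhythm

def is_af(rr):
    return rmssd(rr) > 0.05  # hypothetical threshold, in seconds
```

A real detector would, as the abstract describes, trade engineered features against classifier size and latency; the noise levels and the 0.05 s threshold here are purely illustrative.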
Keldysh Institute of Applied Mathematics

The Keldysh Institute of Applied Mathematics (Russian: Институт прикладной математики им. М. В. Келдыша) is a research institute specializing in computational mathematics. It was established to solve computational tasks related to government programs of nuclear and fusion energy, space research and missile technology. The Institute is part of the Department of Mathematical Sciences of the Russian Academy of Sciences. The main direction of activity of the institute is the use of computer technology to solve complex scientific and technical problems of practical importance. Since 2016, the development of mathematical and computational methods for biological research, as well as the direct solution of computational biology problems with such methods, has also been included in the institute's scientific activities.

KIAM — full name: Федеральный исследовательский центр Институт прикладной математики им. М. В. Келдыша Российской академии наук (Federal Research Centre Keldysh Institute of Applied Mathematics of the Russian Academy of Sciences)
• Type: Public
• Established: 1953
• Principal: Alexander Aptekarev
• Postgraduates: 155
• Doctoral students: 91
• Address: Miusskaya pl., 4, 125047, Moscow, Russia (55°46′37.57″N 37°35′24.8″E)
• Affiliations: Russian Academy of Sciences
• Website: www.keldysh.ru

Scientific activity

Nuclear physics

Theoretical physicist Yakov Borisovich Zel'dovich headed one of the departments placed in charge of the theoretical aspects of the work on the creation of nuclear and thermonuclear weapons. Alexander Andreevich Samarskii performed the first realistic calculations of the macrokinetics of the chain reaction of a nuclear explosion, which led to practically important estimates of the power of nuclear weapons. In relation to nuclear energy, the institute was also involved in the modelling of neutron transport and nuclear reactions. In particular, E. Kuznetsov is known for his work on the theory of nuclear reactors.
Currently, such work at KIAM continues in the field of plasma physics and controlled thermonuclear fusion, begun under the leadership of S. P. Kurdyumov, A. A. Samarskii and Yu. P. Popov.

Cosmonautics

D. E. Okhotsimsky directed the work of the department created to study the dynamics of space flight. In 1966, the department was reorganized into the Ballistic Centre.[1] The Ballistic Centre carried out the calculations of optimal orbital trajectories and in-flight corrections for all space flights, from uncrewed interplanetary and lunar vehicles to the crewed "Soyuz" spacecraft and the orbital stations "Salyut" and "Mir". The institute took an active part in the creation of the Soviet space shuttle "Buran". KIAM continues to engage in Russian space projects. Current research is connected with:
• development of systems for real-time control and navigation of space vehicles using the global satellite navigation systems GPS and GLONASS;
• exploration of prospects for further interplanetary missions using electric rocket engines;
• participation in such projects as "RadioAstron" and Fobos-Grunt.

Mathematics and Mathematical Modelling

One of the greatest mathematicians of the twentieth century, I. M. Gelfand, headed the Department of Heat Transmission before his departure for the United States in 1989. He carried out fundamental work on functional analysis, algebra and topology. A. N. Tikhonov also initially worked in these areas of mathematics. However, Tikhonov is better known for his work of a more applied orientation, such as methods for solving ill-posed problems (Tikhonov regularization). Tikhonov also created the theory of differential equations with a small parameter at the highest derivative. The general theory of stability of difference schemes was developed by A. A. Samarskii. Samarskii considered mathematical modelling an independent scientific discipline. S. P. Kurdyumov created a whole scientific school in nonlinear dynamics and synergetics in Russia.

Currently, the existing arsenal of numerical methods is updated and improved in response to the growing complexity of the models and the capabilities of modern supercomputers. Scientists of KIAM develop grid methods for solving computational problems, which has led, in particular, to the creation of the declarative language "Norma".[2]

Computers and Programming

The first computations were still carried out on "Mercedes" mechanical calculators by a large staff of human computers. Currently, work is underway to create a distributed computing system based on combining several supercomputers, which are used for grid technology, and a specialized operating system (DVM) has been developed.

Robotics

Currently, work on robotics is carried out in:
• the Group of V. E. Pavlovsky of the Sector under the guidance of A. K. Platonov (virtual football[3]),
• the Group of S. M. Sokolov (vision systems),
• the Group of V. E. Pryanishnikova (autonomous tracked vehicles).

Computational biology

Since 2016, the sphere of interest of KIAM also includes computational biology problems, which are solved at IMPB RAS – Branch of KIAM RAS in Pushchino.

Specialized research projects

• The project "RadioAstron"
• The project "Norma"[2]
• The project "DVM"
• The project "GNS"
• The project "grid" (multiprocessor computations)
• The project "Virtual football"[3]
• International Scientific Optical Network (ISON)

Teaching and social activities

Most of the leading employees of KIAM worked part-time as professors at Moscow State University and the Moscow Institute of Physics and Technology. A. N. Tikhonov was the organizer and first Dean of the Faculty of Computational Mathematics and Cybernetics. I. M. Gelfand was engaged in the mathematical education of secondary school pupils.

History

Milestones

The institute is located in Moscow, Russia. In 1978, it was named after Mstislav Keldysh.
The institute was created in 1966 when it split from the Steklov Institute of Mathematics. Already as the Department of Applied Mathematics of the Steklov Institute it had conducted outstanding research in the field of space exploration: in 1953 it developed the method of ballistic spacecraft descent that was used on 12 April 1961 for Yuri Gagarin's return to the Earth, and in 1957 the orbit of Sputnik 1 was calculated there using computer processing of optical observation data.[4] As a result of the reorganization of RAS in 2015–2016,[5] the Institute of Mathematical Problems of Biology became a branch of KIAM.

Principals

• 1953–1978 Keldysh, Mstislav Vsevolodovich, President of the USSR Academy of Sciences, mathematician.
• 1978–1989 Tikhonov, Andrey Nikolayevich, Academician of RAS, mathematician.
• 1989–1999 Kurdyumov, Sergei Pavlovich, Corresponding Member of the Russian Academy of Sciences, mathematician.
• 1999–2008 Popov, Yuriy Petrovich, Corresponding Member of the Russian Academy of Sciences, mathematician.
• 2008–2015 Chetverushkin, Boris Nikolaevich, Academician of RAS, mathematician.

Organizations that were separated out

• 1955 — Dorodnitsyn Computing Centre.
• 1965 — Space Research Institute, established on the basis of one of the departments of KIAM to develop scientific space flight programs.
• 1990 — Institute for Mathematical Modelling, based on the department of A. A. Samarskii.

Famous faculty and alumni

• Gelfand, Israel Moiseevich, Academician of the USSR Academy of Sciences, mathematician of world renown.
• Godunov, Sergei Konstantinovich, Academician of the USSR Academy of Sciences, mathematician of world renown.
• Lyapunov, Alexey Andreevich, Corresponding Member of the USSR Academy of Sciences, one of the founders of cybernetics in the USSR.
• Okhotsimsky, Dmitry Yevgenyevich, Academician of RAS, cosmonautics and robotics.
• Eneev, Timur Magometovich, Academician of RAS, cosmonautics and cosmogony.
• Yablonsky, Sergey Vsevolodovich, Corresponding Member of RAS, one of the founders of cybernetics in the USSR.
• Yanenko, Nikolai Nikolaevich, Academician of the USSR Academy of Sciences, mathematician and engineer.
• Zel'dovich, Yakov Borisovich, Academician of the USSR Academy of Sciences, nuclear physics and astrophysics.

References

1. (in Russian) Ballistic centre of KIAM
2. (in Russian) Programming languages and systems "Norma"
3. (in Russian) The project "Virtual football" of KIAM RAS
4. (in Russian) 50th anniversary of the Institute — Applied celestial mechanics. Archived 30 September 2007 at the Wayback Machine
5. (in Russian) The order on the accession of IMPB to KIAM

External links

• Institute website
Why does basicity of group 15 hydrides decrease down the group?

In my textbook it is written that the order of basic strength of pnictogen hydrides is $$\ce{NH3 > PH3 > AsH3 > SbH3 > BiH3}$$ I tried but could not find any explanation as to why this happens. What is the explanation?

inorganic-chemistry periodic-trends pnictogen
asked by Soham; edited by Gaurang Tandon

Each of these molecules has a pair of electrons in an orbital - this is termed a "lone pair" of electrons. It is the lone pair of electrons that makes these molecules nucleophilic or basic. As you move down the column from nitrogen to bismuth, you are placing your outermost shell of electrons, including the lone pair, in a larger and more diffuse orbital (the nitrogen lone pair is contained in the n=2 shell, while the bismuth lone pair is in the n=6 shell). As the electron density of the lone pair is spread over a greater volume and is consequently more diffuse, the lone pair of electrons becomes less nucleophilic, less basic.

Comment: But the greater the EN of the central atom, the lesser would be its tendency to donate electrons, right?

Response to comment: The ENs of the central atoms are N (3.04), P (2.19), As (2.18), Sb (2.05), Bi (2.02). All of these atoms, except for nitrogen, have similar ENs and I think the electron density argument is valid for them. However, in the case of nitrogen and phosphorus there is a significant EN difference that would tend to argue in the direction opposite to my answer. However there is another important difference between N and P. The lone pair in ammonia is in an sp3 orbital. In all of the other cases the central atom is essentially unhybridized (~90° H-X-H angles) and the lone pair exists in an s orbital. Therefore the lone pair electron density in ammonia (being in an sp3 orbital) is effectively increased compared to phosphine where the lone pair is in an s orbital.
So even though the EN difference between N and P is significant, when hybridization differences of the central atom are taken into account the electron density argument still explains the trend in basicity.

ron

The hydrides of the nitrogen family have one lone pair of electrons on their central atom. Therefore, they act as Lewis bases. As we go down the group, the basic character of these hydrides decreases. The nitrogen atom has the smallest size among these hydrides; therefore the lone pair is concentrated in a small region and the electron density on it is the maximum. Consequently, its electron-releasing tendency is maximum. As the size of the central atom increases down the family, the electron density decreases. As a result, the electron-donating capacity, i.e. the basic strength, decreases down the group.

Saroj

Realise that there is a notable drop in basicity from nitrogen to phosphorus and then a slow and continuous further diminishing. The notable drop is due to the difference of the molecular structures of ammonia and phosphane. As I answered elsewhere on this network, the 'ideal' bonding situation from an orbital point of view would be to just use p-orbitals to bond and leave the lone pair in an s-type orbital which generally has a lower energy. Doing this allows for a greater σ bonding stabilisation. And that is what all pnictogens below phosphorus (and phosphorus to a certain extent, although the bond angle is still $3^\circ$ off) do: their lone pair can be considered a pure s-type orbital at first approximation. Ammonia, however, has space constraints since nitrogen is so much smaller than phosphorus and all the other pnictogens. If it were to assume $90^\circ$ bond angles, the hydrogen atoms would end up too close together, resulting in steric strain.
Thus, nitrogen moves the hydrogens apart by not bonding with pure p orbitals but with $\mathrm{sp^3}$ hybrid orbitals (in nitrogen's case, the hybridisation is actually very close to $25~\%$ s-character and thus to $\mathrm{sp^3}$!). This means that the lone pair must also be localised in an $\mathrm{sp^3}$ type orbital, which in turn means that it is not spherical and diffuse but 'points in a direction'. This pointing allows for acidic hydrogens to easily attach to it and thus explains nitrogen's and ammonia's high basicity. I would also like to highlight an answer of Martin on the topic of $\ce{NH3}$ versus $\ce{PH3}$ hybridisation. Further down the group, i.e. behind phosphorus, a simple diffuseness argument is sufficient to explain the slightly reducing basicity. The higher the period, the more populated orbitals lie beneath the s-type lone pair, thus the further out it is (also the greater the atom becomes). And the more diffuse a lone pair is, the less likely it becomes to participate in bonding or to be basic in any way.

From a Born-Haber cycle point of view, nitrogen forms much stronger bonds with hydrogen than anything below it, and that enhances the proton acceptor capability of ammonia relative to other G15 hydrides. It's the same reason fluoride is a much stronger base than other halide ions. The molecular orbital explanation is that period 2 elements with their compact valence orbitals have good overlap with the hydrogen $1s$ orbital.

Oscar Lanzi
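The electronegativity argument in the first answer above can be checked numerically: using the Pauling values quoted there, the N→P step is far larger than any other consecutive step, which is exactly why only ammonia needs the additional hybridization argument. A quick sketch (values as quoted in the answer):

```python
# Pauling electronegativities as quoted in the answer above.
en = {"N": 3.04, "P": 2.19, "As": 2.18, "Sb": 2.05, "Bi": 2.02}

order = ["N", "P", "As", "Sb", "Bi"]
# Consecutive electronegativity drops going down the group.
steps = {f"{a}->{b}": round(en[a] - en[b], 2) for a, b in zip(order, order[1:])}
# steps == {'N->P': 0.85, 'P->As': 0.01, 'As->Sb': 0.13, 'Sb->Bi': 0.03}
```

The 0.85 drop from N to P dwarfs the remaining steps (at most 0.13), consistent with the claim that P through Bi can be treated uniformly by the diffuse-lone-pair argument.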
Asymptotic analysis of charge conserving Poisson-Boltzmann equations with variable dielectric coefficients

Chiun-Chang Lee, Department of Applied Mathematics, National Hsinchu University of Education, Hsinchu City 30014, Taiwan

Discrete & Continuous Dynamical Systems - A, June 2016, 36(6): 3251-3276. doi: 10.3934/dcds.2016.36.3251
Received September 2015; Revised October 2015; Published December 2015

Abstract: Concerning the influence of the dielectric constant on the electrostatic potential in the bulk of electrolyte solutions, we investigate a charge conserving Poisson-Boltzmann (CCPB) equation [31,32] with a variable dielectric coefficient and a small parameter $\epsilon$ (related to the Debye screening length) in a bounded connected domain with smooth boundary. Under the Robin boundary condition with a given applied potential, the limiting behavior (as $\epsilon\downarrow0$) of the solution (the electrostatic potential) has been rigorously studied. In particular, under the charge neutrality constraint, our result exactly shows the effects of the dielectric coefficient and the applied potential on the limiting value of the solution in the interior domain. The main approach is the Pohozaev identity of this model. On the other hand, under the charge non-neutrality constraint, we show that the maximum difference between the boundary and interior values of the solution has a lower bound $\log\frac{1}{\epsilon}$ as $\epsilon$ goes to zero. Such an asymptotic blow-up behavior describes an unstable phenomenon which is totally different from the behavior of the solution under the charge neutrality constraint.

Keywords: effect of variable dielectrics, charge conserving Poisson-Boltzmann, small Debye length, asymptotic blow-up behavior, Pohozaev's identity.
Mathematics Subject Classification: Primary: 35J25, 35J60, 35C20; Secondary: 35Q92, 45K0.
Citation: Chiun-Chang Lee.
Asymptotic analysis of charge conserving Poisson-Boltzmann equations with variable dielectric coefficients. Discrete & Continuous Dynamical Systems - A, 2016, 36 (6): 3251-3276. doi: 10.3934/dcds.2016.36.3251

References:

[1] V. Barcilon, D.-P. Chen, R. S. Eisenberg and J. W. Jerome, Qualitative properties of steady-state Poisson-Nernst-Planck systems: perturbation and simulation study, SIAM J. Appl. Math., 57 (1997), 631. doi: 10.1137/S0036139995312149.
[2] M. Z. Bazant, K. Thornton and A. Ajdari, Diffuse-charge dynamics in electrochemical systems, Phys. Rev. E, 70 (2004). doi: 10.1103/PhysRevE.70.021506.
[3] M. Z. Bazant, K. T. Chu and B. J. Bayly, Current-Voltage relations for electrochemical thin films, SIAM J. Appl. Math., 65 (2005), 1463. doi: 10.1137/040609938.
[4] D. Bothe, A. Fischer and J. Saal, Global well-posedness and stability of electrokinetic flows, SIAM J. Math. Anal., 46 (2014), 1263. doi: 10.1137/120880926.
[5] S. L. Carnie, D. Y. C. Chan and J. Stankovich, Computation of forces between spherical colloidal particles: Nonlinear Poisson-Boltzmann theory, J. Colloid Interface Sci., 165 (1994), 116. doi: 10.1006/jcis.1994.1212.
[6] Y. S. Choi and R. Lui, An integro-differential equation arising from an electrochemistry model, Quart. Appl. Math., 55 (1997), 677.
[7] D. T. Conroy, R. V. Craster, O. K. Matar and D. T. Papageorgiou, Dynamics and stability of an annular electrolyte film, J. Fluid Mech., 656 (2010), 481. doi: 10.1017/S0022112010001254.
[8] B. Eisenberg, Ionic channels in biological membranes: Natural nanotubes, Acc. Chem. Res., 31 (1998), 117. doi: 10.1021/ar950051e.
[9] A. Erdélyi, W. Magnus, F. Oberhettinger and F. G. Tricomi, Higher Transcendental Functions II (based, in part, on notes left by Harry Bateman), McGraw-Hill, 1953.
[10] W. Fang and K. Ito, Existence and Uniqueness of Steady-State Solutions for an Electrochemistry Model, P. Am. Math. Soc., 129 (2001), 1037. doi: 10.1090/S0002-9939-00-05769-5.
[11] M. A. Fontelos and L. B. Gamboa, On the structure of double layers in Poisson-Boltzmann equation, Discrete Contin. Dyn. Syst. B, 17 (2012), 1939. doi: 10.3934/dcdsb.2012.17.1939.
[12] A. Friedman and K. Tintarev, Boundary asymptotics for solutions of the Poisson-Boltzmann equation, J. Differential Equations, 69 (1987), 15. doi: 10.1016/0022-0396(87)90100-8.
[13] D. Gilbarg and N. S. Trudinger, Elliptic Partial Differential Equations of Second Order, Classics in Mathematics, 2001. doi: 10.1007/978-3-642-61798-0.
[14] A. Glitzky and R. Hünlich, Energetic estimates and asymptotics for electro-reaction-diffusion systems, Z. Angew. Math. Mech., 77 (1997), 823. doi: 10.1002/zamm.19970771105.
[15] M. J. Holst, Multilevel Methods for the Poisson-Boltzmann Equation, Ph.D. thesis, 1993.
[16] Y. Hyon, A Mathematical Model for Electrical Activity in Cell Membrane: Energetic Variational Approach, work in progress.
[17] Y. J. Kang, C. Yang and X. Y. Huang, Electroosmotic flow in a capillary annulus with high zeta potentials, J. Colloid Interface Sci., 253 (2002), 285. doi: 10.1006/jcis.2002.8453.
[18] C. Koch, Biophysics of Computation, Oxford University Press, 1999.
[19] D. Lacoste, G. I. Menon, M. Z. Bazant and J. F. Joanny, Electrostatic and electrokinetic contributions to the elastic moduli of a driven membrane, Eur. Phys. J. E, 28 (2009), 243. doi: 10.1140/epje/i2008-10433-1.
[20] L. Lanzani and Z. Shen, On the Robin boundary condition for Laplace's equation in Lipschitz domains, Commun. Partial Differ. Eq., 29 (2004), 91. doi: 10.1081/PDE-120028845.
[21] C. C. Lee, The charge conserving Poisson-Boltzmann equations: Existence, uniqueness and maximum principle, J. Math. Phys., 55 (2014). doi: 10.1063/1.4878492.
[22] C. C. Lee, H. Lee, Y. Hyon, T. C. Lin and C. Liu, New Poisson-Boltzmann type equations: One-dimensional solutions, Nonlinearity, 24 (2011), 431. doi: 10.1088/0951-7715/24/2/004.
[23] C. C. Lee, H. Lee, Y. Hyon, T. C. Lin and C. Liu, Boundary layer solutions of Charge Conserving Poisson-Boltzmann equations: One-dimensional case, to appear in Commun. Math. Sci.
[24] W. Liu, One-dimensional steady-state Poisson-Nernst-Planck systems for ion channels with multiple ion species, J. Differential Equations, 246 (2009), 428. doi: 10.1016/j.jde.2008.09.010.
[25] W. Liu and H. Xu, A complete analysis of a classical Poisson-Nernst-Planck model for ionic flow, J. Differential Equations, 258 (2015), 1192. doi: 10.1016/j.jde.2014.10.015.
[26] W. Nonner and B. Eisenberg, Ion permeation and glutamate residues linked by Poisson-Nernst-Planck theory in L-type calcium channels, Biophys. J., 75 (1998), 1287. doi: 10.1016/S0006-3495(98)74048-2.
[27] J. H. Park and J. W. Jerome, Qualitative properties of steady-state Poisson-Nernst-Planck systems: Mathematical study, SIAM J. Appl. Math., 57 (1997), 609. doi: 10.1137/S0036139995279809.
[28] I. Rubinstein, Counterion condensation as an exact limiting property of solutions of the Poisson-Boltzmann equation, SIAM J. Appl. Math., 46 (1986), 1024. doi: 10.1137/0146061.
[29] R. Ryham, C. Liu and Z. Q. Wang, On electro-kinetic fluids: One dimensional configurations, Discrete Contin. Dyn. Syst. B, 6 (2006), 357. doi: 10.3934/dcdsb.2006.6.357.
[30] R. Ryham, C. Liu and L. Zikatanov, Mathematical models for the deformation of electrolyte droplets, Discrete Contin. Dyn. Syst. B, 8 (2007), 649. doi: 10.3934/dcdsb.2007.8.649.
[31] H. Sugioka, Ion-conserving Poisson-Boltzmann theory, Phys. Rev. E, 86 (2012). doi: 10.1103/PhysRevE.86.016318.
[32] L. Wan, S. Xu, M. Liao, C. Liu and P. Sheng, Self-consistent approach to global charge neutrality in electrokinetics: A surface potential trap model, Phys. Rev. X, 4 (2014). doi: 10.1103/PhysRevX.4.011042.
[33] F. Ziebert, M. Z. Bazant and D. Lacoste, Effective zero-thickness model for a conductive membrane driven by an electric field, Phys. Rev. E, 81 (2010). doi: 10.1103/PhysRevE.81.031912.
[34] S. Zhou, Z. Wang and B. Li, Mean-field description of ionic size effects with non-uniform ionic sizes: A numerical approach, Phys. Rev. E, 84 (2011). doi: 10.1103/PhysRevE.84.021901.
CommonCrawl
\begin{document} \title{Approximation Algorithms for Correlated Knapsacks \\ and Non-Martingale Bandits} \author{ Anupam Gupta\thanks{Department of Computer Science, Carnegie Mellon University, Pittsburgh PA 15213.} \and Ravishankar Krishnaswamy$^*$ \and Marco Molinaro\thanks{Tepper School of Business, Carnegie Mellon University, Pittsburgh PA 15213.} \and R. Ravi$^\dagger$} \date{} \maketitle \begin{abstract} In the stochastic knapsack problem, we are given a knapsack of size $B$, and a set of jobs whose sizes and rewards are drawn from a known probability distribution. However, the only way to know the actual size and reward is to schedule the job---when it completes, we get to know these values. How should we schedule jobs to maximize the expected total reward? We know constant-factor approximations for this problem when we assume that rewards and sizes are independent random variables, and that we cannot prematurely cancel jobs after we schedule them. What can we say when either or both of these assumptions are changed? The stochastic knapsack problem is of interest in its own right, but techniques developed for it are applicable to other stochastic packing problems. Indeed, ideas for this problem have been useful for budgeted learning problems, where one is given several arms which evolve in a specified stochastic fashion with each pull, and the goal is to pull the arms a total of $B$ times to maximize the reward obtained. Much recent work on this problem focuses on the case when the evolution of the arms follows a martingale, i.e., when the expected reward from the future is the same as the reward at the current state. What can we say when the rewards do not form a martingale? In this paper, we give constant-factor approximation algorithms for the stochastic knapsack problem with correlations and/or cancellations, and also for budgeted learning problems where the martingale condition is not satisfied, using similar ideas. 
Indeed, we can show that previously proposed linear programming relaxations for these problems have large integrality gaps. We propose new time-indexed LP relaxations; using a decomposition and ``gap-filling'' approach, we convert these fractional solutions to distributions over strategies, and then use the LP values and the time ordering information from these strategies to devise a randomized adaptive scheduling algorithm. We hope our LP formulation and decomposition methods may provide a new way to address other correlated bandit problems with more general contexts. \end{abstract} \thispagestyle{empty} \setcounter{page}{0} \section{Introduction} \label{sec:introduction} Stochastic packing problems seem to be conceptually harder than their deterministic counterparts---imagine a situation where some rounding algorithm outputs a solution in which the budget constraint has been exceeded by a constant factor. For deterministic packing problems (with a single constraint), one can now simply pick the most profitable subset of the items which meets the packing constraint; this would give us a profit within a constant of the optimal value. The deterministic packing problems that are not well understood are those with multiple (potentially conflicting) packing constraints. However, for the stochastic problems, even a single packing constraint is not simple to handle. Even though such problems arise in diverse situations, the first study from an approximations perspective was in an important paper of Dean et al.~\cite{DeanGV08} (see also~\cite{dgv05, Dean-thesis}). 
They defined the stochastic knapsack problem, where each job has a random size and a random reward, and the goal is to give an adaptive strategy for irrevocably picking jobs in order to maximize the expected value of those fitting into a knapsack with size $B$---they gave an LP relaxation and rounding algorithm, which produced \emph{non-adaptive} solutions whose performance was surprisingly within a constant factor of the best \emph{adaptive} ones (resulting in a constant adaptivity gap, a notion they also introduced). However, the results required that (a)~the random rewards and sizes for items were independent of each other, and (b)~once a job was placed, it could not be prematurely canceled---it is easy to see that these assumptions change the nature of the problem significantly. The study of the stochastic knapsack problem was very influential---in particular, the ideas here were used to obtain approximation algorithms for \emph{budgeted learning problems} studied by Guha and Munagala~\cite{GuhaM-soda07,GuhaM-stoc07,GuhaM09} and Goel et al.~\cite{GoelKN09}, among others. They considered problems in the multi-armed bandit setting with $k$ arms, each arm evolving according to an underlying state machine with probabilistic transitions when pulled. Given a budget $B$, the goal is to pull arms up to $B$ times to maximize the reward---payoffs are associated with states, and the reward is some function of payoffs of the states seen during the evolution of the algorithm. (E.g., it could be the sum of the payoffs of all states seen, or the reward of the best final state, etc.) The above papers gave $O(1)$-approximations, index-based policies and adaptivity gaps for several budgeted learning problems. 
However, these results all required the assumption that the rewards satisfied a \emph{martingale property}, namely, if an arm is in some state $u$, one pull of this arm would bring an expected payoff equal to the payoff of state $u$ itself --- the motivation for such an assumption comes from the fact that the different arms are assumed to be associated with a fixed (but unknown) reward, but we only begin with a prior distribution of possible rewards. Then, the expected reward from the next pull of the arm, \emph{conditioned} on the previous pulls, forms a Doob martingale. However, there are natural instances where the martingale property need not hold. For instance, the evolution of the prior may depend not just on the observations made but also on external factors (such as time). Or, in a marketing application, the evolution of a customer's state may require repeated ``pulls'' (or marketing actions) before the customer transitions to a high reward state and makes a purchase, while the intermediate states may not yield any reward. These lead us to consider the following problem: there is a collection of $n$ arms, each characterized by an arbitrary (known) Markov chain, and there are rewards associated with the different states. When we play an arm, it makes a state transition according to the associated Markov chain, and fetches the corresponding reward of the new state. What should our strategy be in order to maximize the expected total reward we can accrue by making at most $B$ pulls in total? \subsection{Results} Our main results are the following: We give the first constant-factor approximations for the general version of the stochastic knapsack problem where rewards could be correlated with the sizes. Our techniques are general and also apply to the setting when jobs could be canceled arbitrarily. 
We then extend those ideas to give the first constant-factor approximation algorithms for a class of budgeted learning problems with Markovian transitions where the martingale property is not satisfied. We summarize these in \lref[Table]{tab:results}. \begin{table} \begin{center} \begin{tabular}{ | l | l | l | } \hline Problem & Restrictions & Paper \\ \hline Stochastic Knapsack & Fixed Rewards, No Cancellation & \cite{dgv05} \\ \hline & Correlated Rewards, No Cancellation & \lref[Section]{sec:nopmtn} \\ \hline & Correlated Rewards, Cancellation & \lref[Section]{sec:sk} \\ \hline Multi-Armed Bandits & Martingale Assumption & \cite{GuhaM-soda07} \\ \hline & No Martingale Assumption & \lref[Section]{sec:mab} \\ \hline \end{tabular} \caption{Summary of Results}\label{tab:results} \end{center} \end{table} \subsection{Why Previous Ideas Don't Extend, and Our Techniques} \label{sec:high-level-idea} One reason why stochastic packing problems are more difficult than their deterministic counterparts is that, unlike in the deterministic setting, here we cannot simply take a solution with expected reward $R^*$ that packs into a knapsack of size $2B$ and convert it (by picking a subset of the items) into a solution which obtains a constant fraction of the reward $R^*$ whilst packing into a knapsack of size $B$. In fact, there are examples where a budget of $2B$ can fetch much more reward than what a budget of size $B$ can (see \lref[Appendix]{sec:badness-corr}). Another distinction from deterministic problems is that allowing cancellations can drastically increase the value of the solution (see \lref[Appendix]{sec:badness-cancel}). The model used in previous works on stochastic knapsack and on budgeted learning circumvented both issues---in contrast, our model forces us to address them. \textbf{Stochastic Knapsack:} Dean et al.~\cite{DeanGV08, Dean-thesis} assume that the reward/profit of an item is independent of its stochastic size. 
Moreover, their model does not consider the possibility of canceling jobs in the middle. These assumptions simplify the structure of the decision tree and make it possible to formulate a (deterministic) knapsack-style LP, and round it. However, as shown in \lref[Appendix]{sec:egs}, their LP relaxation performs poorly when either correlation or cancellation is allowed. This is the first issue we need to address. \textbf{Budgeted Learning:} Obtaining approximations for budgeted learning problems is a more complicated task, since cancellations may be inherent in the problem formulation, i.e., any strategy would stop playing a particular arm and switch to another, and the rewards obtained by playing any arm are naturally correlated with the (current) state and hence with the number of previous pulls made on the item/arm. The first issue is often tackled by using more elaborate LPs with a flow-like structure that compute a probability distribution over the different times at which the LP stops playing an arm (e.g., \cite{GuhaM-stoc07}), but the latter issue is less understood. Indeed, several papers on this topic present strategies that fetch an expected reward which is a constant factor of an optimal solution's reward, but which may violate the budget by a constant factor. In order to obtain an approximate solution without violating the budget, they critically make use of the \emph{martingale property}---with this assumption at hand, they can truncate the last arm played to fit the budget without incurring any loss in expected reward. However, such an idea fails when the martingale property is not satisfied, and these LPs now have large integrality gaps (see \lref[Appendix]{sec:badness-corr}). 
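To make the martingale condition concrete: it requires that, from every state $u$, a single pull yields expected payoff equal to the payoff of $u$ itself. A minimal sketch of this check follows; the state encoding is a toy of our own, purely for illustration, and not from this paper.

```python
def is_martingale(arm, tol=1e-9):
    """Check the martingale property for a single arm.

    `arm` maps each state u to (payoff, transitions), where transitions is a
    dict next_state -> probability.  The property holds iff, for every state,
    one pull has expected payoff equal to the payoff of the current state."""
    for u, (payoff, transitions) in arm.items():
        expected_next = sum(p * arm[v][0] for v, p in transitions.items())
        if abs(expected_next - payoff) > tol:
            return False
    return True
```

An arm whose payoffs arise from Bayesian updates of a fixed unknown reward passes this check by construction; arms like the marketing example above, whose intermediate states pay nothing before a high-reward state is reached, fail it.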
At a high level, a major drawback with previous LP relaxations for both problems is that the constraints are \emph{local} for each arm/job, i.e., they track the probability distribution over how long each item/arm is processed (either till completion or cancellation), and there is an additional global constraint binding the total number of pulls/total size across items. This results in two different issues. For the (correlated) stochastic knapsack problem, these LPs do not capture the case when all the items have high contention, since they want to play early in order to collect profit. And for the general multi-armed bandit problem, we show that no local LP can be good since such LPs do not capture the notion of \emph{preempting} an arm, namely switching from one arm to another, and possibly returning to the original arm later. Indeed, we show cases when any near-optimal strategy must switch between different arms (see \lref[Appendix]{sec:preemption-gap})---this is a major difference from previous work with the martingale property where there exist near-optimal strategies that never return to any arm~\cite[Lemma 2.1]{GuhaM09}. At a high level, the lack of the martingale property means our algorithm needs to make adaptive decisions, where each move is a function of the previous outcomes; in particular this may involve revisiting a particular arm several times, with interruptions in the middle. We resolve these issues in the following manner: incorporating cancellations into stochastic knapsack can be handled by just adapting the flow-like LPs from the multi-armed bandits case. To resolve the problems of contention and preemption, we formulate a \emph{global time-indexed} relaxation that forces the LP solution to commit each job to begin at a time, and places constraints on the maximum expected reward that can be obtained if the algorithm begins an item at a particular time. 
Furthermore, the time-indexing also enables our rounding scheme to extract information about when to preempt an arm and when to re-visit it based on the LP solution; in fact, these decisions will possibly be different for different (random) outcomes of any pull, but the LP encodes the information for each possibility. We believe that our rounding approach may be of interest in other stochastic optimization problems. Another important version of budgeted learning is when we are allowed to make up to $B$ plays as usual but now we can ``exploit'' at most $K$ times: reward is only fetched when an arm is exploited and again depends on its current state. There is a further constraint that once an arm is exploited, it must then be discarded. The LP-based approach here can be easily extended to that case as well. \subsection{Roadmap} We begin in \lref[Section]{sec:nopmtn} by presenting a constant-factor approximation algorithm for the stochastic knapsack problem (\ensuremath{\mathsf{StocK}}\xspace) when rewards could be correlated with the sizes, but decisions are irrevocable, i.e., job cancellations are not allowed. Then, we build on these ideas in \lref[Section]{sec:sk}, and present our results for the (correlated) stochastic knapsack problem, where job cancellation is allowed. In \lref[Section]{sec:mab}, we move on to the more general class of multi-armed bandit (\ensuremath{\mathsf{MAB}}\xspace) problems. For clarity in exposition, we present our algorithm for \ensuremath{\mathsf{MAB}}\xspace, assuming that the transition graph for each arm is an \emph{arborescence} (i.e., a directed tree), and then generalize it to arbitrary transition graphs in \lref[Section]{dsec:mab}. 
We remark that while our LP-based approach for the budgeted learning problem implies approximation algorithms for the stochastic knapsack problem as well, the knapsack problem provides a gentler introduction to the issues---it motivates and gives insight into our techniques for \ensuremath{\mathsf{MAB}}\xspace. Similarly, it is easier to understand our techniques for the \ensuremath{\mathsf{MAB}}\xspace problem when the transition graph of each arm's Markov chain is a tree. Several illustrative examples are presented in \lref[Appendix]{sec:egs}, e.g., illustrating why we need adaptive strategies for the non-martingale \ensuremath{\mathsf{MAB}}\xspace problems, and why some natural ideas do not work. Finally, the extension of our algorithm for \ensuremath{\mathsf{MAB}}\xspace for the case when rewards are available only when the arms are explicitly exploited with budgets on both the exploration and exploitation pulls appears in \lref[Appendix]{xsec:mab}. Note that this algorithm strictly generalizes the previous work on budgeted learning for \ensuremath{\mathsf{MAB}}\xspace with the martingale property~\cite{GuhaM-stoc07}. \subsection{Related Work} \label{sec:related-work} Stochastic scheduling problems have been long studied since the 1960s (e.g.,~\cite{BirgeL97, Pinedo}); however, there are fewer papers on approximation algorithms for such problems. Kleinberg et al.~\cite{KRT-sched}, and Goel and Indyk~\cite{GI99} consider stochastic knapsack problems with chance constraints: find the max-profit set which will overflow the knapsack with probability at most $p$. However, their results hold for deterministic profits and specific size distributions. Approximation algorithms for minimizing average completion times with arbitrary job-size distributions were studied by~\cite{MohringSU99, SkutU01}. 
The work most relevant to us is that of Dean, Goemans and Vondr\'ak~\cite{DeanGV08, dgv05, Dean-thesis} on stochastic knapsack and packing; apart from algorithms (for independent rewards and sizes), they show the problem to be PSPACE-hard when correlations are allowed. \cite{ChawlaR06} study stochastic flow problems. Recent work of Bhalgat et al.~\cite{BGK11} presents a PTAS but violates the capacity by a factor $(1+\epsilon)$; they also get better constant-factor approximations without violations. The general area of learning with costs is a rich and diverse one (see, e.g.,~\cite{Bert05,Gittins89}). Approximation algorithms start with the work of Guha and Munagala~\cite{GuhaM-stoc07}, who gave LP-rounding algorithms for some problems. Further papers by these authors~\cite{GuhaMS07, GuhaM09} and by Goel et al.~\cite{GoelKN09} give improvements, relate LP-based techniques and index-based policies and also give new index policies. (See also~\cite{GGM06,GuhaM-soda07}.) \cite{GuhaM09} considers switching costs, \cite{GuhaMP11} allows pulling many arms simultaneously, or when there is delayed feedback. All these papers assume the martingale condition. \newcommand{\mathsf{ER}}{\mathsf{ER}} \newcommand{{\textsf{StocK-Small}}\xspace}{{\textsf{StocK-Small}}\xspace} \newcommand{{\textsf{StocK-Large}}\xspace}{{\textsf{StocK-Large}}\xspace} \newcommand{{\textsf{StocK-NoCancel}}\xspace}{{\textsf{StocK-NoCancel}}\xspace} \section{The Correlated Stochastic Knapsack without Cancellation} \label{sec:nopmtn} We begin by considering the stochastic knapsack problem (\ensuremath{\mathsf{StocK}}\xspace), when the job rewards may be correlated with their sizes. This generalizes the problem studied by Dean et al.~\cite{dgv05}, who assume that the rewards are independent of the size of the job. We first explain why the LP of~\cite{dgv05} has a large integrality gap for our problem; this will naturally motivate our time-indexed formulation. 
We then present a simple randomized rounding algorithm which produces a non-adaptive strategy and show that it is an $O(1)$-approximation. \subsection{Problem Definitions and Notation} \label{sec:knap-model} We are given a knapsack of total budget $B$ and a collection of $n$ stochastic items. For any item $i \in [1,n]$, we are given a probability distribution over $(\mathsf{size}, \mathsf{reward})$ pairs specified as follows: for each integer value of $t \in [1,B]$, the tuple $(\pi_{i,t}, R_{i,t})$ denotes the probability $\pi_{i,t}$ that item $i$ has a size $t$, and the corresponding reward is $R_{i,t}$. Note that the reward for a job is now correlated with its size; however, these quantities for two different jobs are still independent of each other. An algorithm to \emph{adaptively} process these items can do the following actions at the end of each timestep: \begin{inparaenum} \item[(i)] an item may complete at a certain size, giving us the corresponding reward, and the algorithm may choose a new item to start processing, or \item[(ii)] the knapsack becomes full, at which point the algorithm cannot process any more items, and any currently running job does not accrue any reward. \end{inparaenum} The objective function is to maximize the total expected reward obtained from all completed items. Notice that we do not allow the algorithm to cancel an item before it completes. We relax this requirement in \lref[Section]{sec:sk}. \subsection{LP Relaxation} The LP relaxation in~\cite{dgv05} was (essentially) a knapsack LP where the sizes of items are replaced by the expected sizes, and the rewards are replaced by the expected rewards. While this was sufficient when an item's reward is fixed (or chosen randomly but independent of its size), we give an example in \lref[Appendix]{sec:badness-corr} where such an LP (and in fact, the class of more general LPs used for approximating \ensuremath{\mathsf{MAB}}\xspace problems) would have a large integrality gap. 
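To make the model concrete, the following Monte-Carlo sketch estimates the expected reward of a fixed, non-adaptive ordering of items under the rules above; the dictionary encoding of an item's $(\mathsf{size}, \mathsf{reward})$ distribution is our own, purely for illustration.

```python
import random

def simulate_fixed_order(items, B, order, trials=20000):
    """Monte-Carlo estimate of the expected reward of a fixed (non-adaptive)
    ordering.  items[i] maps each possible size t to (probability, reward);
    a job whose realized size would overflow the remaining budget accrues
    no reward, and no further items are processed."""
    total = 0.0
    for _ in range(trials):
        used = 0
        for i in order:
            sizes = list(items[i])
            weights = [items[i][s][0] for s in sizes]
            s = random.choices(sizes, weights=weights)[0]
            if used + s > B:          # knapsack overflows: running job pays nothing
                break
            used += s
            total += items[i][s][1]   # reward is correlated with the realized size
    return total / trials
```

Note that the reward lookup `items[i][s][1]` is exactly where the size--reward correlation enters: the realized size determines which reward is collected.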
As mentioned in \lref[Section]{sec:high-level-idea}, the reason why local LPs don't work is that there could be high contention for being scheduled early (i.e., there could be a large number of items which all fetch reward if they instantiate to a large size, but these events occur with low probability). In order to capture this contention, we write a global time-indexed LP relaxation. The variable $x_{i,t} \in [0,1]$ indicates that item $i$ is scheduled at (global) time $t$; $S_i$ denotes the random variable for the size of item $i$, and $\mathsf{ER}_{i,t} = \sum_{s \le B - t} \pi_{i,s} R_{i,s}$ captures the expected reward that can be obtained from item $i$ \emph{if it begins} at time $t$ (no reward is obtained for sizes that do not fit within the remaining budget). \begin{alignat}{2} \tag{$\mathsf{LP}_{\sf NoCancel}$} \label{lp:large} \max &\textstyle \sum_{i,t} \mathsf{ER}_{i,t} \cdot x_{i,t} &\\ &\textstyle \sum_t x_{i,t} \le 1 &\forall i \label{LPbig1}\\ &\textstyle \sum_{i, t' \le t} x_{i,t'} \cdot \mathbb{E}[\min(S_i,t)] \le 2t \qquad &\forall t \in [B] \label{LPbig2}\\ &x_{i,t} \in [0,1] &\forall t \in [B], \forall i \label{LPbig3} \end{alignat} While the size of the above LP (and the running time of the rounding algorithm below) depends polynomially on $B$, i.e., is pseudo-polynomial, it is possible to write a compact (approximate) LP and then round it; details on the polynomial time implementation appear in \lref[Appendix]{app:polytime-nopmtn}. Notice the constraints involving the \emph{truncated random variables} in equation~\eqref{LPbig2}: these are crucial for showing the correctness of the rounding algorithm {\textsf{StocK-NoCancel}}\xspace. Furthermore, the ideas used here will appear subsequently in the \ensuremath{\mathsf{MAB}}\xspace algorithm; for \ensuremath{\mathsf{MAB}}\xspace, even though we can't explicitly enforce such a constraint in the LP, we will end up inferring a similar family of inequalities from a near-optimal LP solution. 
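For intuition, the relaxation $\mathsf{LP}_{\sf NoCancel}$ can be assembled and solved directly on a toy instance. The sketch below is our own illustration (it assumes scipy is available); it builds the objective from $\mathsf{ER}_{i,t}$ and the two constraint families above.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp_nocancel(pi, R, B):
    """Assemble and solve LP_NoCancel on a toy instance (illustration only).
    pi[i][s] / R[i][s]: probability and reward of item i having size s, for
    s = 1..B (index 0 is unused padding).  LP variables x_{i,t}, t = 1..B."""
    n = len(pi)
    idx = lambda i, t: i * B + (t - 1)          # flatten (i, t) -> column index
    # E[min(S_i, t)] and ER_{i,t} = sum_{s <= B-t} pi[i][s] * R[i][s]
    Emin = [[sum(pi[i][s] * min(s, t) for s in range(1, B + 1))
             for t in range(B + 1)] for i in range(n)]
    ER = [[sum(pi[i][s] * R[i][s] for s in range(1, B - t + 1))
           for t in range(B + 1)] for i in range(n)]
    c = np.zeros(n * B)
    for i in range(n):
        for t in range(1, B + 1):
            c[idx(i, t)] = -ER[i][t]            # maximize <=> minimize the negation
    A, b = [], []
    for i in range(n):                          # each item scheduled at most once
        row = np.zeros(n * B)
        row[i * B:(i + 1) * B] = 1.0
        A.append(row); b.append(1.0)
    for t in range(1, B + 1):                   # truncated-size constraint <= 2t
        row = np.zeros(n * B)
        for i in range(n):
            for tp in range(1, t + 1):
                row[idx(i, tp)] = Emin[i][t]
        A.append(row); b.append(2.0 * t)
    res = linprog(c, A_ub=np.vstack(A), b_ub=np.array(b),
                  bounds=[(0, 1)] * (n * B))
    return -res.fun
```

On an instance with a single deterministic item of size $1$ and reward $10$ with $B = 2$, the solver places all of $x_{1,1}$ at time $1$ and recovers LP value $10$, as expected.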
\begin{lemma} \label{thm:lp-large-valid} The above relaxation is valid for the \ensuremath{\mathsf{StocK}}\xspace problem when cancellations are not permitted, and has objective value $\ensuremath{\mathsf{LPOpt}\xspace} \geq \ensuremath{\mathsf{Opt}\xspace}$, where $\ensuremath{\mathsf{Opt}\xspace}$ is the expected profit of an optimal adaptive policy. \end{lemma} \begin{proof} Consider an optimal policy $\ensuremath{\mathsf{Opt}\xspace}$ and let $x^*_{i,t}$ denote the probability that item $i$ is scheduled at time $t$. We first show that $\{x^*\}$ is a feasible solution for the LP relaxation \ref{lp:large}. It is easy to see that constraints~\eqref{LPbig1} and~\eqref{LPbig3} are satisfied. To prove that constraints~\eqref{LPbig2} are also satisfied, consider some $t \in [B]$ and some run (over random choices of item sizes) of the optimal policy. Let $\mathbf{1}^{{\sf sched}}_{i,t'}$ be the indicator variable that item $i$ is scheduled at time $t'$ and let $\mathbf{1}^{{\sf size}}_{i,s}$ be the indicator variable for whether the size of item $i$ is $s$. Also, let $L_t$ be the random variable indicating the last item scheduled at or before time $t$. Notice that $L_t$ is the only item scheduled before or at time $t$ whose execution may go over time $t$. 
Therefore, we get that $$\sum_{i \neq L_t} \sum_{t' \le t} \sum_{s\leq B} \mathbf{1}^{{\sf sched}}_{i,t'} \cdot \mathbf{1}^{{\sf size}}_{i,s} \cdot s \le t.$$ Including $L_t$ in the summation and truncating the sizes by $t$, we immediately obtain $$\sum_i \sum_{t' \le t} \sum_{s} \mathbf{1}^{{\sf sched}}_{i,t'} \cdot \mathbf{1}^{{\sf size}}_{i,s} \cdot \min(s, t) \le 2t.$$ Now, taking expectation (over all of \ensuremath{\mathsf{Opt}\xspace}'s sample paths) on both sides and using linearity of expectation we have $$\sum_i \sum_{t' \le t} \sum_{s} \mathbb{E} \left[\mathbf{1}^{{\sf sched}}_{i,t'} \cdot \mathbf{1}^{{\sf size}}_{i,s}\right] \cdot \min(s,t) \le 2t.$$ However, because $\ensuremath{\mathsf{Opt}\xspace}$ decides whether to schedule an item before observing the size it instantiates to, we have that $\mathbf{1}^{{\sf sched}}_{i,t'}$ and $\mathbf{1}^{{\sf size}}_{i,s}$ are independent random variables; hence, the LHS above can be re-written as \begin{align*} &\sum_i \sum_{t' \le t} \sum_s \Pr[\mathbf{1}^{{\sf sched}}_{i,t'} = 1 \wedge \mathbf{1}^{{\sf size}}_{i,s} = 1] \min(s,t) \\ &= \sum_i \sum_{t' \le t} \Pr[\mathbf{1}^{{\sf sched}}_{i,t'} = 1] \sum_s \Pr[\mathbf{1}^{{\sf size}}_{i,s} = 1] \min(s,t) \\ &= \sum_i \sum_{t' \le t} x^*_{i,t'} \cdot \mathbb{E}[\min(S_i,t)] \end{align*} Hence constraints \eqref{LPbig2} are satisfied. Now we argue that the expected reward of $\ensuremath{\mathsf{Opt}\xspace}$ is equal to the value of the solution $x^*$. Let $O_i$ be the random variable denoting the reward obtained by $\ensuremath{\mathsf{Opt}\xspace}$ from item $i$. 
Again, due to the independence between $\ensuremath{\mathsf{Opt}\xspace}$ scheduling an item and the size it instantiates to, we get that the expected reward that $\ensuremath{\mathsf{Opt}\xspace}$ gets from executing item $i$ at time $t$ is $$\mathbb{E}[O_i | \mathbf{1}^{{\sf sched}}_{i,t} = 1] = \sum_{s \le B - t} \pi_{i,s} R_{i,s} = \mathsf{ER}_{i,t}.$$ Thus the expected reward from item $i$ is obtained by considering all possible starting times for $i$: \begin{align*} \mathbb{E}[O_i] = \sum_t \Pr[\mathbf{1}^{{\sf sched}}_{i,t} = 1] \cdot \mathbb{E}[O_i | \mathbf{1}^{{\sf sched}}_{i,t} = 1] = \sum_t \mathsf{ER}_{i,t} \cdot x^*_{i,t}. \end{align*} This shows that \ref{lp:large} is a valid relaxation for our problem and completes the proof of the lemma. \end{proof} We are now ready to present our rounding algorithm {\textsf{StocK-NoCancel}}\xspace (\lref[Algorithm]{alg:sksnocancel}). It is a simple randomized rounding procedure which (i) picks the start time of each item according to the corresponding distribution in the optimal LP solution, and (ii) plays the items in order of the (random) start times. To ensure that the budget is not violated, we also drop each item independently with some constant probability. \begin{algorithm}[ht!] \caption{Algorithm {\textsf{StocK-NoCancel}}\xspace} \begin{algorithmic}[1] \label{alg:sksnocancel} \STATE for each item $i$, \textbf{assign} a random start-time $D_i = t$ with probability $\frac{x^*_{i,t}}{4}$; with probability $1 - \sum_{t} \frac{x^*_{i,t}}{4}$, completely ignore item $i$ ($D_i = \infty$ in this case). \label{alg:big1} \FOR{$j$ from $1$ to $n$} \STATE Consider the item $i$ which has the $j$th smallest deadline (and $D_i \neq \infty$) \label{alg:big2} \IF{the items added so far to the knapsack occupy at most $D_i$ space} \STATE add $i$ to the knapsack. 
\label{alg:big3} \ENDIF \label{alg:big4} \ENDFOR \end{algorithmic} \end{algorithm} Notice that the strategy obtained by the rounding procedure obtains reward from all items which are not dropped and which do not fail (i.e. they can start being scheduled before the sampled start-time $D_i$ in \lref[Step]{alg:big1}); we now bound the failure probability. \begin{lemma} \label{lem:big-fail} For every $i$, $\Pr(i~\mathsf{fails} \mid D_i = t) \le 1/2$. \end{lemma} \begin{proof} Consider an item $i$ and time $t \neq \infty$ and condition on the event that $D_i = t$. Let us consider the execution of the algorithm when it tries to add item $i$ to the knapsack in \lref[steps]{alg:big2}-\ref{alg:big4}. Now, let $Z$ be a random variable denoting \emph{how much of the interval} $[0,t]$ of the knapsack is occupied by previously scheduled items, at the time when $i$ is considered for addition; since $i$ does not fail when $Z < t$, it suffices to prove that $\Pr(Z \ge t) \le 1/2$. For some item $j \neq i$, let $\mathbf{1}_{D_j \le t}$ be the indicator variable that $D_j \le t$; notice that by the order in which algorithm {\textsf{StocK-NoCancel}}\xspace adds items into the knapsack, it is also the indicator that $j$ was considered before $i$. In addition, let $\mathbf{1}^{{\sf size}}_{j,s}$ be the indicator variable that $S_j = s$. 
Now, if $Z_j$ denotes the total amount of the interval $[0,t]$ that $j$ occupies, we have $$ Z_j \le \mathbf{1}_{D_j \le t} \sum_s \mathbf{1}^{{\sf size}}_{j,s} \min(s, t).$$ Now, using the independence of $\mathbf{1}_{D_j \le t}$ and $\mathbf{1}^{{\sf size}}_{j,s}$, we have \begin{equation} \mathbb{E}[Z_j] \textstyle \le \mathbb{E}[\mathbf{1}_{D_j \le t}] \cdot \mathbb{E}[\min(S_j, t)] = \frac{1}{4} \sum_{t' \le t} x^*_{j,t'} \cdot \mathbb{E}[\min(S_j, t)] \end{equation} Since $Z = \sum_j Z_j$, we can use linearity of expectation and the fact that $\{x^*\}$ satisfies LP constraint~\eqref{LPbig2} to get \begin{align*} \mathbb{E}[Z] &\textstyle \le \frac{1}{4} \sum_j \sum_{t' \le t} x^*_{j,t'} \cdot \mathbb{E}[\min(S_j, t)] \le \frac{t}{2}\;. \end{align*} To conclude the proof of the lemma, we apply Markov's inequality to obtain $\Pr(Z \ge t) \le 1/2$. \end{proof} To complete the analysis, we use the fact that any item chooses a random start time $D_i = t$ with probability $x^*_{i,t}/4$, and conditioned on this event, it is added to the knapsack with probability at least $1/2$ from \lref[Lemma]{lem:big-fail}; in this case, we get an expected reward of at least $\mathsf{ER}_{i,t}$. The theorem below (formally proved in \lref[Appendix]{app:nopmtn-proof}) then follows by linearity of expectations. \begin{theorem}\label{thm:large} The expected reward of our randomized algorithm is at least $\frac18$ of $\ensuremath{\mathsf{LPOpt}\xspace}$. \end{theorem} \section{Stochastic Knapsack with Correlated Rewards and Cancellations} \label{sec:sk} In this section, we present our algorithm for stochastic knapsack (\ensuremath{\mathsf{StocK}}\xspace) where we allow correlations between rewards and sizes, and also allow cancellation of jobs. The example in \lref[Appendix]{sec:badness-cancel} shows that there can be an arbitrarily large gap in the expected profit between strategies that can cancel jobs and those that can't. 
Hence we need to write new LPs to capture the benefit of cancellation, which we do in the following manner. Consider any job $j$: we can create two jobs from it, the ``early'' version of the job, where we discard profits from any instantiation where the size of the job is more than $B/2$, and the ``late'' version of the job where we discard profits from instantiations of size at most $B/2$. Hence, we can get at least half the optimal value by flipping a fair coin and, based on the outcome, collecting rewards from either the early or the late versions of jobs. In the next section, we show how to obtain a constant-factor approximation for the first kind. For the second kind, we argue that cancellations don't help; we can then reduce it to \ensuremath{\mathsf{StocK}}\xspace without cancellations (considered in \lref[Section]{sec:nopmtn}). \subsection{Case I: Jobs with Early Rewards} \label{caseSmall} We begin with the setting in which only small-size instantiations of items may fetch reward, i.e., the rewards $R_{i,t}$ of every item $i$ are assumed to be $0$ for $t > B/2$. In the following LP relaxation \ref{lpone}, $v_{i,t} \in [0,1]$ tries to capture the probability with which $\ensuremath{\mathsf{Opt}\xspace}$ will process item $i$ for \emph{at least} $t$ timesteps\footnote{In the following two sections, we use the word timestep to refer to processing one unit of some item.}, and $s_{i,t} \in [0,1]$ captures the probability that $\ensuremath{\mathsf{Opt}\xspace}$ stops processing item $i$ after \emph{exactly} $t$ timesteps. The time-indexed formulation causes the algorithm to have a running time of $\operatorname{poly}(B)$---however, it is easy to write compact (approximate) LPs and then round them; we describe the necessary changes to obtain an algorithm with running time $\operatorname{poly}(n, \log B)$ in \lref[Appendix]{app:polytime}.
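To make the roles of $v_{i,t}$ and $s_{i,t}$ concrete, consider a single item under the policy that never cancels: then $v_{i,t}$ is just the tail probability $\sum_{t' \geq t} \pi_{i,t'}$ and $s_{i,t} = \pi_{i,t}$, so $v_{i,t} = s_{i,t} + v_{i,t+1}$ holds and the conditional-stopping inequality is tight. The following sketch checks this numerically (illustrative only; the encoding is ours, not from the paper):

```python
def no_cancel_solution(pi, B):
    """LP variables induced by the 'never cancel' policy for one item.

    pi: dict mapping size t -> Pr[size = t] (sizes in 1..B).
    Returns (v, s) with v[t] = Pr[processed >= t steps] and
    s[t] = Pr[stops after exactly t steps]."""
    v = {t: sum(p for tt, p in pi.items() if tt >= t) for t in range(0, B + 2)}
    s = {t: pi.get(t, 0.0) for t in range(0, B + 1)}
    return v, s

def hazard(pi, t):
    """Conditional stopping probability pi_t / sum_{t' >= t} pi_{t'}."""
    tail = sum(p for tt, p in pi.items() if tt >= t)
    return pi.get(t, 0.0) / tail if tail > 0 else 0.0
```

For any size distribution, `v[0] == 1` (matching the constraint $v_{i,0}=1$), each `v[t]` splits into `s[t] + v[t+1]`, and `s[t]` equals the hazard rate times `v[t]`.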
\begin{alignat}{2} \max &\textstyle \sum_{1 \leq t \leq B/2} \sum_{1 \leq i \leq n} v_{i,t} \cdot R_{i,t} \frac{\pi_{i,t}}{\sum_{t' \geq t} \pi_{i,t'}} & & \tag{$\mathsf{LP}_S$} \label{lpone} \\ & v_{i,t} = s_{i,t} + v_{i,t+1} & \qquad & \forall \, t \in [0,B], \, i \in [n] \label{eq:1} \\ &s_{i,t} \geq \frac{\pi_{i,t}}{\sum_{t' \geq t} \pi_{i,t'}} \cdot v_{i,t} & \qquad & \forall \, t \in [0,B], \, i \in [n] \label{eq:2} \\ &\textstyle \sum_{i \in [n]} \sum_{t \in [0,B]} t \cdot s_{i,t} \leq B & \label{eq:3}\\ &v_{i,0} = 1 & \qquad & \forall \, i \label{eq:4} \\ v_{i,t}, s_{i,t} &\in [0,1] & \qquad & \forall \, t \in [0,B], \, i \in [n] \label{eq:5} \end{alignat} \begin{theorem} \label{thm:lp1-valid} The linear program~(\ref{lpone}) is a valid relaxation for the \ensuremath{\mathsf{StocK}}\xspace problem, and hence the optimal value $\ensuremath{\mathsf{LPOpt}\xspace}$ of the LP is at least the total expected reward $\ensuremath{\mathsf{Opt}\xspace}$ of an optimal solution. \end{theorem} \begin{proof} Consider an optimal solution $\ensuremath{\mathsf{Opt}\xspace}$ and let $v^*_{i,t}$ and $s^*_{i,t}$ denote the probability that $\ensuremath{\mathsf{Opt}\xspace}$ processes item $i$ for at least $t$ timesteps, and the probability that $\ensuremath{\mathsf{Opt}\xspace}$ stops processing item $i$ at exactly $t$ timesteps, respectively. We will now show that all the constraints of~\ref{lpone} are satisfied one by one. To this end, let $R_i$ denote the random variable (over different executions of $\ensuremath{\mathsf{Opt}\xspace}$) for the amount of processing done on job $i$. Notice that $ \Pr[R_i \geq t] = \Pr[R_i \geq (t+1)] + \Pr[R_i = t]$. But now, by definition we have $\Pr[R_i \geq t] = v^*_{i,t}$ and $\Pr[R_i = t] = s^*_{i,t}$. This shows that $\{v^*, s^*\}$ satisfies constraints~(\ref{eq:1}).
For the next constraint, observe that conditioned on $\ensuremath{\mathsf{Opt}\xspace}$ running an item $i$ for at least $t$ time steps, the probability of item $i$ stopping because its size instantiates to exactly $t$ is $\pi_{i,t}/\sum_{t' \geq t} \pi_{i,t'}$, i.e., $\Pr [ R_i = t \mid R_i \geq t ] \geq \pi_{i,t}/\sum_{t' \geq t} \pi_{i,t'}$. This shows that $\{v^*, s^*\}$ satisfies constraints~(\ref{eq:2}). Finally, to see why constraint~(\ref{eq:3}) is satisfied, consider any particular run of the optimal algorithm and let $\mathbf{1}^{stop}_{i,t}$ denote the indicator random variable of the event $R_i = t$. Then we have \[ \sum_{i} \sum_{t} \mathbf{1}^{stop}_{i,t} \cdot t \leq B \] Now, taking expectation over all runs of $\ensuremath{\mathsf{Opt}\xspace}$ and using linearity of expectation and the fact that $\mathbb{E}[\mathbf{1}^{stop}_{i,t}] = s^*_{i,t}$, we get constraint~(\ref{eq:3}). As for the objective function, we again consider a particular run of the optimal algorithm and let $\mathbf{1}^{proc}_{i,t}$ now denote the indicator random variable for the event $(R_i \geq t)$, and $\mathbf{1}^{size}_{i,t}$ denote the indicator variable for whether the size of item $i$ is instantiated to exactly $t$ in this run.
Then we have the total reward collected by $\ensuremath{\mathsf{Opt}\xspace}$ in this run to be exactly \[ \sum_{i} \sum_{t} \mathbf{1}^{proc}_{i,t} \cdot \mathbf{1}^{size}_{i,t} \cdot R_{i,t} \] Now, we simply take the expectation of the above random variable over all runs of $\ensuremath{\mathsf{Opt}\xspace}$, and then use the following fact about $\mathbb{E}[\mathbf{1}^{proc}_{i,t} \mathbf{1}^{size}_{i,t}]$: \begin{eqnarray} \nonumber \mathbb{E}[\mathbf{1}^{proc}_{i,t} \mathbf{1}^{size}_{i,t}] &=& \Pr[\mathbf{1}^{proc}_{i,t} = 1 \wedge \mathbf{1}^{size}_{i,t} = 1]\\ \nonumber & =& \Pr[\mathbf{1}^{proc}_{i,t} = 1] \Pr[\mathbf{1}^{size}_{i,t} = 1 \, |\, \mathbf{1}^{proc}_{i,t} = 1] \\ \nonumber & =& v^*_{i,t} \frac{\pi_{i,t}}{\sum_{t' \geq t} \pi_{i,t'}} \end{eqnarray} We thus get that the expected reward collected by $\ensuremath{\mathsf{Opt}\xspace}$ is exactly equal to the objective function value of the LP formulation for the solution $(v^*, s^*)$. \end{proof} Our rounding algorithm is very natural, and simply tries to mimic the probability distribution (over when to stop each item) as suggested by the optimal LP solution. To this end, let $(v^*, s^*)$ denote an optimal fractional solution. The reason why we introduce some damping (in the selection probabilities) up-front is to make sure that we can appeal to Markov's inequality and ensure that, with good probability, the knapsack does not get violated. \begin{algorithm}[ht!] \caption{Algorithm {\textsf{StocK-Small}}\xspace} \begin{algorithmic}[1] \label{alg:skssmall} \FOR{each item $i$} \STATE \textbf{ignore} $i$ with probability $1-1/4$ (i.e., do not schedule it at all). \label{alg:st:1} \FOR{$0 \leq t \leq B/2$} \STATE \textbf{cancel} item $i$ at this step with probability $\frac{s^*_{i,t}}{v^*_{i,t}} - \frac{\pi_{i,t}}{\sum_{t' \geq t} \pi_{i,t'}}$ and \textbf{continue} to next item. \label{alg:st:2} \STATE process item $i$ for its $(t+1)^{st}$ timestep.
\label{alg:st:4} \IF{item $i$ terminates after being processed for exactly $(t+1)$ timesteps} \STATE \textbf{collect} a reward of $R_{i,t+1}$ from this item; \textbf{continue} onto next item; \label{alg:st:5} \ENDIF \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} Notice that while we let the algorithm proceed even if its budget is violated, we will collect reward only from items that complete before time $B$. This simplifies the analysis a fair bit, both here and for the \ensuremath{\mathsf{MAB}}\xspace algorithm. In \lref[Lemma]{lem:stop-dist} below (proof in \lref[Appendix]{app:small}), we show that for any item that is not dropped in \lref[step]{alg:st:1}, its probability distribution over stopping times is identical to the optimal LP solution $s^*$. We then use this to argue that the expected reward of our algorithm is $\Omega(1)\ensuremath{\mathsf{LPOpt}\xspace}$. \begin{lemma} \label{lem:stop-dist} Consider an item $i$ that was not dropped in \lref[step]{alg:st:1}. Then, for any timestep $t \geq 0$, the following hold: \begin{OneLiners} \item[(i)] The probability (including cancellation \& completion) of stopping at timestep $t$ for item $i$ is $s^*_{i,t}$. \item[(ii)] The probability that item $i$ gets processed for its $(t+1)^{st}$ timestep is exactly $v^*_{i,t+1}$. \item[(iii)] If item $i$ has been processed for $(t+1)$ timesteps, the probability of completing successfully at timestep $(t+1)$ is $\pi_{i,t+1}/\sum_{t' \geq t +1} \pi_{i,t'}$. \end{OneLiners} \end{lemma} \begin{theorem} \label{thm:small} The expected reward of our randomized algorithm is at least $\frac18$ of $\ensuremath{\mathsf{LPOpt}\xspace}$. \end{theorem} \begin{proof} Consider any item $i$. In the worst case, we process it after all other items.
Then the total expected size occupied thus far is at most $\sum_{i' \neq i} \mathbf{1}^{keep}_{i'} \sum_{t \geq 0} t \cdot s^*_{i',t}$, where $\mathbf{1}^{keep}_{i'}$ is the indicator random variable denoting whether item $i'$ is not dropped in \lref[step]{alg:st:1}. Here we have used \lref[Lemma]{lem:stop-dist} to argue that if an item $i'$ is selected, its stopping-time distribution follows $s^*_{i',t}$. Taking expectation over the randomness in \lref[step]{alg:st:1}, the expected space occupied by other jobs is at most $\sum_{i' \neq i} \frac{1}{4} \sum_{t \geq 0} t \cdot s^*_{i',t} \leq \frac{B}{4}$. Markov's inequality implies that this is at most $B/2$ with probability at least $1/2$. In this case, if item $i$ is started (which happens w.p. $1/4$), it runs without violating the knapsack, with expected reward $\sum_{t \geq 1} v^*_{i,t} \cdot \pi_{i,t}/(\sum_{t' \geq t} \pi_{i,t'})$; the total expected reward is then at least $\sum_{i} \frac{1}{8} \sum_{t} v^*_{i,t} \pi_{i,t}/(\sum_{t' \geq t} \pi_{i,t'}) \geq \frac{\ensuremath{\mathsf{LPOpt}\xspace}}{8}$. \end{proof} \subsection{Case II: Jobs with Late Rewards} \label{sec:large} Now we handle instances in which only large-size instantiations of items may fetch reward, i.e., the rewards $R_{i,t}$ of every item $i$ are assumed to be $0$ for $t \leq B/2$. For such instances, we now argue that \emph{cancellation is not helpful}. As a consequence, we can use the results of \lref[Section]{sec:nopmtn} and obtain a constant-factor approximation algorithm! To see why, intuitively, as an algorithm processes a job for its $t^{th}$ timestep for $t < B/2$, it gets no more information about the reward than when starting (since all rewards are at large sizes). Furthermore, there is no benefit in canceling a job once it has run for at least $B/2$ timesteps -- we can't get any reward by starting some other item.
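For a single item the claim is easy to verify directly: a policy that cancels the item once its size exceeds a threshold $c$ collects expected reward $\sum_{t \leq c} \pi_{i,t} R_{i,t}$, which is non-decreasing in $c$, so running to completion dominates. A toy numerical check (ours; the distribution and rewards are hypothetical, not from the paper):

```python
def expected_reward_with_cutoff(pi, R, c):
    """Expected reward if the item is cancelled once its size exceeds c.

    pi[t] = Pr[size = t]; R[t] = reward for completing at size t
    (both hypothetical toy inputs). Reward is collected only when the
    item completes at some size t <= c."""
    return sum(pi.get(t, 0.0) * R.get(t, 0.0) for t in range(1, c + 1))

B = 8
pi = {2: 0.5, 5: 0.25, 8: 0.25}   # toy size distribution
R = {5: 10.0, 8: 4.0}             # late rewards only: R_t = 0 for t <= B/2
curve = [expected_reward_with_cutoff(pi, R, c) for c in range(1, B + 1)]
assert curve == sorted(curve)     # cancelling earlier never helps
```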
More formally, consider a (deterministic) strategy $S$ which in some state makes the decision of scheduling item $i$ and halting its execution if it takes more than $t$ timesteps. First suppose that $t \le B/2$; since this job will never reach a size larger than $B/2$, no reward will be accrued from it, and hence we can change this strategy by skipping the scheduling of $i$ without altering its total reward. Now consider the case where $t > B/2$. Consider the strategy $S'$ which behaves as $S$ except that it does not preempt $i$ in this state but lets $i$ run to completion. We claim that $S'$ obtains at least as much expected reward as $S$. First, whenever item $i$ has size at most $t$, $S$ and $S'$ obtain the same reward. Now suppose that we are in a scenario where $i$ reached size $t > B/2$. Then item $i$ is halted and $S$ cannot obtain any other reward in the future, since no item that can fetch any reward would complete before the budget runs out; in the same situation, strategy $S'$ obtains non-negative rewards. Using this argument we can eliminate all the cancellations of a strategy without decreasing its expected reward. \begin{lemma} There is an optimal solution in this case which does not cancel. \end{lemma} As mentioned earlier, we can now appeal to the results of \lref[Section]{sec:nopmtn} and obtain a constant-factor approximation for the large-size instances. Now we can combine the algorithms that handle the two different scenarios (or choose one at random and run it), and get a constant fraction of the expected reward that an optimal policy fetches. \section{Multi-Armed Bandits} \label{sec:mab} We now turn our attention to the more general Multi-Armed Bandits problem (\ensuremath{\mathsf{MAB}}\xspace).
In this framework, there are $n$ \emph{arms}: arm $i$ has a collection of states denoted by ${\mathcal{S}_i}$, and a starting state $\rho_i \in {\mathcal{S}_i}$; without loss of generality, we assume that ${\mathcal{S}_i} \cap \mathcal{S}_j = \emptyset$ for $i \neq j$. Each arm also has a \emph{transition graph} $T_i$, which is given as a polynomial-size (weighted) directed tree rooted at $\rho_i$; we will relax the tree assumption later. If there is an edge $u \to v$ in $T_i$, then the edge weight $p_{u,v}$ denotes the probability of making a transition from $u$ to $v$ if we play arm $i$ when its current state is node $u$; hence $\sum_{v: (u,v) \in T_i} p_{u,v} =1$. Each time we play an arm, we get a reward whose value depends on the state from which the arm is played. Let us denote the reward at a state $u$ by $r_u$. Recall that the martingale property on rewards requires that $\sum_{v: (u,v) \in T_i} p_{u,v} r_v = r_u$ for all states $u$. {\bf Problem Definition.} For a concrete example, we consider the following budgeted learning problem on \emph{tree transition graphs}. Each of the arms starts at the start state $\rho_i \in {\mathcal{S}_i}$. We get a reward from each of the states we play, and the goal is to maximize the total expected reward, while not exceeding a pre-specified allowed number of plays $B$ across all arms. The framework described below can handle other problems (like the explore/exploit kind) as well, and we discuss this in \lref[Appendix]{xsec:mab}. Note that the Stochastic Knapsack problem considered in the previous section is a special case of this problem where each item corresponds to an arm, where the evolution of the states corresponds to the explored size for the item. Rewards are associated with each stopping size, which can be modeled by end states that can be reached from the states of the corresponding size with the probability of this transition being the probability of the item taking this size.
Thus the resulting trees are paths of length up to the maximum size $B$, with a transition into a reward-bearing end state for each possible item size. For example, the transition graph in \lref[Figure]{fig:redn} corresponds to an item which instantiates to a size of $1$ with probability $1/2$ (and fetches a reward $R_1$), takes size $3$ with probability $1/4$ (with reward $R_3$), and size $4$ with the remaining probability $1/4$ (reward is $R_4$). Notice that the reward on stopping at all intermediate nodes is $0$, and such an instance therefore does not satisfy the martingale property. Even though the rewards are obtained in this example on reaching a state rather than on playing it, it is not hard to modify our methods for this version as well. \begin{figure} \caption{Reducing Stochastic Knapsack to MAB} \label{fig:redn} \end{figure} \paragraph{Notation.} The transition graph $T_i$ for arm $i$ is an out-arborescence defined on the states ${\mathcal{S}_i}$ rooted at $\rho_i$. Let $\mathsf{depth}(u)$ denote the depth of a node $u \in {\mathcal{S}_i}$ in tree $T_i$, where the root $\rho_i$ has depth $0$. The unique parent of node $u$ in $T_i$ is denoted by $\mathsf{parent}(u)$. Let ${\mathcal{S}} = \cup_{i} {\mathcal{S}_i}$ denote the set of all states in the instance, and $\mathsf{arm}(u)$ denote the arm to which state $u$ belongs, i.e., the index $i$ such that $u \in {\mathcal{S}_i}$. Finally, for $u \in {\mathcal{S}_i}$, we refer to the act of playing arm $i$ when it is in state $u$ as ``playing state $u \in {\mathcal{S}_i}$'', or ``playing state $u$'' if the arm is clear in context. \subsection{Global Time-indexed LP} \label{sec:mab-lp} In the following, the variable $z_{u,t} \in [0,1]$ indicates that the algorithm plays state $u \in {\mathcal{S}_i}$ at time $t$.
For state $u \in {\mathcal{S}_i}$ and time $t$, $w_{u,t} \in [0,1]$ indicates that arm $i$ \emph{first enters} state $u$ at time $t$: this happens if and only if the algorithm \emph{played} $\mathsf{parent}(u)$ at time $t-1$ and the arm made a transition into state $u$. \begin{alignat}{2} \tag{$\mathsf{LP}_\mathsf{mab}$} \label{lp:mab} \max \textstyle \sum_{u,t} r_u &\cdot z_{u,t}\\ w_{u,t} &= z_{\mathsf{parent}(u), t-1} \cdot p_{\mathsf{parent}(u),u} & \qquad \forall t \in [2,B],\, u \in {\mathcal{S}} \setminus \cup_{i} \{\rho_i\} \label{eq:mablp1}\\ \textstyle \sum_{t' \le t} w_{u,t'} &\geq \textstyle \sum_{t' \leq t} z_{u,t'} & \qquad \forall t \in [1,B], \, u \in {\mathcal{S}} \label{eq:mablp2}\\ \textstyle \sum_{u \in {\mathcal{S}}} z_{u,t} &\le 1 & \qquad \forall t \in [1,B] \label{eq:mablp3}\\ w_{\rho_i, 1} &= 1 & \qquad \forall i \in [1,n] \label{eq:mablp4} \end{alignat} \begin{lemma} The value of an optimal LP solution $\ensuremath{\mathsf{LPOpt}\xspace}$ is at least $\ensuremath{\mathsf{Opt}\xspace}$, the expected reward of an optimal adaptive strategy. \end{lemma} \begin{proof} We adopt the convention that $\ensuremath{\mathsf{Opt}\xspace}$ starts playing at time $1$. Let $z^*_{u,t}$ denote the probability that $\ensuremath{\mathsf{Opt}\xspace}$ plays state $u$ at time $t$, namely, the probability that arm $\mathsf{arm}(u)$ is in state $u$ at time $t$ and is played at time $t$. Also let $w^*_{u,t}$ denote the probability that $\ensuremath{\mathsf{Opt}\xspace}$ ``enters'' state $u$ at time $t$, and further let $w^*_{\rho_i,1} = 1$ for all $i$. We first show that $\{z^*, w^*\}$ is a feasible solution for \ref{lp:mab} and later argue that its LP objective is at least $\ensuremath{\mathsf{Opt}\xspace}$. Consider constraint \eqref{eq:mablp1} for some $t \in [2, B]$ and $u \in {\mathcal{S}}$.
The probability of entering state $u$ at time $t$ conditioned on $\ensuremath{\mathsf{Opt}\xspace}$ playing state $\mathsf{parent}(u)$ at time $t - 1$ is $p_{\mathsf{parent}(u),u}$. In addition, the probability of entering state $u$ at time $t$ conditioned on $\ensuremath{\mathsf{Opt}\xspace}$ not playing state $\mathsf{parent}(u)$ at time $t - 1$ is zero. Since $z^*_{\mathsf{parent}(u),t-1}$ is the probability that $\ensuremath{\mathsf{Opt}\xspace}$ plays state $\mathsf{parent}(u)$ at time $t - 1$, we remove the conditioning to obtain $w^*_{u,t} = z^*_{\mathsf{parent}(u),t-1} \cdot p_{\mathsf{parent}(u),u}$. Now consider constraint \eqref{eq:mablp2} for some $t \in [1, B]$ and $u \in {\mathcal{S}}$. For any outcome of the algorithm (denoted by a sample path $\sigma$), let $\mathbf{1}^{enter}_{u',t'}$ be the indicator variable that $\ensuremath{\mathsf{Opt}\xspace}$ enters state $u'$ at time $t'$ and let $\mathbf{1}^{play}_{u',t'}$ be the indicator variable that $\ensuremath{\mathsf{Opt}\xspace}$ plays state $u'$ at time $t'$. Since $T_i$ is acyclic, state $u$ is played at most once in $\sigma$ and is also entered at most once in $\sigma$. Moreover, whenever $u$ is played before or at time $t$, it must be that $u$ was also entered before or at time $t$, and hence $\sum_{t' \le t} \mathbf{1}^{play}_{u,t'} \le \sum_{t' \le t} \mathbf{1}^{enter}_{u, t'}$. Taking expectation on both sides and using the fact that $\mathbb{E}[\mathbf{1}^{play}_{u,t'}] = z^*_{u,t'}$ and $\mathbb{E}[\mathbf{1}^{enter}_{u,t'}] = w^*_{u,t'}$, linearity of expectation gives $\sum_{t' \le t} z^*_{u,t'} \le \sum_{t' \le t} w^*_{u,t'}$. To see that constraints \eqref{eq:mablp3} are satisfied, notice that we can play at most one arm (or alternatively one state) in each time step, hence $\sum_{u \in {\mathcal{S}}} \mathbf{1}^{play}_{u,t} \le 1$ holds for all $t \in [1, B]$; the claim then follows by taking expectation on both sides as in the previous paragraph.
Finally, constraint \eqref{eq:mablp4} is satisfied by the definition of the start states. To conclude the proof of the lemma, it suffices to show that $\ensuremath{\mathsf{Opt}\xspace} = \sum_{u,t} r_u \cdot z^*_{u,t}$. Since $\ensuremath{\mathsf{Opt}\xspace}$ obtains reward $r_u$ whenever it plays state $u$, it follows that $\ensuremath{\mathsf{Opt}\xspace}$'s reward is given by $\sum_{u,t} r_u \cdot \mathbf{1}^{play}_{u,t}$; by taking expectation we get $\sum_{u,t} r_u z^*_{u,t} = \ensuremath{\mathsf{Opt}\xspace}$, and hence $\ensuremath{\mathsf{LPOpt}\xspace} \geq \ensuremath{\mathsf{Opt}\xspace}$. \end{proof} \subsection{The Rounding Algorithm} In order to best understand the motivation behind our rounding algorithm, it would be useful to go over the example which illustrates the necessity of preemption (repeatedly switching back and forth between the different arms) in \lref[Appendix]{sec:preemption-gap}. At a high level, the rounding algorithm proceeds as follows. In Phase~I, given an optimal LP solution, we decompose the fractional solution for each arm into a convex\footnote{Strictly speaking, we do not get convex combinations that sum to one; our combinations sum to $\sum_t z_{\rho_i, t}$, the value the LP assigns to playing the root of the arm over all possible start times, which is at most one.} combination of integral ``strategy forests'' (which are depicted in \lref[Figure]{fig:treeforest}): each of these tells us at what times to play the arm, and in which states to abandon the arm. Now, if we sample a random strategy forest for each arm from this distribution, we may end up scheduling multiple arms to play at some of the timesteps, and hence we need to resolve these conflicts. A natural first approach might be to (i) sample a strategy forest for each arm, (ii) play these arms in a random order, and (iii) for any arm follow the decisions (about whether to abort or continue playing) as suggested by the sampled strategy forest.
In essence, we are ignoring the times at which the sampled strategy forest has scheduled the plays of this arm and instead playing this arm continually until the sampled forest abandons it. While such a non-preemptive strategy works when the martingale property holds, the example in \lref[Appendix]{sec:preemption-gap} shows that preemption is unavoidable. Another approach would be to try to play the sampled forests at their prescribed times; if multiple forests want to play at the same time slot, we round-robin over them. The expected number of plays in each timestep is 1, and the hope is that round-robin will not hurt us much. However, if some arm needs $B$ contiguous steps to get to a state with high reward, and a single play of some other arm gets scheduled by bad luck in some timestep, we would end up getting nothing! Guided by these bad examples, we try to use the continuity information in the sampled strategy forests---once we start playing some contiguous component (where the strategy forest plays the arm in every consecutive time step), we play it to the end of the component. The na\"{\i}ve implementation does not work, so we first alter the LP solution to get convex combinations of ``nice'' forests---loosely, these are forests that play the arm contiguously in almost all timesteps, or in at least half the timesteps. This alteration is done in Phase~II, the actual rounding in Phase~III, and the analysis appears in \lref[Section]{sec:phase-iii}. \subsubsection{Phase I: Convex Decomposition} \label{sec:phase-i} In this step, we decompose the fractional solution into a convex combination of ``forest-like strategies'' $\{\mathbb{T}(i,j)\}_{i,j}$, corresponding to the $j^{th}$ forest for arm $i$.
We first formally define what these forests look like: The $j^{th}$ \emph{strategy forest} $\mathbb{T}(i,j)$ for arm $i$ is an assignment of values $\mathsf{time}(i,j,u)$ and $\mathsf{prob}(i,j,u)$ to each state $u \in {\mathcal{S}_i}$ such that: \begin{OneLiners} \item[(i)] For $u \in {\mathcal{S}_i}$ and $v = \mathsf{parent}(u)$, it holds that $\mathsf{time}(i,j,u) \geq 1+ \mathsf{time}(i,j,v)$, and \item[(ii)] For $u \in {\mathcal{S}_i}$ and $v = \mathsf{parent}(u)$, if $\mathsf{time}(i,j,u) \neq \infty$ then $\mathsf{prob}(i,j,u) = p_{v,u}\,\mathsf{prob}(i,j,v)$; else if $\mathsf{time}(i,j,u) = \infty$ then $\mathsf{prob}(i,j,u) = 0$. \end{OneLiners} We call a triple $(i,j,u)$ a \emph{tree-node} of $\mathbb{T}(i,j)$. When $i$ and $j$ are understood from the context, we identify the tree-node $(i,j,u)$ with the state $u$. For any state $u$, the values $\mathsf{time}(i,j,u)$ and $\mathsf{prob}(i,j,u)$ denote the time at which the arm $i$ is played at state $u$, and the probability with which the arm is played, according to the strategy forest $\mathbb{T}(i,j)$.\footnote{When $i$ and $j$ are clear from the context, we will just refer to state $u$ instead of the triple $(i,j,u)$.} The probability values are particularly simple: if $\mathsf{time}(i,j,u) = \infty$ then this strategy does not play the arm at $u$, and hence the probability is zero; otherwise $\mathsf{prob}(i,j,u)$ is equal to the probability of reaching $u$ over the random transitions according to $T_i$ if we play the root with probability $\mathsf{prob}(i,j,\rho_i)$. Hence, we can compute $\mathsf{prob}(i,j,u)$ just given $\mathsf{prob}(i,j, \rho_i)$ and whether or not $\mathsf{time}(i,j,u) = \infty$. Note that the $\mathsf{time}$ values are not necessarily consecutive; plotting them on the timeline and connecting a state to its parent only when they are in consecutive timesteps (as in \lref[Figure]{fig:treeforest}) gives us forests, hence the name.
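The definition amounts to two local checks at every non-root state; the following sketch (with our own encoding of a forest as a parent-pointer dictionary, not taken from the paper) verifies properties (i) and (ii):

```python
INF = float("inf")

def check_strategy_forest(nodes):
    """Check properties (i) and (ii) of a strategy forest.

    nodes: dict mapping state u -> (parent, p_edge, time, prob), where
    parent is None for the root and p_edge is the transition probability
    from parent to u (ignored at the root)."""
    for u, (parent, p_edge, time, prob) in nodes.items():
        if parent is None:
            continue
        _, _, ptime, pprob = nodes[parent]
        assert time >= ptime + 1, f"(i) fails at {u}"        # plays move forward
        if time != INF:
            assert abs(prob - p_edge * pprob) < 1e-9, f"(ii) fails at {u}"
        else:
            assert prob == 0.0, f"(ii) fails at {u}"         # u is never played
    return True
```

A tiny example: a root played with probability $1/2$ at time $1$, a child reached with edge probability $1/2$ and played at time $2$, a sibling that the strategy abandons ($\mathsf{time} = \infty$, $\mathsf{prob} = 0$), and a grandchild played later at time $5$ all pass the checks.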
\begin{figure} \caption{Strategy forests and how to visualize them: grey blobs are connected components.} \label{fig:subfig1} \label{fig:subfig2} \label{fig:treeforest} \end{figure} The algorithm to construct such a decomposition proceeds in rounds for each arm $i$; in a particular round, it ``peels'' off a strategy as described above and ensures that the residual fractional solution continues to satisfy the LP constraints, guaranteeing that we can repeat this process; this is similar to (but slightly more involved than) performing flow decompositions. The decomposition lemma is proved in \lref[Appendix]{sec:details-phase-i}: \begin{lemma} \label{lem:convexppt} Given a solution to~(\ref{lp:mab}), there exists a collection of at most $nB|{\mathcal{S}}|$ strategy forests $\{\mathbb{T}(i,j)\}$ such that $z_{u,t} = \sum_{j:\mathsf{time}(i,j,u) = t} \mathsf{prob}(i,j,u)$.\footnote{To reiterate, even though we call this a convex decomposition, the sum of the probability values of the root state of any arm is at most one by constraint~\ref{eq:mablp3}, and hence the sum of the probabilities of the root over the decomposition could be less than one in general.} Hence, $\sum_{(i, j, u): \mathsf{time}(i,j,u)=t} \mathsf{prob}(i,j,u) \leq 1$ for all $t$. \end{lemma} For any $\mathbb{T}(i,j)$, these $\mathsf{prob}$ values satisfy a ``preflow'' condition: the in-flow at any node $v$ is always at least the out-flow, namely $\mathsf{prob}(i,j,v) \ge \sum_{u: \mathsf{parent}(u)=v} \mathsf{prob}(i,j,u)$. This leads to the following simple but crucial observation. \begin{observation} \label{obs:treeflow} For any arm $i$, for any set of states $X \subseteq {\mathcal{S}_i}$ such that no state in $X$ is an ancestor of another state in $X$ in the transition tree $T_i$, and for any $z \in {\mathcal{S}_i}$ that is an ancestor of all states in $X$, $\mathsf{prob}(i,j,z) \geq \sum_{x \in X} \mathsf{prob}(i,j,x)$.
More generally, given similar conditions on $X$, if $Z$ is a set of states such that for any $x \in X$, there exists $z \in Z$ such that $z$ is an ancestor of $x$, we have $\sum_{z \in Z} \mathsf{prob}(i,j,z) \geq \sum_{x \in X} \mathsf{prob}(i,j,x)$. \end{observation} \subsubsection{Phase II: Eliminating Small Gaps} \label{sec:phase-ii} While \lref[Appendix]{sec:preemption-gap} shows that preemption is necessary to remain competitive with respect to $\ensuremath{\mathsf{Opt}\xspace}$, we also should not get ``tricked'' into switching arms during very short breaks taken by the LP. For example, suppose an arm of length $(B-1)$ is played in two contiguous segments with a gap in the middle. In this case, we should not lose out on profit from this arm by starting some other arms' plays during the break. To handle this issue, whenever some path on the strategy tree is almost contiguous---i.e., gaps on it are relatively small---we make these portions completely contiguous. Note that we will not make the entire tree contiguous, but just combine some sections together. Before we make this formal, here is some useful notation: Given $u \in {\mathcal{S}_i}$, let $\mathsf{Head}(i,j,u)$ be its ancestor node $v \in {\mathcal{S}_i}$ of least depth such that the plays from $v$ through $u$ occur in consecutive $\mathsf{time}$ values. More formally, the path $v = v_1, v_2, \ldots, v_l = u$ in $T_i$ is such that $\mathsf{time}(i,j,v_{l'}) = \mathsf{time}(i,j,v_{l' - 1}) + 1$ for all $l' \in [2, l]$. We also define the \emph{connected component} of a node $u$, denoted by $\mathsf{comp}(i,j,u)$, as the set of all nodes $u'$ such that $\mathsf{Head}(i,j,u) = \mathsf{Head}(i,j,u')$. \lref[Figure]{fig:treeforest} shows the connected components and heads. The main idea of our \emph{gap-filling} procedure is the following: if a head state $v = \mathsf{Head}(i,j,u)$ is played at time $t = \mathsf{time}(i,j,v)$ such that
$t < 2 \cdot \mathsf{depth}(v)$, then we ``advance'' the component $\mathsf{comp}(i,j,v)$ and get rid of the gap between $v$ and its parent (and recursively apply this rule)\footnote{The intuition is that such vertices have only a small gap in their play and should rather be played contiguously.}. The procedure can be described in more detail as follows. \begin{algorithm}[ht!] \caption{Gap Filling Algorithm \textsf{GapFill}} \begin{algorithmic}[1] \label{alg:ridgaps} \FOR{$\tau$ $=$ $B$ to $1$} \WHILE{there exists a tree-node $u \in \mathbb{T}(i,j)$ such that $\tau = \mathsf{time}(\mathsf{Head}(u)) < 2 \cdot \mathsf{depth}(\mathsf{Head}(u))$} \label{alg:gap1} \STATE {\bf let} $v = \mathsf{Head}(u)$. \label{alg:setV} \IF{$v$ is not the root of $\mathbb{T}(i,j)$} \STATE {\bf let} $v' = \mathsf{parent}(v)$. \STATE {\bf advance} the component $\mathsf{comp}(v)$ rooted at $v$ such that $\mathsf{time}(v) \leftarrow \mathsf{time}(v') + 1$, to make $\mathsf{comp}(v)$ contiguous with the ancestor forming one larger component. Also alter the $\mathsf{time}$s of $w \in \mathsf{comp}(v)$ appropriately to maintain contiguity with $v$ (and now with $v'$). \ENDIF \ENDWHILE \label{alg:gap3} \ENDFOR \end{algorithmic} \end{algorithm} One crucial property is that these ``advances'' do not increase the number of plays that occur at any given time $t$ by much. Essentially this is because if for some time slot $t$ we ``advance'' a set of components that were originally scheduled after $t$ to now cross time slot $t$, these components were moved because their ancestor paths (fractionally) used up at least $t/2$ of the time slots before $t$; since there are $t$ time slots to be used up, each to unit extent, there can be at most $2$ units of components being moved up.
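Restricted to a single root-to-leaf chain, the advancing rule can be sketched as follows (ours; \textsf{GapFill} itself operates on whole strategy forests, processing $\tau$ from $B$ down to $1$):

```python
def gap_fill_chain(times):
    """Close 'small' gaps on one root-to-leaf chain of a strategy forest.

    times[d] = play time of the depth-d node (strictly increasing ints).
    A node at depth d is a head if times[d] > times[d-1] + 1; whenever a
    head violates time(head) >= 2 * depth(head), its whole contiguous
    component is advanced to sit flush against its parent."""
    times = list(times)
    while True:
        # find the latest-played violating head (mimicking tau = B..1)
        viol = None
        for d in range(1, len(times)):
            if times[d] > times[d - 1] + 1 and times[d] < 2 * d:
                viol = d
        if viol is None:
            return times
        d = viol
        shift = times[d] - (times[d - 1] + 1)
        # component of d: maximal run of consecutive times starting at d
        e = d
        while e + 1 < len(times) and times[e + 1] == times[e] + 1:
            e += 1
        for k in range(d, e + 1):
            times[k] -= shift
```

For instance, `[1, 2, 3, 4, 7, 8]` becomes `[1, 2, 3, 4, 5, 6]` (the gap is small relative to the depth), while `[1, 2, 9, 10]` is left alone because its head already satisfies $\mathsf{time} \geq 2 \cdot \mathsf{depth}$.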
Hence, in the following, we assume that our $\mathbb{T}$'s satisfy the properties in the following lemma: \begin{lemma} \label{lem:gapfill} Algorithm \textsf{GapFill} produces a modified collection of $\mathbb{T}$'s such that \begin{OneLiners} \item[(i)] For each $i,j, u$ such that $r_u > 0$, $\mathsf{time}(\mathsf{Head}(i,j,u)) \ge 2 \cdot \mathsf{depth}(\mathsf{Head}(i,j,u))$. \item[(ii)] The total extent of plays at any time $t$, i.e., $\sum_{(i,j,u): \mathsf{time}(i,j,u)=t} \mathsf{prob}(i,j,u)$ is at most $3$. \end{OneLiners} \end{lemma} The proof appears in \lref[Appendix]{sec:details-phase-ii}. \subsubsection{Phase III: Scheduling the Arms} \label{sec:phase-iii} Having done the preprocessing, the rounding algorithm is simple: it first randomly selects at most one strategy forest from the collection $\{\mathbb{T}(i,j)\}_j$ for each arm $i$. It then picks the arm with the earliest connected component (i.e., that with smallest $\mathsf{time}(\mathsf{Head}(i,j,u))$) that contains the current state (the root states, to begin with), plays it to the end---which either results in terminating the arm or in making a transition to a state played much later in time---and repeats. The formal description appears in \lref[Algorithm]{alg:roundmab}. (If there are ties in \lref[Step]{alg:mabstep4}, we choose the smallest index.) Note that the algorithm runs as long as there is some active node, regardless of whether or not we have run out of plays (i.e., the budget is exceeded)---however, we only count the profit from the first $B$ plays in the analysis. \newcommand{\mathsf{currstate}}{\mathsf{currstate}} \begin{algorithm}[ht!] \caption{Scheduling the Connected Components: Algorithm \textsf{AlgMAB}} \begin{algorithmic}[1] \label{alg:roundmab} \STATE for arm $i$, \textbf{sample} strategy $\mathbb{T}(i,j)$ with probability $\frac{\mathsf{prob}(i,j,\rho_i)}{24}$; ignore arm $i$ w.p.\ $1 - \sum_{j} \frac{\mathsf{prob}(i,j,\rho_i)}{24}$.
\label{alg:mabstep1} \STATE let $A \gets$ set of ``active'' arms which chose a strategy in the random process. \label{alg:mabstep2} \STATE for each $i \in A$, \textbf{let} $\sigma(i) \gets$ index $j$ of the chosen $\mathbb{T}(i,j)$ and \textbf{let} $\mathsf{currstate}(i) \gets $ root $\rho_i$. \label{alg:mabstep3} \WHILE{active arms $A \neq \emptyset$} \STATE \textbf{let} $i^* \gets$ arm with state played earliest in the LP (i.e., $i^* \gets \operatorname{argmin}_{i \in A} \{ \mathsf{time}(i, \sigma(i), \mathsf{currstate}(i)) \}$). \label{alg:mabstep4} \STATE \textbf{let} $\tau \gets \mathsf{time}(i^*, \sigma(i^*), \mathsf{currstate}(i^*))$. \WHILE{$\mathsf{time}(i^*, \sigma(i^*), \mathsf{currstate}(i^*)) \neq \infty$ \textbf{and} $\mathsf{time}(i^*, \sigma(i^*), \mathsf{currstate}(i^*)) = \tau$} \label{alg:mabLoop} \STATE \textbf{play} arm $i^*$ at state $\mathsf{currstate}(i^*)$ \label{alg:mabPlay} \STATE \textbf{update} $\mathsf{currstate}(i^*)$ to be the new state of arm $i^*$; \textbf{let} $\tau \gets \tau + 1$. \label{alg:mabstep5} \ENDWHILE \label{alg:mabEndLoop} \IF{$\mathsf{time}(i^*, \sigma(i^*), \mathsf{currstate}(i^*)) = \infty$} \label{alg:mabAbandon} \STATE \textbf{let} $A \gets A \setminus \{i^*\}$ \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} Observe that \lref[Steps]{alg:mabLoop}-\ref{alg:mabstep5} play a connected component of a strategy forest contiguously. In particular, this means that all $\mathsf{currstate}(i)$'s considered in \lref[Step]{alg:mabstep4} are head vertices of the corresponding strategy forests. These facts will be crucial in the analysis. \begin{lemma} \label{lem:visitprob} For arm $i$ and strategy $\mathbb{T}(i,j)$, conditioned on $\sigma(i) = j$ after \lref[Step]{alg:mabstep1} of \textsf{AlgMAB}, the probability of playing state $u \in {\mathcal{S}_i}$ is $\mathsf{prob}(i,j,u)/\mathsf{prob}(i,j,\rho_i)$, where the probability is over the random transitions of arm $i$.
\end{lemma} The above lemma is relatively simple, and is proved in \lref[Appendix]{sec:details-phase-iii}. The rest of the section proves that in expectation, we collect a constant factor of the LP reward of each strategy $\mathbb{T}(i,j)$ before running out of budget; the analysis is inspired by our \ensuremath{\mathsf{StocK}}\xspace rounding procedure. We mainly focus on the following lemma. \begin{lemma} \label{lem:beforetime} Consider any arm $i$ and strategy $\mathbb{T}(i,j)$. Then, conditioned on $\sigma(i) = j$ and on the algorithm playing state $u \in {\mathcal{S}_i}$, the probability that this play happens before time $\mathsf{time}(i,j,u)$ is at least $1/2$. \end{lemma} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathbf{v}}{\mathbf{v}} \begin{proof} Fix an arm $i$ and an index $j$ for the rest of the proof. Given a state $u \in {\mathcal{S}_i}$, let $\mathcal{E}_{iju}$ denote the event $(\sigma(i) = j) \wedge (\text{state $u$ is played})$. Also, let $\mathbf{v} = \mathsf{Head}(i,j,u)$ be the head of the connected component containing $u$ in $\mathbb{T}(i,j)$. Let r.v.\ $\tau_u$ (respectively $\tau_\mathbf{v}$) be the actual time at which state $u$ (respectively state $\mathbf{v}$) is played---these random variables take value $\infty$ if the arm is not played in these states. Then \begin{equation} \Pr [ \tau_u \leq \mathsf{time}(i,j,u) \mid \mathcal{E}_{iju} ] \geq \textstyle \frac{1}{2} \iff \Pr [ \tau_\mathbf{v} \leq \mathsf{time}(i,j,\mathbf{v}) \mid \mathcal{E}_{iju} ] \geq \textstyle \frac{1}{2}\label{eq:7}, \end{equation} because the time between playing $u$ and $\mathbf{v}$ is exactly $\mathsf{time}(i,j,u) - \mathsf{time}(i,j,\mathbf{v})$, since the algorithm plays connected components contiguously (and we have conditioned on $\mathcal{E}_{iju}$). Hence, we can just focus on proving the right inequality in~(\ref{eq:7}) for vertex $\mathbf{v}$. For brevity of notation, let $t_\mathbf{v} = \mathsf{time}(i,j,\mathbf{v})$.
In addition, we define the order $\preceq$ to indicate which states can be played before $\mathbf{v}$. That is, again making use of the fact that the algorithm plays connected components contiguously, we say that $(i',j',v') \preceq (i,j,\mathbf{v})$ iff $\mathsf{time}(\mathsf{Head}(i',j',v')) \le \mathsf{time}(\mathsf{Head}(i,j,\mathbf{v}))$. Notice that this order is independent of the run of the algorithm. For each arm $i' \neq i$ and index $j'$, we define random variables $Z_{i',j'}$ used to count the number of plays that can possibly occur before the algorithm plays state $\mathbf{v}$. If $\mathbf{1}_{(i',j',v')}$ is the indicator variable of event $\mathcal{E}_{i'j'v'}$, define \begin{equation} \textstyle Z_{i',j'} = \min \big( t_\mathbf{v} \; , \; \sum_{v': (i',j',v') \preceq (i,j,\mathbf{v})} \mathbf{1}_{(i',j',v')} \big)~.\label{eq:8} \end{equation} We truncate $Z_{i',j'}$ at $t_\mathbf{v}$ because we just want to capture how much time \emph{up to} $t_\mathbf{v}$ is being used. Now consider the sum $Z = \sum_{i' \neq i} \sum_{j'} Z_{i',j'}$. Note that for arm $i'$, at most one of the $Z_{i',j'}$ values will be non-zero in any scenario, namely the one for the index $\sigma(i')$ sampled in \lref[Step]{alg:mabstep1}. The first claim below shows that it suffices to consider the upper tail of~$Z$ and to show that $\Pr[Z \geq t_\mathbf{v}/2] \leq 1/2$; the second gives a bound on the conditional expectation of $Z_{i',j'}$. \begin{claim} \label{cl:sumbound} $\Pr[ \tau_\mathbf{v} \leq t_\mathbf{v} \mid \mathcal{E}_{iju} ] \geq \Pr[ Z \leq t_\mathbf{v}/2 ]$. \end{claim} \begin{proof} We first claim that $\Pr[ \tau_\mathbf{v} \leq t_\mathbf{v} \mid \mathcal{E}_{iju} ] \geq \Pr[ Z \leq t_\mathbf{v}/2 \mid \mathcal{E}_{iju}]$. So, let us condition on $\mathcal{E}_{iju}$.
Then if $Z \leq t_\mathbf{v}/2$, none of the $Z_{i',j'}$ variables were truncated at $t_\mathbf{v}$, and hence $Z$ exactly counts the total number of plays (by all other arms $i' \neq i$, from any state) that could possibly occur before the algorithm plays $\mathbf{v}$ in strategy $\mathbb{T}(i,j)$. Therefore, if $Z$ is smaller than $t_\mathbf{v}/2$, then combining this with the fact that $\mathsf{depth}(\mathbf{v}) \leq t_\mathbf{v}/2$ (from \lref[Lemma]{lem:gapfill}(i)), we can infer that all the plays (including those of $\mathbf{v}$'s ancestors) that can be made before playing $\mathbf{v}$ can indeed be completed within $t_\mathbf{v}$. In this case the algorithm will definitely play $\mathbf{v}$ before $t_\mathbf{v}$; hence we get that conditioned on $\mathcal{E}_{iju}$, the event $\tau_\mathbf{v} \leq t_\mathbf{v}$ holds when $Z \leq t_\mathbf{v}/2$. Finally, to remove the conditioning: note that $Z_{i',j'}$ is just a function of (i) the random variables $\mathbf{1}_{(i',j',v')}$, i.e., the random choices made by playing $\mathbb{T}(i',j')$, and (ii) the constant $t_\mathbf{v} = \mathsf{time}(i,j,\mathbf{v})$. However, the r.v.s $\mathbf{1}_{(i',j',v')}$ are clearly independent of the event $\mathcal{E}_{iju}$ for $i' \neq i$, since the plays of \textsf{AlgMAB} in one arm are independent of the others, and $\mathsf{time}(i,j,\mathbf{v})$ is a constant determined once the strategy forests are created in Phase II. Hence the event $Z \leq t_\mathbf{v}/2$ is independent of $\mathcal{E}_{iju}$, so $\Pr[ Z \leq t_\mathbf{v}/2 \mid \mathcal{E}_{iju}] = \Pr[ Z \leq t_\mathbf{v}/2]$, which completes the proof.
\end{proof} \begin{claim} \label{cl:localexp} \[ {\displaystyle \mathbb{E}[ Z_{i',j'} \, | \, \sigma(i') = j'] \leq \sum_{v'~\textsf{s.t}~ \mathsf{time}(i',j',v') \leq t_\mathbf{v}} \frac{\mathsf{prob}(i',j',v')}{\mathsf{prob}(i',j',\rho_{i'})} + t_\mathbf{v} \left( \sum_{v'~\textsf{s.t}~ \mathsf{time}(i',j',v') = t_\mathbf{v}} \frac{\mathsf{prob}(i',j',v')}{\mathsf{prob}(i',j',\rho_{i'})} \right) } \] \end{claim} \begin{proof} Recall the definition of $Z_{i',j'}$ in Eq~(\ref{eq:8}): any state $v'$ with $\mathsf{time}(i',j',v') > t_\mathbf{v}$ may contribute to the sum only if it is part of a connected component with head $\mathsf{Head}(i',j',v')$ such that $\mathsf{time}(\mathsf{Head}(i',j',v')) \leq t_\mathbf{v}$, by the definition of the ordering $\preceq$. Even among such states, if $\mathsf{time}(i',j',v') > 2t_\mathbf{v}$, then the truncation implies that $Z_{i',j'}$ is unchanged whether or not we include $\mathbf{1}_{(i',j',v')}$ in the sum. Indeed, if $\mathbf{1}_{(i',j',v')} = 1$ then all of $v'$'s ancestors will have their indicator variables at value $1$; moreover $\mathsf{depth}(v') > t_\mathbf{v}$, since a contiguous collection of nodes of this tree $\mathbb{T}(i',j')$ is played from time $t_\mathbf{v}$ onwards until $\mathsf{time}(i',j',v') > 2t_\mathbf{v}$; so the sum would be truncated at value $t_\mathbf{v}$ whenever $\mathbf{1}_{(i',j',v')} = 1$. Therefore, we can write \begin{equation} \label{eq:12} Z_{i',j'} \leq \sum_{v':\mathsf{time}(i',j',v') \leq t_\mathbf{v}} \mathbf{1}_{(i',j',v')} + \sum_{\substack{v': t_\mathbf{v} < \mathsf{time}(i',j',v') \leq 2t_\mathbf{v} \\ (i',j',v') \preceq (i,j,\mathbf{v})} } \mathbf{1}_{(i',j',v')} \end{equation} Recall that we are interested in the conditional expectation given $\sigma(i') = j'$.
Note that $\Pr[\mathbf{1}_{(i',j',v')} \mid \sigma(i') = j'] = \mathsf{prob}(i',j',v')/\mathsf{prob}(i',j',\rho_{i'})$ by \lref[Lemma]{lem:visitprob}, hence the first sum in~(\ref{eq:12}) gives the first part of the claimed bound. Now the second part: observe that for any arm $i'$, any fixed value of $\sigma(i') = j'$, and any value of $t' \geq t_\mathbf{v}$, \[{\displaystyle \sum_{\substack{v'~\textsf{s.t}~\mathsf{time}(i',j',v') = t' \\ (i',j',v') \preceq (i,j,\mathbf{v})}} \mathsf{prob}(i',j',v') \leq \sum_{\substack{ v'~\textsf{s.t}~\mathsf{time}(i',j',v') = t_\mathbf{v} }} \mathsf{prob}(i',j',v') }\] This is because of the following argument: any state that appears in the LHS sum is part of a connected component which crosses $t_\mathbf{v}$, and hence must have an ancestor which is played at $t_\mathbf{v}$. Also, since all states which appear in the LHS are played at $t'$, no state can be an ancestor of another. Hence, we can apply the second part of \lref[Observation]{obs:treeflow} and get the above inequality. Combining this with the fact that $\Pr[\mathbf{1}_{(i',j',v')} \mid \sigma(i') = j'] = \mathsf{prob}(i',j',v')/\mathsf{prob}(i',j',\rho_{i'})$, and applying it for each value of $t' \in (t_\mathbf{v}, 2t_\mathbf{v}]$, gives us the second term. \end{proof} Equipped with the above claims, we are ready to complete the proof of \lref[Lemma]{lem:beforetime}. Employing \lref[Claim]{cl:localexp} we get \begin{align} \mathbb{E}[ Z ] &= \sum_{i' \neq i} \sum_{j'} \mathbb{E}[Z_{i',j'} ] = \sum_{i' \neq i} \sum_{j'} \mathbb{E}[Z_{i',j'} \mid \sigma(i') = j']\cdot \Pr[\sigma(i') = j'] \notag \\ &\leq \frac{1}{24} \sum_{i' \neq i} \sum_{j'} \bigg\{ \sum_{v':\mathsf{time}(i',j',v') \leq t_\mathbf{v}} \mathsf{prob}(i',j',v') + t_\mathbf{v} \bigg( \sum_{v': \mathsf{time}(i',j',v') = t_\mathbf{v}} \mathsf{prob}(i',j',v') \bigg) \bigg\} \label{eq:10}\\ &\leq \frac{1}{24} \left( 3 \cdot t_\mathbf{v} + 3\cdot t_\mathbf{v} \right) = \frac{1}{4} t_\mathbf{v} \;.
\label{eq:11} \end{align} Equation~(\ref{eq:10}) follows from the fact that each tree $\mathbb{T}(i,j)$ is sampled with probability $\frac{\mathsf{prob}(i,j,\rho_i)}{24}$, and (\ref{eq:11}) follows from \lref[Lemma]{lem:gapfill}. Applying Markov's inequality, we have that $\Pr[ Z \geq t_\mathbf{v}/2 ] \leq 1/2$. Finally, \lref[Claim]{cl:sumbound} says that $\Pr[ \tau_\mathbf{v} \leq t_\mathbf{v} \mid \mathcal{E}_{iju} ] \geq \Pr[Z \leq t_\mathbf{v}/2 ] \geq 1/2$, which completes the proof. \end{proof} \begin{theorem} \label{thm:main-mab} The reward obtained by the algorithm~\textsf{AlgMAB} is $\Omega(\ensuremath{\mathsf{LPOpt}\xspace})$. \end{theorem} \begin{proof} The theorem follows by linearity of expectation. Indeed, the expected reward obtained from any state $u \in {\mathcal{S}_i}$ is at least $\sum_{j} \Pr[\sigma(i) = j] \Pr[\textsf{state }~u~\textsf{is played} \mid \sigma(i) = j] \Pr [ \tau_u \leq t_u | \mathcal{E}_{iju}] \cdot R_u \geq \sum_{j} \frac{ \mathsf{prob}(i,j,u)}{24} \frac{1}{2} \cdot R_u$. Here, we have used \lref[Lemmas]{lem:visitprob} and~\ref{lem:beforetime} for the second and third probabilities. But now we can use \lref[Lemma]{lem:convexppt} to infer that $\sum_j \mathsf{prob}(i,j,u) = \sum_t z_{u,t}$; making this substitution and summing over all states $u \in {\mathcal{S}_i}$ and arms $i$ completes the proof. \end{proof} \newcommand{\mathbb{D}}{\mathbb{D}} \newcommand{\mathbb{DT}}{\mathbb{DT}} \newcommand{\mathsf{state}}{\mathsf{state}} \newcommand{\mathsf{root}}{\mathsf{root}} \newcommand{\mathsf{currnode}}{\mathsf{currnode}} \section{MABs with Arbitrary Transition Graphs} \label{dsec:mab} We now show how to use techniques akin to those described for tree transition graphs to handle the case when the transition graph is an arbitrary directed graph. A na\"{\i}ve way to do this is to expand out the transition graph as a tree, but this incurs an exponential blowup of the state space, which we want to avoid.
We can assume we have a layered DAG, though, since the conversion from a digraph to a layered DAG only increases the state space by a factor of the horizon $B$; this standard reduction appears in \lref[Appendix]{dsec:layered-enough}. While we can again write an LP relaxation of the problem for layered DAGs, the challenge arises in the rounding algorithm: specifically, in (i) obtaining the convex decomposition of the LP solution as in Phase~I, and (ii) eliminating small gaps as in Phase~II by advancing forests in the strategy. \begin{itemize} \item We handle the first difficulty by considering convex decompositions not just over strategy forests, but over slightly more sophisticated strategy DAGs. Recall (from \lref[Figure]{fig:treeforest}) that in the tree case, each state in a strategy forest was labeled by a unique time and a unique probability associated with that time step. As the name suggests, we now have labeled DAGs---but the change is more than just that. Now each state has a copy associated with \emph{each} time step in $\{1, \ldots, B\}$. This change captures the fact that our strategy may play from a particular state $u$ at different times, depending on the path of random transitions used to reach this state. (This path was unique in the tree case.) \item Having sampled a strategy DAG for each arm, one can expand the DAGs out into strategy forests (albeit with an exponential blow-up in size), and use Phases~II and~III from our previous algorithm---it is not difficult to prove that this algorithm is a constant-factor approximation. However, this is not a polynomial-time algorithm, since the strategy forests may be exponentially large; and if we do not expand the DAG, it is unclear how to define gap elimination for Phase~II.
But we observe that instead of explicitly performing the advance steps in Phase~II, it suffices to perform them as a \emph{thought experiment}---i.e., to not alter the strategy forest at all, but merely to infer when these advances would have happened, and to play accordingly in Phase~III~\footnote{This is similar to the idea of lazy evaluation of strategies. The DAG contains an implicit randomized strategy, which we make explicit as we toss coins for the various outcomes during the execution of the algorithm.}. Using this, we can give an algorithm that plays just on the DAG, and argue that the sequence of plays made by our DAG algorithm faithfully mimics the execution if we had constructed the exponential-size tree from the DAG, and executed Phases~II and~III on that tree. \end{itemize} The details of the LP rounding algorithm for layered DAGs follow in \lref[Sections]{dsec:lp-dag}-\ref{dsec:phase-iii}. \subsection{LP Relaxation} \label{dsec:lp-dag} There is only one change in the LP---constraint~\eqref{eq:mabdaglp1} now says that if a state $u$ is visited at time $t$, then one of its parent states must have been played at time $t-1$; this parent was unique in the case of trees. \begin{alignat}{2} \tag{$\mathsf{LP}_\mathsf{mabdag}$} \label{lp:mabdag} \max \textstyle \sum_{u,t} r_u &\cdot z_{u,t}\\ w_{u,t} &= \sum_{v} z_{v, t-1} \cdot p_{v,u} & \qquad \forall t \in [2,B],\, u \in {\mathcal{S}} \setminus \cup_{i} \{\rho_i\} \label{eq:mabdaglp1}\\ \textstyle \sum_{t' \le t} w_{u,t'} &\geq \textstyle \sum_{t' \leq t} z_{u,t'} & \qquad \forall t \in [1,B], \, u \in {\mathcal{S}} \label{eq:mabdaglp2}\\ \textstyle \sum_{u \in {\mathcal{S}}} z_{u,t} &\le 1 & \qquad \forall t \in [1,B] \label{eq:mabdaglp3}\\ w_{\rho_i, 1} &= 1 & \qquad \forall i \in [1,n] \label{eq:mabdaglp4} \end{alignat} Again, an analysis similar to the tree case shows that this is a valid relaxation, and hence the LP value is at least the optimal expected reward.
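As a concrete sanity check on the four constraint families above, the following Python sketch verifies that a candidate fractional solution $(z,w)$ is feasible for a layered-DAG instance. The dictionary encoding (states as hashable ids, `p[v][u]` for $p_{v,u}$, and $z,w$ keyed by (state, time)) is our own illustration.

```python
# Feasibility check for the layered-DAG LP above. States are hashable ids,
# p[v][u] is the transition probability from v to u, and z, w are dicts
# keyed by (state, time). This encoding is ours, for illustration only.

def is_feasible(z, w, p, roots, states, B, tol=1e-9):
    # Flow: w_{u,t} equals the play mass entering u from time t-1.
    for t in range(2, B + 1):
        for u in states:
            if u in roots:
                continue
            inflow = sum(z[v, t - 1] * p.get(v, {}).get(u, 0.0) for v in states)
            if abs(w[u, t] - inflow) > tol:
                return False
    # Prefix: u must be visited at least as much as it is played, at all t.
    for u in states:
        for t in range(1, B + 1):
            visited = sum(w[u, s] for s in range(1, t + 1))
            played = sum(z[u, s] for s in range(1, t + 1))
            if visited + tol < played:
                return False
    # Capacity: at most one unit of play per time step, across all arms.
    if any(sum(z[u, t] for u in states) > 1 + tol for t in range(1, B + 1)):
        return False
    # Each arm's root is visited at time 1.
    return all(abs(w[r, 1] - 1.0) <= tol for r in roots)
```

For example, a single arm with root $\rho$ and one child $u$ (with $p_{\rho,u}=1$, $B=2$) is feasible when $\rho$ is played at time $1$ and $u$ at time $2$; playing $u$ at time $1$, before it is visited, violates the prefix constraint.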
\subsection{Convex Decomposition: The Altered Phase~I} \label{dsec:phase-i} This is the step which changes the most---we need to incorporate the notion of peeling out a ``strategy DAG'' instead of just a tree. The main complication arises from the fact that a play of a state $u$ may occur at different times in the LP solution, depending on the path to reach state $u$ in the transition DAG. However, we don't need to keep track of the entire history used to reach $u$, just how much time has elapsed so far. With this in mind, we create $B$ copies of each state $u$ (which will be our nodes in the strategy DAG), indexed by $(u,t)$ for $1 \leq t \leq B$. The $j^{th}$ \emph{strategy dag} $\mathbb{D}(i,j)$ for arm $i$ is an assignment of values $\mathsf{prob}(i,j,u,t)$ and a relation `$\rightarrow$' from 4-tuples to 4-tuples of the form $(i,j,u,t) \rightarrow (i,j,v,t')$ such that the following properties hold: \begin{OneLiners} \item[(i)] For $u,v \in {\mathcal{S}_i}$ such that $p_{u,v} > 0$ and any time $t$, there is exactly one time $t' \geq t+1$ such that $(i,j,u,t) \rightarrow (i,j,v,t')$. Intuitively, this says if the arm is played from state $u$ at time $t$ and it transitions to state $v$, then it is played from $v$ at a unique time $t'$, if it played at all. If $t' = \infty$, the play from $v$ never happens. \item[(ii)] For any $u \in {\mathcal{S}_i}$ and time $t \neq \infty$, $\mathsf{prob}(i,j,u,t) = \sum_{(v,t')~\mathsf{s.t}~(i,j,v,t')\rightarrow(i,j,u,t)} \mathsf{prob}(i,j,v,t') \cdot p_{v,u}$. \end{OneLiners} For clarity, we use the following notation throughout the remainder of the section: \emph{states} refer to the states in the original transition DAG, and \emph{nodes} correspond to the tuples $(i,j,u,t)$ in the strategy DAGs. When $i$ and $j$ are clear in context, we may simply refer to a node of the strategy DAG by $(u,t)$. Equipped with the above definition, our convex decomposition procedure appears in \lref[Algorithm]{dalg:dconvex}. 
The main subroutine involved is presented first~(\lref[Algorithm]{dalg:convex-sub}). This subroutine, given a fractional solution, identifies the structure of the DAG that will be peeled out, depending on when the different states are first played fractionally in the LP solution. Since we have a layered DAG, the notion of the \emph{depth} of a state is well-defined as the number of hops from the root to this state in the DAG, with the depth of the root being $0$. \newcommand{\textsf{PeelStrat}\xspace}{\textsf{PeelStrat}\xspace} \newcommand{\mathsf{peelProb}}{\mathsf{peelProb}} \begin{algorithm}[ht!] \caption{Sub-Routine \textsf{PeelStrat}\xspace(i,j)} \begin{algorithmic}[1] \label{dalg:convex-sub} \STATE {\bf mark} $(\rho_i,t)$ where $t$ is the earliest time s.t.\ $z_{\rho_i,t} > 0$ and set $\mathsf{peelProb}(\rho_i,t) = 1$. All other nodes are un-marked and have $\mathsf{peelProb}(v,t') = 0$. \WHILE {$\exists$ a marked unvisited node} \STATE {\bf let} $(u,t)$ denote the marked node of smallest depth and earliest time; {\bf update} its status to visited. \FOR {every $v$ s.t.\ $p_{u,v} > 0$} \IF{there is $t'$ such that $z_{v,t'} > 0$, consider the earliest such $t'$ and} \STATE {\bf mark} $(v,t')$ and {\bf set} $(i,j,u,t) \rightarrow (i,j,v,t')$; {\bf update} $\mathsf{peelProb}(v,t') := \mathsf{peelProb}(v,t') + \mathsf{peelProb}(u,t)\cdot p_{u,v}$. \label{dalg:peel3} \ELSE \STATE {\bf set} $(i,j,u,t) \rightarrow (i,j,v,\infty)$ and leave $\mathsf{peelProb}(v,\infty) = 0$. \ENDIF \ENDFOR \ENDWHILE \end{algorithmic} \end{algorithm} The convex decomposition algorithm is now very easy to describe with the sub-routine in \lref[Algorithm]{dalg:convex-sub} in hand. \begin{algorithm}[ht!] \caption{Convex Decomposition of Arm $i$} \begin{algorithmic}[1] \label{dalg:dconvex} \STATE {\bf set} ${\cal C}_i \leftarrow \emptyset$ and {\bf set loop index} $j \leftarrow 1$. 
\WHILE {$\exists$ a state $u \in {\mathcal{S}_i}$ s.t.\ $\sum_{t} z^{j-1}_{u,t} > 0$} \label{dalg:convex1} \STATE {\bf run} sub-routine \textsf{PeelStrat}\xspace to extract a DAG $\mathbb{D}(i,j)$ with the appropriate $\mathsf{peelProb}(u,t)$ values. \STATE {\bf let} $A \leftarrow \{(u,t)~\mathsf{s.t}~\mathsf{peelProb}(u,t) \neq 0\}$. \STATE {\bf let} $\epsilon = \min_{(u,t) \in A} z^{j-1}_{u,t}/\mathsf{peelProb}(u,t)$. \label{dalg:convex3} \FOR{ every $(u,t)$} \label{dalg:convex3a} \STATE {\bf set} $\mathsf{prob}(i,j,u,t) = \epsilon \cdot \mathsf{peelProb}(u,t)$. \label{dalg:convex4} \STATE {\bf update} $z^j_{u,t} = z^{j-1}_{u, t} - \mathsf{prob}(i,j,u,t)$. \label{dalg:convex5} \STATE {\bf update} $w^j_{v, t+1} = w^{j-1}_{v, t+1} - \mathsf{prob}(i,j,u,t) \cdot p_{u,v}$ for all $v$. \label{dalg:convex6} \ENDFOR \STATE {\bf set} ${\cal C}_i \leftarrow {\cal C}_i \cup \mathbb{D}(i,j)$. \label{dalg:convex7} \STATE {\bf increment} $j \leftarrow j + 1$. \ENDWHILE \end{algorithmic} \end{algorithm} An illustration of a particular DAG and a strategy dag $\mathbb{D}(i,j)$ peeled off is given in \lref[Figure]{dfig:dag} (notice that the states $w$, $y$ and $z$ appear more than once, depending on the path taken to reach them). \begin{figure} \caption{Strategy dags and how to visualize them: notice the same state played at different times.} \label{dfig:subfig1} \label{dfig:subfig2} \label{dfig:dag} \end{figure} Now we analyze the solutions $\{z^j, w^j\}$ created by \lref[Algorithm]{dalg:dconvex}. \begin{lemma} \label{dlem:convexstep} Consider an integer $j$ and suppose that $\{z^{j-1}, w^{j-1}\}$ satisfies constraints~\eqref{eq:mabdaglp1}-\eqref{eq:mabdaglp3} of \ref{lp:mabdag}. Then after iteration $j$ of \lref[Step]{dalg:convex1}, the following properties hold: \begin{enumerate} \item[(a)] $\mathbb{D}(i,j)$ (along with the associated $\mathsf{prob}(i,j,.,.)$ values) is a valid strategy dag, i.e., satisfies the conditions (i) and (ii) presented above.
\item[(b)] The residual solution $\{z^j, w^j\}$ satisfies constraints~\eqref{eq:mabdaglp1}-\eqref{eq:mabdaglp3}. \item[(c)] For any time $t$ and state $u \in {\mathcal{S}_i}$, $z^{j-1}_{u,t} - z^{j}_{u,t} = \mathsf{prob}(i,j,u,t)$. \end{enumerate} \end{lemma} \begin{proof} We show the properties stated above one by one. \noindent {\bf Property (a):} This follows from the construction of \lref[Algorithm]{dalg:convex-sub}. More precisely, condition (i) is satisfied because in \lref[Algorithm]{dalg:convex-sub} each $(u,t)$ is visited at most once, and that is the only time when a pair $(u,t) \rightarrow (v, t')$ (with $t' \ge t + 1$) is added to the relation. For condition (ii), notice that every time a pair $(u,t) \rightarrow (v, t')$ is added to the relation we keep the invariant $\mathsf{peelProb}(v, t') = \sum_{(w,\tau)~\mathsf{s.t}~(i,j,w,\tau) \rightarrow (i,j,v,t')} \mathsf{peelProb}(w, \tau) \cdot p_{w,v}$; condition (ii) then follows since $\mathsf{prob}(.)$ is a scaling of $\mathsf{peelProb}(.)$. \noindent {\bf Property (b):} Constraint~\eqref{eq:mabdaglp1} of \ref{lp:mabdag} is clearly satisfied by the new LP solution $\{z^j, w^j\}$ because of the two updates performed in \lref[Steps]{dalg:convex5} and~\ref{dalg:convex6}: if we decrease the $z$ value of any state at any time, the $w$ values of all children are appropriately reduced for the subsequent timestep. Before showing that the solution $\{z^j, w^j\}$ satisfies constraint~\eqref{eq:mabdaglp2}, we first argue that after every round of the procedure the variables remain non-negative. By the choice of $\epsilon$ in \lref[step]{dalg:convex3}, we have $\mathsf{prob}(i,j,u,t) = \epsilon \cdot \mathsf{peelProb}(u,t) \leq \frac{z^{j-1}_{u,t}}{\mathsf{peelProb}(u,t)}\mathsf{peelProb}(u,t) = z^{j-1}_{u,t}$ (notice that this inequality holds even if $\mathsf{peelProb}(u,t) = 0$); consequently, even after the update in \lref[step]{dalg:convex5}, $z^{j}_{u,t} \geq 0$ for all $u,t$.
This and the fact that constraints~(\ref{eq:mabdaglp1}) are satisfied imply that $\{z^j, w^j\}$ satisfies the non-negativity requirement. We now show that constraint~\eqref{eq:mabdaglp2} is satisfied. Suppose for the sake of contradiction that there exist some $u \in {\mathcal{S}}$ and $t \in [1,B]$ such that $\{z^j, w^j\}$ violates this constraint. Then, let us consider any such $u$ and the earliest time $t_u$ such that the constraint is violated. For such a $u$, let $t'_u \leq t_u$ be the latest time (up to and including $t_u$) where $z^{j-1}_{u,t'_u} > 0$. We now consider two cases. {\bf Case (i): $t'_u < t_u$}. This is the simpler case of the two. Because $t_u$ was the earliest time where constraint~\eqref{eq:mabdaglp2} was violated, we know that $\sum_{t' \leq t'_u} w^{j}_{u,t'} \geq \sum_{t' \leq t'_u} z^{j}_{u,t'}$. Furthermore, since $z_{u,t}$ is never increased during the course of the algorithm, we know that $\sum_{t' = t'_u +1}^{t_u} z^{j}_{u,t'} = 0$. This fact, coupled with the non-negativity of $w^j_{u,t}$, implies that the constraint in fact is not violated, which contradicts our assumption about the tuple $u,t_u$. {\bf Case (ii): $t'_u = t_u$}. In this case, observe that there cannot be any pair of tuples $(v,t_1) \rightarrow (u,t_2)$ s.t.\ $t_1 < t_u$ and $t_2 > t_u$, because any copy of $v$ (some ancestor of $u$) that is played before $t_u$ will mark a copy of $u$ that occurs before $t_u$, or the one being played at $t_u$, in \lref[Step]{dalg:peel3} of \textsf{PeelStrat}\xspace. We will now show that, summed over all $t' \leq t_u$, the decrease in the LHS is counter-balanced by a corresponding drop in the RHS, between the solutions $\{z^{j-1}, w^{j-1}\}$ and $\{z^{j}, w^{j}\}$, for the instance of constraint~\eqref{eq:mabdaglp2} corresponding to $u$ and $t_u$.
To this end, notice that the only times when $w_{u,t'}$ is updated (in \lref[Step]{dalg:convex6}) for $t' \leq t_u$ are when considering some $(v,t_1)$ in \lref[Step]{dalg:convex3a} such that $(v,t_1) \rightarrow (u,t_2)$ and $t_1 < t_2 \leq t_u$. The value of $w_{u, t_1+1}$ drops by exactly $\mathsf{prob}(i,j,v,t_1) \cdot p_{v,u}$. But notice that the corresponding term $z_{u,t_2}$ drops by $\mathsf{prob}(i,j,u,t_2) = \sum_{(v'',t'')~\mathsf{s.t}~(v'',t'')\rightarrow (u,t_2)} \mathsf{prob}(i,j,v'',t'') \cdot p_{v'',u}$. Therefore, the total drop in $w$ is balanced by a commensurate drop in $z$ on the RHS. Finally, constraint~\eqref{eq:mabdaglp3} is also satisfied as the $z$ variables only decrease in value. \noindent {\bf Property (c):} This is an immediate consequence of \lref[Step]{dalg:convex5} of the convex decomposition algorithm. \end{proof} As a consequence of the above lemma, we get the following. \begin{lemma} \label{dlem:convexppt} Given a solution to~(\ref{lp:mabdag}), there exists a collection of at most $nB^2|{\mathcal{S}}|$ strategy dags $\{\mathbb{D}(i,j)\}$ such that $z_{u,t} = \sum_{j} \mathsf{prob}(i,j,u,t)$. Hence, $\sum_{(i, j, u)} \mathsf{prob}(i,j,u,t) \leq 1$ for all $t$. \end{lemma} \subsection{Phases II and III} \label{dsec:phase-iii} We now show how to execute the strategy dags $\mathbb{D}(i,j)$. At a high level, the development mirrors that of \lref[Sections]{sec:phase-ii} and \ref{sec:phase-iii}. First we transform $\mathbb{D}(i,j)$ into a (possibly exponentially large) blown-up tree, and show that playing these trees exactly captures playing the strategy dags. Hence (if running time is not a concern) we can simply perform the gap-filling algorithm and make plays on these blown-up trees, following Phases II and III in \lref[Sections]{sec:phase-ii} and~\ref{sec:phase-iii}.
To achieve polynomial running time, we then show that we can \emph{implicitly execute} the gap-filling phase while playing this tree, thus avoiding an explicit execution of \lref[Phase]{sec:phase-ii}. Finally, to complete our argument, we show that we need not explicitly construct the blown-up tree either: we can generate the required portions \emph{on demand}, depending on the transitions made thus far. \subsubsection{Transforming the DAG into a Tree} Consider any strategy dag $\mathbb{D}(i,j)$. We first transform this dag into a (possibly exponential-size) tree by making as many copies of a node $(i,j,u,t)$ as there are paths from the root to $(i,j,u,t)$ in $\mathbb{D}(i,j)$. More formally, define $\mathbb{DT}(i,j)$ as the tree whose vertices are the simple paths in $\mathbb{D}(i,j)$ which start at the root. To avoid confusion, we explicitly refer to vertices of the tree $\mathbb{DT}$ as tree-nodes, as distinguished from the \emph{nodes} in $\mathbb{D}$; to simplify the notation we identify each tree-node in $\mathbb{DT}$ with its corresponding path in $\mathbb{D}$. Given two tree-nodes $P, P'$ in $\mathbb{DT}(i,j)$, add an arc from $P$ to $P'$ if $P'$ is an immediate extension of $P$, i.e., if $P$ corresponds to some path $(i,j,u_1, t_1) \rightarrow \ldots \rightarrow (i,j,u_k, t_k)$ in $\mathbb{D}(i,j)$, then $P'$ is a path $(i,j,u_1,t_1) \rightarrow \ldots \rightarrow (i,j,u_k,t_k) \rightarrow (i,j,u_{k+1},t_{k+1})$ for some node $(i,j,u_{k+1},t_{k+1})$. For a tree-node $P \in \mathbb{DT}(i,j)$ which corresponds to the path $(i,j,u_1, t_1) \rightarrow \ldots \rightarrow (i,j,u_k,t_k)$ in $\mathbb{D}(i,j)$, we define $\mathsf{state}(P) = u_k$, i.e., $\mathsf{state}(\cdot)$ denotes the final state (in ${\mathcal{S}}_i$) in the path $P$.
Now, for tree-node $P \in \mathbb{DT}(i,j)$, if $u_1, \ldots, u_k$ are the children of $\mathsf{state}(P)$ in ${\mathcal{S}_i}$ with positive transition probability from $\mathsf{state}(P)$, then $P$ has exactly $k$ children $P_1, \ldots, P_k$ with $\mathsf{state}(P_l)$ equal to $u_l$ for all $l \in [k]$. The \emph{depth} of a tree-node $P$ is defined as the depth of $\mathsf{state}(P)$. We now define the quantities $\mathsf{time}$ and $\mathsf{prob}$ for tree-nodes in $\mathbb{DT}(i,j)$. Let $P$ be a path in $\mathbb{D}(i,j)$ from $\rho_i$ to node $(i,j,u,t)$. We define $\mathsf{time}(P) := t$ and $\mathsf{prob}(P) := \mathsf{prob}(P') p_{(\mathsf{state}(P'),u)}$, where $P'$ is obtained by dropping the last node from $P$. The blown-up tree $\mathbb{DT}(i,j)$ of our running example $\mathbb{D}(i,j)$ (\lref[Figure]{dfig:dag}) is given in \lref[Figure]{dfig:blown-up}. \begin{lemma} For any state $u$ and time $t$, $\sum_{P~\mathsf{s.t}~\mathsf{time}(P) = t~\mathsf{and}~\mathsf{state}(P)=u} \mathsf{prob}(P) = \mathsf{prob}(i,j,u,t)$. \end{lemma} \begin{figure} \caption{Blown-up Strategy Forest $\mathbb{DT}(i,j)$} \label{dfig:blown-up} \end{figure} Now that we have a tree labeled with $\mathsf{prob}$ and $\mathsf{time}$ values, the notions of connected components and heads from Section~\ref{sec:phase-ii} carry over. Specifically, we define $\mathsf{Head}(P)$ to be the ancestor $P'$ of $P$ in $\mathbb{DT}(i,j)$ with least depth such that there is a path $(P' = P_1 \rightarrow \ldots \rightarrow P_l = P)$ satisfying $\mathsf{time}(P_i) = \mathsf{time}(P_{i-1}) + 1$ for all $i \in [2,l]$, i.e., the plays are made contiguously from $\mathsf{Head}(P)$ to $P$ in the blown-up tree. We also define $\mathsf{comp}(P)$ as the set of all tree-nodes $P'$ such that $\mathsf{Head}(P) = \mathsf{Head}(P')$. In order to play the strategies $\mathbb{DT}(i,j)$ we first eliminate small gaps. 
The algorithm \textsf{GapFill} presented in \lref[Section]{sec:phase-ii} can be employed for this purpose, and returns trees $\mathbb{DT}'(i,j)$ which satisfy the analog of \lref[Lemma]{lem:gapfill}. \begin{lemma} \label{lem:gapfillDAG} The trees returned by \textsf{GapFill} satisfy the following properties. \begin{OneLiners} \item[(i)] For each tree-node $P$ such that $r_{\mathsf{state}(P)} > 0$, $\mathsf{time}(\mathsf{Head}(P)) \ge 2 \cdot \mathsf{depth}(\mathsf{Head}(P))$. \item[(ii)] The total extent of plays at any time $t$, i.e., $\sum_{P: \mathsf{time}(P)=t} \mathsf{prob}(P)$ is at most $3$. \end{OneLiners} \end{lemma} Now we use \lref[Algorithm]{alg:roundmab} to play the trees $\mathbb{DT}(i,j)$. We restate the algorithm to conform with the notation used in the trees $\mathbb{DT}(i,j)$. \begin{algorithm}[ht!] \caption{Scheduling the Connected Components: Algorithm \textsf{AlgDAG}} \begin{algorithmic}[1] \label{alg:roundmabDAG} \STATE for arm $i$, \textbf{sample} strategy $\mathbb{DT}(i,j)$ with probability $\frac{\mathsf{prob}(\mathsf{root}(\mathbb{DT}(i,j)))}{24}$; ignore arm $i$ w.p.\ $1 - \sum_{j} \frac{\mathsf{prob}(\mathsf{root}(\mathbb{DT}(i,j)))}{24}$. \label{alg:mabstep1DAG} \STATE let $A \gets$ set of ``active'' arms which chose a strategy in the random process. \label{alg:mabstep2DAG} \STATE for each $i \in A$, \textbf{let} $\sigma(i) \gets$ index $j$ of the chosen $\mathbb{DT}(i,j)$ and \textbf{let} $\mathsf{currnode}(i) \gets $ root of $\mathbb{DT}(i,\sigma(i))$. \label{alg:mabstep3DAG} \WHILE{active arms $A \neq \emptyset$} \STATE \textbf{let} $i^* \gets$ arm with tree-node played earliest (i.e., $i^* \gets \operatorname{argmin}_{i \in A} \{ \mathsf{time}(\mathsf{currnode}(i)) \}$). \label{alg:mabstep4DAG} \STATE \textbf{let} $\tau \gets \mathsf{time}(\mathsf{currnode}(i^*))$.
\WHILE{$\mathsf{time}(\mathsf{currnode}(i^*)) \neq \infty$ \textbf{and} $\mathsf{time}(\mathsf{currnode}(i^*)) = \tau$} \label{alg:mabLoopDAG} \STATE \textbf{play} arm $i^*$ at state $\mathsf{state}(\mathsf{currnode}(i^*))$ \label{alg:mabPlayDAG} \STATE \textbf{let} $u$ be the new state of arm $i^*$ and \textbf{let} $P$ be the child of $\mathsf{currnode}(i^*)$ satisfying $\mathsf{state}(P) = u$. \STATE \textbf{update} $\mathsf{currnode}(i^*)$ to be $P$; \textbf{let} $\tau \gets \tau + 1$. \label{alg:mabstep5DAG} \ENDWHILE \label{alg:mabEndLoopDAG} \IF{$\mathsf{time}(\mathsf{currnode}(i^*)) = \infty$} \label{alg:mabAbandonDAG} \STATE \textbf{let} $A \gets A \setminus \{i^*\}$ \ENDIF \ENDWHILE \end{algorithmic} \end{algorithm} Now an argument identical to that for Theorem~\ref{thm:main-mab} gives us the following: \begin{theorem} \label{thm:main-mabDAG} The reward obtained by the algorithm~\textsf{AlgDAG} is at least a constant fraction of the optimum for \eqref{lp:mabdag}. \end{theorem} \subsubsection{Implicit gap filling} Our next goal is to execute \textsf{GapFill} implicitly, that is, to incorporate the gap-filling within Algorithm~\textsf{AlgDAG} without having to explicitly perform the advances. To do this, let us review some properties of the trees returned by \textsf{GapFill}. For a tree-node $P$ in $\mathbb{DT}(i,j)$, let $\mathsf{time}(P)$ denote the associated time in the original tree (i.e., before the application of \textsf{GapFill}) and let $\mathsf{time}'(P)$ denote the time in the modified tree (i.e., after $\mathbb{DT}(i,j)$ is modified by \textsf{GapFill}). \begin{claim} \label{cl:gapppt} For a non-root tree-node $P$ and its parent $P'$, $\mathsf{time}'(P) = \mathsf{time}'(P') + 1$ if and only if, either $\mathsf{time}(P) = \mathsf{time}(P') + 1$ or $2 \cdot \mathsf{depth}(P) > \mathsf{time}(P)$. \end{claim} \begin{proof} Let us consider the forward direction. 
Suppose $\mathsf{time}'(P) = \mathsf{time}'(P') + 1$ but $\mathsf{time}(P) > \mathsf{time}(P') + 1$. Then $P$ must have been the head of its component in the original tree and an \textbf{advance} was performed on it, so we must have $2 \cdot \mathsf{depth}(P) > \mathsf{time}(P)$.
For the reverse direction, if $\mathsf{time}(P) = \mathsf{time}(P') + 1$ then $P$ could not have been a head since it belongs to the same component as $P'$, and hence it will always remain in the same component as $P'$ (as \textsf{GapFill} only merges components and never breaks them apart). Therefore, $\mathsf{time}'(P) = \mathsf{time}'(P') + 1$. On the other hand, if $\mathsf{time}(P) > \mathsf{time}(P') + 1$ and $2 \cdot \mathsf{depth}(P) > \mathsf{time}(P)$, then $P$ was a head in the original tree, and because of the above criterion, \textsf{GapFill} must have made an advance on $P'$ thereby including it in the same component as $P$; so again it is easy to see that $\mathsf{time}'(P) = \mathsf{time}'(P') + 1$.
\end{proof}
The crucial point here is that whether or not $P$ is in the same component as its predecessor after the gap-filling (and, consequently, whether it was played contiguously along with its predecessor should that transition happen in~\textsf{AlgDAG}) can be inferred from the $\mathsf{time}$ values of $P, P'$ before gap-filling and from the depth of $P$---it does not depend on any other \textbf{advance}s that happen during the gap-filling. Algorithm~\ref{alg:implicitFill} is a procedure which plays the original trees $\mathbb{DT}(i,j)$ while implicitly performing the \textbf{advance} steps of \textsf{GapFill} (by checking if the properties of Claim~\ref{cl:gapppt} hold). This change is reflected in \lref[Step]{impalg:fill}, where we may play a node even if it is not contiguous, so long as it satisfies the above-stated properties.
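In code, the claim turns contiguity after gap filling into a purely local predicate on the pre-gap-filling labels; a minimal sketch (function and argument names are ours):

```python
def played_contiguously(time_P, time_parent, depth_P):
    """Local test from the claim: after GapFill, tree-node P is played
    immediately after its parent P' (time'(P) = time'(P') + 1) iff
    either P already followed P' immediately before gap filling, or
    P was a head violating the 2*depth condition, which forces an
    advance that merges the two components."""
    return time_P == time_parent + 1 or 2 * depth_P > time_P
```

This is precisely the condition tested in the inner while-loop of Algorithm~\textsf{ImplicitFill}.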
Therefore, as a consequence of Claim~\ref{cl:gapppt}, we get the following lemma, which states that the plays made by \textsf{ImplicitFill} are identical to those made by \textsf{AlgDAG} after running \textsf{GapFill}.
\begin{algorithm}[ht!]
\caption{Filling gaps implicitly: Algorithm \textsf{ImplicitFill}}
\begin{algorithmic}[1]
\label{alg:implicitFill}
\STATE for arm $i$, \textbf{sample} strategy $\mathbb{DT}(i,j)$ with probability $\frac{\mathsf{prob}(\mathsf{root}(\mathbb{DT}(i,j)))}{24}$; ignore arm $i$ w.p.\ $1 - \sum_{j} \frac{\mathsf{prob}(\mathsf{root}(\mathbb{DT}(i,j)))}{24}$. \label{impalg:mabstep1DAG}
\STATE let $A \gets$ set of ``active'' arms which chose a strategy in the random process.
\STATE for each $i \in A$, \textbf{let} $\sigma(i) \gets$ index $j$ of the chosen $\mathbb{DT}(i,j)$ and \textbf{let} $\mathsf{currnode}(i) \gets $ root of $\mathbb{DT}(i,\sigma(i))$. \label{impalg:rootchoose}
\WHILE{active arms $A \neq \emptyset$}
\STATE \textbf{let} $i^* \gets$ arm with state played earliest (i.e., $i^* \gets \operatorname{argmin}_{i \in A} \{ \mathsf{time}(\mathsf{currnode}(i)) \}$).
\STATE \textbf{let} $\tau \gets \mathsf{time}(\mathsf{currnode}(i^*))$.
\WHILE{$\mathsf{time}(\mathsf{currnode}(i^*)) \neq \infty$ \textbf{and} ($\mathsf{time}(\mathsf{currnode}(i^*)) = \tau$ \textbf{or} $2 \cdot \mathsf{depth}(\mathsf{currnode}(i^*)) > \mathsf{time}(\mathsf{currnode}(i^*))$) } \label{impalg:fill}
\STATE \textbf{play} arm $i^*$ at state $\mathsf{state}(\mathsf{currnode}(i^*))$ \label{impalg:play}
\STATE \textbf{let} $u$ be the new state of arm $i^*$ and \textbf{let} $P$ be the child of $\mathsf{currnode}(i^*)$ satisfying $\mathsf{state}(P) = u$. \label{impalg:nextNode}
\STATE \textbf{update} $\mathsf{currnode}(i^*)$ to be $P$; \textbf{let} $\tau \gets \tau + 1$.
\ENDWHILE
\IF{$\mathsf{time}(\mathsf{currnode}(i^*)) = \infty$}
\STATE \textbf{let} $A \gets A \setminus \{i^*\}$
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\begin{lemma}
Algorithm $\textsf{ImplicitFill}$ obtains the same reward as algorithm $\textsf{AlgDAG}\circ\textsf{GapFill}$.
\end{lemma}
\subsubsection{Running \textbf{ImplicitFill} in Polynomial Time}
With the description of \textsf{ImplicitFill}, our proof is almost complete, except for handling the exponential blow-up incurred in moving from $\mathbb{D}$ to $\mathbb{DT}$. To resolve this, we now argue that while the blown-up $\mathbb{DT}$ made it easy to visualize the transitions and plays made, all of it can be done implicitly from the strategy DAG $\mathbb{D}$.
Recall that the tree-nodes in $\mathbb{DT}(i,j)$ correspond to simple paths in $\mathbb{D}(i,j)$. In the following, the final algorithm we employ (called \textsf{ImplicitPlay}) is simply the algorithm \textsf{ImplicitFill}, but with the exponentially blown-up trees $\mathbb{DT}(i, \sigma(i))$ being generated \emph{on-demand}, as the different transitions are made. We now describe how this can be done. In Step~\ref{impalg:rootchoose} of \textsf{ImplicitFill}, we start off at the roots of the trees $\mathbb{DT}(i,\sigma(i))$, each of which corresponds to the single-node path at the root of $\mathbb{D}(i,\sigma(i))$. Now, at some point in time in the execution of \textsf{ImplicitFill}, suppose we are at the tree-node $\mathsf{currnode}(i^*)$, which corresponds to a path $Q$ in $\mathbb{D}(i,\sigma(i))$ that ends at $(i, \sigma(i), v, t)$ for some $v$ and $t$. The invariant we maintain is that, in our algorithm \textsf{ImplicitPlay}, we are at node $(i, \sigma(i), v, t)$ in $\mathbb{D}(i,\sigma(i))$.
Establishing this invariant would show that the two runs \textsf{ImplicitPlay} and \textsf{ImplicitFill} are identical, which when coupled with Theorem~\ref{thm:main-mabDAG} would complete the proof---the information that \textsf{ImplicitFill} uses of $Q$, namely $\mathsf{time}(Q)$ and $\mathsf{depth}(Q)$, can be obtained from $(i, \sigma(i), v, t)$.
The invariant is clearly satisfied at the beginning, for the different root nodes. Suppose it is true for some tree-node $\mathsf{currnode}(i)$, which corresponds to a path $Q$ in $\mathbb{D}(i,\sigma(i))$ that ends at $(i, \sigma(i), v, t)$ for some $v$ and $t$. Now suppose that, upon playing arm $i$ at state $v$ (in Step~\ref{impalg:play}), we make a transition to some state $u$; then \textsf{ImplicitFill} would find the unique child tree-node $P$ of $Q$ in $\mathbb{DT}(i, \sigma(i))$ with $\mathsf{state}(P) = u$. Then let $(i, \sigma(i), u, t')$ be the last node of the path $P$, so that $P$ equals $Q$ followed by $(i, \sigma(i), u, t')$. But, since the tree $\mathbb{DT}(i, \sigma(i))$ is just an expansion of $\mathbb{D}(i, \sigma(i))$, the unique child $P$ in $\mathbb{DT}(i,\sigma(i))$ of tree-node $Q$ which has $\mathsf{state}(P) = u$, is (by definition of $\mathbb{DT}$) the unique node $(i, \sigma(i), u, t')$ of $\mathbb{D}(i, \sigma(i))$ such that $(i, \sigma(i), v, t) \rightarrow (i,\sigma(i), u,t')$. Hence, just as \textsf{ImplicitFill} transitions to $P$ in $\mathbb{DT}(i, \sigma(i))$ (in Step~\ref{impalg:nextNode}), we can transition to the state $(i, \sigma(i), u, t')$ with just $\mathbb{D}$ at our disposal, thus establishing the invariant. For completeness, we present the implicit algorithm below.
\begin{algorithm}[ht!]
\caption{Algorithm \textsf{ImplicitPlay}}
\begin{algorithmic}[1]
\label{alg:implicitPlay}
\STATE for arm $i$, \textbf{sample} strategy $\mathbb{D}(i,j)$ with probability $\frac{\mathsf{prob}(\mathsf{root}(\mathbb{D}(i,j)))}{24}$; ignore arm $i$ w.p.\ $1 - \sum_{j} \frac{\mathsf{prob}(\mathsf{root}(\mathbb{D}(i,j)))}{24}$.
\STATE let $A \gets$ set of ``active'' arms which chose a strategy in the random process.
\STATE for each $i \in A$, \textbf{let} $\sigma(i) \gets$ index $j$ of the chosen $\mathbb{D}(i,j)$ and \textbf{let} $\mathsf{currnode}(i) \gets $ root of $\mathbb{D}(i,\sigma(i))$.
\WHILE{active arms $A \neq \emptyset$}
\STATE \textbf{let} $i^* \gets$ arm with state played earliest (i.e., $i^* \gets \operatorname{argmin}_{i \in A} \{ \mathsf{time}(\mathsf{currnode}(i)) \}$).
\STATE \textbf{let} $\tau \gets \mathsf{time}(\mathsf{currnode}(i^*))$.
\WHILE{$\mathsf{time}(\mathsf{currnode}(i^*)) \neq \infty$ \textbf{and} ($\mathsf{time}(\mathsf{currnode}(i^*)) = \tau$ \textbf{or} $2 \cdot \mathsf{depth}(\mathsf{currnode}(i^*)) > \mathsf{time}(\mathsf{currnode}(i^*))$) }
\STATE \textbf{play} arm $i^*$ at state $\mathsf{state}(\mathsf{currnode}(i^*))$
\STATE \textbf{let} $u$ be the new state of arm $i^*$.
\STATE \textbf{update} $\mathsf{currnode}(i^*)$ to be the unique successor of $\mathsf{currnode}(i^*)$ in $\mathbb{D}(i^*,\sigma(i^*))$ with state $u$; \textbf{let} $\tau \gets \tau + 1$.
\ENDWHILE
\IF{$\mathsf{time}(\mathsf{currnode}(i^*)) = \infty$}
\STATE \textbf{let} $A \gets A \setminus \{i^*\}$
\ENDIF
\ENDWHILE
\end{algorithmic}
\end{algorithm}
\section{Concluding Remarks}
We presented the first constant-factor approximations for the stochastic knapsack problem with cancellations and correlated size/reward pairs, and for the budgeted learning problem without the martingale property. We showed that existing LPs for the restricted versions of the problems have large integrality gaps, which required us to give new LP relaxations, as well as new rounding algorithms, for these problems.
\paragraph*{Acknowledgments.} We thank Kamesh Munagala and Sudipto Guha for useful conversations. {\small } \appendix \section{Some Bad Examples} \label{sec:egs} \subsection{Badness Due to Cancelations} \label{sec:badness-cancel} We first observe that the LP relaxation for the \ensuremath{\mathsf{StocK}}\xspace problem used in~\cite{DeanGV08} has a large integrality gap in the model where cancelations are allowed, \emph{even when the rewards are fixed for any item}. This was also noted in~\cite{Dean-thesis}. Consider the following example: there are $n$ items, every item instantiates to a size of $1$ with probability $0.5$ or a size of $n/2$ with probability $0.5$, and its reward is always $1$. Let the total size of the knapsack be $B = n$. For such an instance, a good solution would cancel any item that does not terminate at size $1$; this way, it can collect a reward of at least $n/2$ in expectation, because an average of $n/2$ items will instantiate with a size $1$ and these will all contribute to the reward. On the other hand, the LP from~\cite{DeanGV08} has value $O(1)$, since the mean size of any item is at least $n/4$. In fact, any strategy that does not cancel jobs will also accrue only $O(1)$ reward. \subsection{Badness Due to Correlated Rewards} \label{sec:badness-corr} While the LP relaxations used for \ensuremath{\mathsf{MAB}}\xspace (e.g., the formulation in ~\cite{GuhaM-stoc07}) can handle the issue explained above w.r.t cancelations, we now present an example of stochastic knapsack (where the reward is correlated with the actual size) for which the existing \ensuremath{\mathsf{MAB}}\xspace LP formulations all have a large integrality gap. Consider the following example: there are $n$ items, every item instantiates to a size of $1$ with probability $1-1/n$ or a size of $n$ with probability $1/n$, and its reward is $1$ only if its size is $n$, and $0$ otherwise. Let the total size of the knapsack be $B = n$. 
Clearly, any integral solution can fetch an expected reward of $1/n$ --- if the first item it schedules instantiates to a large size, then it gives us a reward. Otherwise, no subsequent item can be fit within our budget even if it instantiates to its large size.
The issue with the existing LPs is that the \emph{arm-pull} constraints are ensured locally, and there is one global budget. That is, even if we play each arm to completion individually, the expected size (i.e., number of pulls) they occupy is $1 \cdot (1-1/n) + n \cdot (1/n) \leq 2$. Therefore, such LPs can accommodate $n/2$ jobs, fetching a total reward of $\Omega(1)$. This example brings to attention the fact that all these items are competing to be pulled in the first time slot (if we begin an item in any later time slot it fetches zero reward), thus naturally motivating our time-indexed LP formulation in Section~\ref{sec:large}.
In fact, the above example also shows that if we allow ourselves a budget of $2B$, i.e., $2n$ in this case, we can in fact achieve an expected reward of $\Omega(1)$ (much higher than what is possible with a budget of $B$) --- keep playing all items one by one, until one of them does not stop at size $1$, and then play that item to completion; this event happens with probability $\Omega(1)$.
\subsection{Badness Due to the Non-Martingale Property in MAB: The Benefit of Preemption}
\label{sec:preemption-gap}
Not only do cancelations help in our problems (as can be seen from the example in Appendix~\ref{sec:badness-cancel}), but we now show that even \emph{preemption} is necessary in the case of \ensuremath{\mathsf{MAB}}\xspace where the rewards do not satisfy the martingale property. In fact, this brings forward another key difference between our rounding scheme and earlier algorithms for \ensuremath{\mathsf{MAB}}\xspace --- the necessity of preempting arms is not an artifact of our algorithm/analysis but, rather, is unavoidable. Consider the following instance.
There are $n$ identical arms, each of them with the following (recursively defined) transition tree starting at $\rho(0)$: When the root $\rho(j)$ is pulled for $j < m$, the following two transitions can happen: \begin{enumerate} \item[(i)] with probability $1/(n \cdot n^{m-j})$, the arm transitions to the ``right-side'', where if it makes $B - n(\sum_{k=0}^{j} L^k)$ plays, it will deterministically reach a state with reward $n^{m-j}$. All intermediate states have $0$ reward. \item[(ii)] with probability $1 - 1/(n \cdot n^{m-j})$, the arm transitions to the ``left-side'', where if it makes $L^{j+1} - 1$ plays, it will deterministically reach the state $\rho(j+1)$. No state along this path fetches any reward. \end{enumerate} Finally, node $\rho(m)$ makes the following transitions when played: (i) with probability $1/n$, to a leaf state that has a reward of $1$ and the arm ends there; (ii) with probability $1-1/n$, to a leaf state with reward of $0$. For the following calculations, assume that $B \gg L > n$ and $m \gg 0$. \noindent {\bf Preempting Solutions.} We first exhibit a preempting solution with expected reward $\Omega(m)$. The strategy plays $\rho(0)$ of all the arms until one of them transitions to the ``right-side'', in which case it continues to play this until it fetches a reward of $n^m$. Notice that any root which transitioned to the right-side can be played to completion, because the number of pulls we have used thus far is at most $n$ (only those at the $\rho(0)$ nodes for each arm), and the size of the right-side is exactly $B - n$. Now, if all the arms transitioned to the left-side, then it plays the $\rho(1)$ of each arm until one of them transitioned to the right-side, in which case it continues playing this arm and gets a reward of $n^{m-1}$. 
Again, any root $\rho(1)$ which transitioned to the right-side \emph{can be played} to completion, because the number of pulls we have used thus far is at most $n(1 + L)$ (for each arm, we have pulled the root $\rho(0)$, transitioned the walk of length $L-1$ to $\rho(1)$ and then pulled $\rho(1)$), and the size of the right-side is exactly $B - n(1+L)$. This strategy is similarly defined, recursively.
We now calculate the expected reward: if any of the roots $\rho(0)$ made a transition to the right-side, we get a reward of $n^m$. This happens with probability roughly $1/n^m$, giving us an expected reward of $1$ in this case. If all the roots made the transition to the left-side, then at least one of the $\rho(1)$ states will make a transition to their right-side with probability $\approx 1/n^{m-1}$, in which case we will get a reward of $n^{m-1}$, and so on. Thus, summing over the first $m/2$ such rounds, our expected reward is at least
\[ \frac{1}{n^m} n^m + \left(1- \frac{1}{n^m}\right) \frac{1}{n^{m-1}} n^{m-1} + \left(1- \frac{1}{n^m}\right) \left(1- \frac{1}{n^{m-1}}\right)\frac{1}{n^{m-2}} n^{m-2} + \ldots \]
Each term above is $\Omega(1)$, giving us a total of $\Omega(m)$ expected reward.
\noindent {\bf Non-Preempting Solutions.} Consider any non-preempting solution. Once it has played the first node of an arm and it has transitioned to the left-side, it has to irrevocably decide if it abandons this arm or continues playing. But if it has continued to play (and made the transition of $L-1$ steps), then it cannot get any reward from the right-side of $\rho(0)$ of any of the other arms, because $L > n$ and the right-side requires $B-n$ pulls before reaching a reward-state. Likewise, if it has decided to move from $\rho(i)$ to $\rho(i+1)$ on any arm, it cannot get \emph{any} reward from the right-sides of $\rho(0), \rho(1), \ldots, \rho(i)$ on \emph{any} arm due to budget constraints.
Indeed, for any $i \geq 1$, to have reached $\rho(i+1)$ on any particular arm, it must have utilized $(1 + L-1) + (1 + L^2 -1) + \ldots + (1 + L^{i+1} -1)$ pulls in total, which exceeds $n(1 +L + L^2 + \ldots + L^{i})$ since $L > n$. Finally, notice that if the strategy has decided to move from $\rho(i)$ to $\rho(i + 1)$ on any arm, the maximum reward that it can obtain is $n^{m - i - 1}$, namely, the reward from the right-side transition of $\rho(i + 1)$. Using these properties, we observe that an optimal non-preempting strategy proceeds in rounds as described next. \noindent {\bf Strategy at round $i$.} Choose a set $N_i$ of $n_i$ available arms and play them as follows: pick one of these arms, play until reaching state $\rho(i)$ and then play once more. If there is a right-side transition before reaching state $\rho(i)$, discard this arm since there is not enough budget to play until reaching a state with positive reward. If there is a right-side transition at state $\rho(i)$, play this arm until it gives reward of $n^{m - i}$. If there is no right-side transition and there is another arm in $N_i$ which is still to be played, discard the current arm and pick the next arm in $N_i$. In round $i$, at least $\max(0, n_i - 1)$ arms are discarded, hence $\sum_i n_i \le 2 n$. Therefore, the expected reward can be at most \[ \frac{n_1}{n \cdot n^m} n^m + \frac{n_2}{n \cdot n^{m-1}} n^{m-1} + \ldots + \frac{n_m}{n} \leq 2 \] \section{Proofs from Section~\ref{sec:nopmtn}} \subsection{Proof of Theorem~\ref{thm:large}} \label{app:nopmtn-proof} Let $\mathsf{add}_i$ denote the event that item $i$ was added to the knapsack in \lref[Step]{alg:big3}. Also, let $V_i$ denote the random variable corresponding to the reward that our algorithm gets from item $i$. Clearly if item $i$ has $D_i = t$ and was added, then it is added to the knapsack before time $t$. 
In this case it is easy to see that $\mathbb{E}[V_i \mid \mathsf{add}_i \wedge (D_i = t)] \ge R_{i,t}$ (because its random size is independent of when the algorithm started it). Moreover, from the previous lemma we have that $\Pr(\mathsf{add}_i \mid (D_i = t)) \ge 1/2$ and from \lref[Step]{alg:big1} we have $\Pr(D_i = t) = \frac{x^*_{i,t}}{4}$; hence $\Pr(\mathsf{add}_i \wedge (D_i = t)) \ge x^*_{i,t}/8$. Summing over all possibilities of $t$, we lower bound the expected value of $V_i$ by
$$\mathbb{E}[V_i] \ge \sum_t \mathbb{E}[V_i \mid \mathsf{add}_i \wedge (D_i = t)] \cdot \Pr(\mathsf{add}_i \wedge (D_i = t)) \ge \frac{1}{8} {\sum_t x^*_{i,t} R_{i,t}}.$$
Finally, linearity of expectation over all items shows that the total expected reward of our algorithm is at least $\frac18 \cdot \sum_{i, t} x^*_{i,t} R_{i,t} = \ensuremath{\mathsf{LPOpt}\xspace}/8$, thus completing the proof.
\subsection{Making {\textsf{StocK-NoCancel}}\xspace Fully Polynomial}
\label{app:polytime-nopmtn}
Recall that our LP relaxation \ref{lp:large} in Section~\ref{sec:nopmtn} is a global time-indexed LP. In order to make it compact, our approach will be to group the $B$ timeslots in \ref{lp:large} and show that the grouped LP has optimal value within a constant factor of \ref{lp:large}; furthermore, we also show that it can be rounded and analyzed almost identically to the original LP.
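The grouping itself is a standard powers-of-two bucketing. As an aside, here is a small Python sketch (with our own dictionary encoding `x[(i, t)]` of a time-indexed solution) of the halved, bucketed solution used in the feasibility argument below:

```python
import math

def group_timeslots(x, items, B):
    """Powers-of-two grouping of a time-indexed solution.

    `x[(i, t)]`, for t in 1..B, is the LP mass of item i at time t.
    The grouped variable xbar[(i, 2^j)] takes half the mass of the
    timeslots in [2^(j+1), 2^(j+2)), with the j = 0 bucket also
    absorbing x[(i, 1)]/2.
    """
    J = int(math.log2(B))
    xbar = {}
    for i in items:
        xbar[(i, 1)] = (x.get((i, 1), 0.0)
                        + sum(x.get((i, t), 0.0) for t in range(2, 4))) / 2
        for j in range(1, J + 1):
            xbar[(i, 2 ** j)] = sum(
                x.get((i, t), 0.0)
                for t in range(2 ** (j + 1), 2 ** (j + 2))) / 2
    return xbar
```

Halving the mass is what makes the grouped solution fit the bucketed budget constraints while giving up only a constant factor in the objective.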
To this end, consider the following LP relaxation:
\begin{alignat}{2}
\tag{$\mathsf{PolyLP}_L$} \label{lp:largePoly}
\max &\textstyle \sum_i \sum_{j = 0}^{\log B} \mathsf{ER}_{i,2^{j + 1}} \cdot x_{i,2^j} &\\
&\textstyle \sum_{j = 0}^{\log B} x_{i,2^j} \le 1 &\forall i \label{LPbig1Poly}\\
&\textstyle \sum_{i, j' \le j} x_{i,2^{j'}} \cdot \mathbb{E}[\min(S_i,2^{j+1})] \le 2\cdot 2^j \qquad &\forall j \in [0, \log B] \label{LPbig2Poly}\\
&x_{i,2^j} \in [0,1] &\forall j \in [0, \log B], \forall i
\end{alignat}
The next two lemmas relate the value of \eqref{lp:largePoly} to that of the original LP \eqref{lp:large}.
\begin{lemma}
\label{lemma:largePoly1}
The optimum of \eqref{lp:largePoly} is at least half of the optimum of \eqref{lp:large}.
\end{lemma}
\begin{proof}
Consider a solution $x$ for \eqref{lp:large} and define $\bar{x}_{i,1} = x_{i,1}/2 + \sum_{t \in [2,4)} x_{i,t}/2$ and $\bar{x}_{i,2^j} = \sum_{t \in [2^{j + 1}, 2^{j + 2})} x_{i,t}/2$ for $1 \le j \le \log B$. It suffices to show that $\bar{x}$ is a feasible solution to \eqref{lp:largePoly} with value greater than or equal to half of the value of $x$.
For constraints \eqref{LPbig1Poly} we have $\sum_{j = 0}^{\log B} \bar{x}_{i,2^j} = \sum_{t \ge 1} x_{i,t}/2 \le 1/2$; these constraints are therefore easily satisfied. We now show that $\{\bar{x}\}$ also satisfies constraints \eqref{LPbig2Poly}:
\begin{align*}
&\sum_{i, j' \le j} \bar{x}_{i,2^{j'}} \cdot \mathbb{E}[\min(S_i,2^{j+1})] = \sum_i \sum_{t = 1}^{2^{j+2} - 1} \frac{x_{i,t} \mathbb{E}[\min(S_i, 2^{j + 1})]}{2} \\
& \le \sum_i \sum_{t = 1}^{2^{j+2} - 1} \frac{x_{i,t} \mathbb{E}[\min(S_i, 2^{j + 2} - 1)]}{2} \le 2^{j + 2} - 1,
\end{align*}
where the last inequality follows from feasibility of $\{x\}$.
Finally, noticing that $\mathsf{ER}_{i,t}$ is non-increasing with respect to $t$, it is easy to see that $\sum_i \sum_{j = 0}^{\log B} \mathsf{ER}_{i,2^{j + 1}} \cdot \bar{x}_{i,2^j} \ge \sum_{i,t} \mathsf{ER}_{i,t} \cdot x_{i,t}/2$, and hence $\bar{x}$ has value greater than or equal to half of the value of $x$, as desired.
\end{proof}
\begin{lemma}
\label{lemma:largePoly2}
Let $\{\bar{x}\}$ be a feasible solution for \eqref{lp:largePoly}. Define $\{\hat{x}\}$ satisfying $\hat{x}_{i,t} = \bar{x}_{i,2^j}/2^j$ for all $t \in [2^j, 2^{j+1})$ and $i \in [n]$. Then $\{\hat{x}\}$ is feasible for \eqref{lp:large} and has value at least as large as $\{\bar{x}\}$.
\end{lemma}
\begin{proof}
The feasibility of $\{\bar{x}\}$ directly implies that $\{\hat{x}\}$ satisfies constraints \eqref{LPbig1}. For constraints \eqref{LPbig2}, consider $t \in [2^j, 2^{j + 1})$; then we have the following:
\begin{align*}
& \sum_{i, t' \le t} \hat{x}_{i,t'} \cdot \mathbb{E}[\min(S_i,t)] \le \sum_i \sum_{j' \le j} \sum_{t' \in [2^{j'}, 2^{j' + 1})} \frac{\bar{x}_{i,2^{j'}}}{2^{j'}} \mathbb{E}[\min(S_i, 2^{j+1})] \\
& = \sum_i \sum_{j' \le j} \bar{x}_{i,2^{j'}} \mathbb{E}[\min(S_i, 2^{j+1})] \le 2 \cdot 2^j \le 2t.
\end{align*}
Finally, again using the fact that $\mathsf{ER}_{i,t}$ is non-increasing in $t$, we get that the value of $\{\hat{x}\}$ is
\begin{align*}
\sum_{i,t} \mathsf{ER}_{i,t} \cdot \hat{x}_{i,t} = \sum_i \sum_{j = 0}^{\log B} \sum_{t \in [2^j, 2^{j+1})} \mathsf{ER}_{i,t} \frac{\bar{x}_{i,2^j}}{2^j} \ge \sum_i \sum_{j = 0}^{\log B} \sum_{t \in [2^j, 2^{j+1})} \mathsf{ER}_{i,2^{j + 1}} \frac{\bar{x}_{i,2^j}}{2^j} = \sum_i \sum_{j = 0}^{\log B} \mathsf{ER}_{i,2^{j + 1}} \bar{x}_{i,2^j},
\end{align*}
which is then at least as large as the value of $\{\bar{x}\}$. This concludes the proof of the lemma.
\end{proof}
The above two lemmas show that \eqref{lp:largePoly} has value close to that of \eqref{lp:large}; let us now show that we can simulate the execution of Algorithm {\textsf{StocK-Large}}\xspace given just an optimal solution $\{\bar{x}\}$ for \eqref{lp:largePoly}. Let $\{\hat{x}\}$ be defined as in the above lemma, and consider the Algorithm {\textsf{StocK-Large}}\xspace applied to $\{\hat{x}\}$. By the definition of $\{\hat{x}\}$, here is how to execute \lref[Step]{alg:big1} (and hence the whole algorithm) in polynomial time: we obtain $D_i = t$ by picking $j \in [0, \log B]$ with probability $\bar{x}_{i,2^j}$ and then selecting $t \in [2^j, 2^{j + 1})$ uniformly; notice that indeed $D_i = t$ (with $t \in [2^j, 2^{j + 1})$) with probability $\bar{x}_{i,2^j}/2^j = \hat{x}_{i,t}$.
Using this observation, we can obtain a $1/16$ approximation for our instance $\mathcal{I}$ in polynomial time by finding the optimal solution $\{\bar{x}\}$ for \eqref{lp:largePoly} and then running Algorithm {\textsf{StocK-Large}}\xspace over $\{\hat{x}\}$ as described in the previous paragraph. Using a direct modification of \lref[Theorem]{thm:large} we have that the strategy obtained has expected reward at least as large as $1/8$ of the value of $\{\hat{x}\}$, which by \lref[Lemmas]{lemma:largePoly1} and \ref{lemma:largePoly2} (and \lref[Lemma]{thm:lp-large-valid}) is within a factor of $1/16$ of the optimal solution for $\mathcal{I}$.
\section{Proofs from Section~\ref{sec:sk}}
\label{app:small}
\subsection{Proof of Lemma~\ref{lem:stop-dist}}
The proof works by induction. For the base case, consider $t=0$. Clearly, this item is forcefully canceled in \lref[step]{alg:st:2} of Algorithm~\ref{alg:skssmall} {\textsf{StocK-Small}}\xspace (in the iteration with $t=0$) with probability $s^*_{i,0}/v^*_{i,0} - \pi_{i,0}/\sum_{t' \geq 0} \pi_{i,t'}$. But since $\pi_{i,0}$ was assumed to be $0$ and $v^*_{i,0}$ is $1$, this quantity is exactly $s^*_{i,0}$, and this proves property~(i).
For property~(ii), item $i$ is processed for its $\mathbf{1}^{st}$ timestep if it did not get forcefully canceled in \lref[step]{alg:st:2}. This therefore happens with probability $1 - s^*_{i,0} = v^*_{i,0} - s^*_{i,0} = v^*_{i,1}$. For property~(iii), conditioned on the fact that it has been processed for its $\mathbf{1}^{st}$ timestep, clearly the probability that its (unknown) size has instantiated to $1$ is exactly $\pi_{i,1}/\sum_{t' \geq 1} \pi_{i,t'}$. When this happens, the job stops in \lref[step]{alg:st:5}, thereby establishing the base case. Assuming this property holds for every timestep until some fixed value $t-1$, we show that it holds for $t$; the proofs are very similar to the base case. Assume item $i$ was processed for the $t^{th}$ timestep (this happens w.p $v^*_{i,t}$ from property (ii) of the induction hypothesis). Then from property (iii), the probability that this item completes at this timestep is exactly $\pi_{i,t}/\sum_{t' \geq t} \pi_{i,t'}$. Furthermore, it gets forcefully canceled in \lref[step]{alg:st:2} with probability $s^*_{i,t}/v^*_{i,t} - \pi_{i,t}/\sum_{t' \geq t} \pi_{i,t'}$. Thus the total probability of stopping at time $t$, assuming it has been processed for its $t^{th}$ timestep is exactly $s^*_{i,t}/v^*_{i,t}$; unconditionally, the probability of stopping at time $t$ is hence $s^*_{i,t}$. Property~(ii) follows as a consequence of Property~(i), because the item is processed for its $(t+1)^{st}$ timestep only if it did not stop at timestep $t$. Therefore, conditioned on being processed for the $t^{th}$ timestep, it continues to be processed with probability $1 - s^*_{i,t}/v^*_{i,t}$. Therefore, removing the conditioning, we get the probability of processing the item for its $(t+1)^{st}$ timestep is $v^*_{i,t} - s^*_{i,t} = v^*_{i,t+1}$. 
Finally, for property~(iii), conditioned on the fact that it has been processed for its $(t+1)^{st}$ timestep, clearly the probability that its (unknown) size has instantiated to exactly $(t+1)$ is $\pi_{i,t+1}/\sum_{t' \geq t+1} \pi_{i,t'}$. When this happens, the job stops in \lref[step]{alg:st:5} of the algorithm.
\subsection{\ensuremath{\mathsf{StocK}}\xspace with Small Sizes: A Fully Polytime Algorithm}
\label{app:polytime}
The idea is to quantize the possible sizes of the items in order to ensure that LP \ref{lpone} has polynomial size, then to obtain a good strategy (via Algorithm {\textsf{StocK-Small}}\xspace) for the transformed instance, and finally to show that this strategy is actually almost as good for the original instance.
Consider an instance $\mathcal{I} = (\pi, R)$ where $R_{i,t} = 0$ for all $t > B/2$. Suppose we start scheduling an item at some time; instead of making decisions of whether to continue or cancel an item at each subsequent time step, we are going to do so at time steps which are powers of 2. To make this formal, define the instance $\bar{\mathcal{I}} = (\bar{\pi}, \bar{R})$ as follows: set $\bar{\pi}_{i,2^j} = \sum_{t \in [2^j, 2^{j + 1})} \pi_{i,t}$ and $\bar{R}_{i,2^j} = (\sum_{t \in [2^j, 2^{j+1})} \pi_{i,t} R_{i,t} )/ \bar{\pi}_{i,2^j}$ for all $i \in [n]$ and $j \in \{0, 1, \ldots, \lfloor\log B \rfloor\}$. The instances are coupled in the natural way: the size of item $i$ in the instance $\bar{\mathcal{I}}$ is $2^j$ iff the size of item $i$ in the instance $\mathcal{I}$ lies in the interval $[2^j, 2^{j+1})$.
In \lref[Section]{caseSmall}, a \emph{timestep} of an item has a duration of 1 time unit. However, due to the construction of $\bar{\mathcal{I}}$, it is useful to consider that the $t^{th}$ timestep of an item has duration $2^t$; thus, an item can only complete at its $0^{th}$, $1^{st}$, $2^{nd}$, etc.\ timesteps.
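The construction of $(\bar{\pi}, \bar{R})$ above can be sketched in a few lines of Python (an illustrative encoding for a single item; `pi` and `R` map sizes $t \ge 1$ to $\pi_{i,t}$ and $R_{i,t}$):

```python
import math

def quantize(pi, R, B):
    """Round item sizes down to powers of two.

    `pi` and `R` map sizes t >= 1 to the probability and reward of a
    single item; pibar[2^j] collects the mass of the sizes in
    [2^j, 2^(j+1)), and Rbar[2^j] is the pi-weighted average reward
    of that bucket (0 if the bucket has no mass).
    """
    pibar, Rbar = {}, {}
    for j in range(int(math.log2(B)) + 1):
        bucket = range(2 ** j, min(2 ** (j + 1), B + 1))
        mass = sum(pi.get(t, 0.0) for t in bucket)
        pibar[2 ** j] = mass
        Rbar[2 ** j] = (sum(pi.get(t, 0.0) * R.get(t, 0.0) for t in bucket)
                        / mass if mass > 0 else 0.0)
    return pibar, Rbar
```

Since $\bar{R}$ is the $\pi$-weighted average reward within each bucket, the quantized instance preserves the expected reward of completing inside a bucket.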
With this in mind, we can write an LP analogous to \eqref{lpone}:
\begin{alignat}{2}
\max &\textstyle \sum_{1 \leq j \leq \log(B/2)} \sum_{1 \leq i \leq n} v_{i,2^j} \cdot \bar{R}_{i,2^j} \frac{\bar{\pi}_{i,2^j}}{\sum_{j' \geq j} \bar{\pi}_{i,2^{j'}}} & & \tag{$\mathsf{PolyLP}_{S}$} \label{lpone.2} \\
& v_{i,2^j} = s_{i,2^j} + v_{i,2^{j+1}} & \qquad & \forall \, j \in [0,\log B], \, i \in [n] \label{eq:1.2} \\
&s_{i,2^j} \geq \frac{\bar{\pi}_{i,2^j}}{\sum_{j' \geq j} \bar{\pi}_{i,2^{j'}}} \cdot v_{i,2^j} & \qquad & \forall \, j \in [0,\log B], \, i \in [n] \label{eq:2.2} \\
&\textstyle \sum_{i \in [n]} \sum_{j \in [0, \log B]} 2^j \cdot s_{i,2^j} \leq B & \label{eq:3.2}\\
&v_{i,0} = 1 & \qquad & \forall \, i \label{eq:4.2} \\
v_{i,2^j}, s_{i,2^j} &\in [0,1] & \qquad & \forall \, j \in [0,\log B], \, i \in [n] \label{eq:5.2}
\end{alignat}
Notice that this LP has size polynomial in the size of the instance $\mathcal{I}$.
Consider the LP \eqref{lpone} with respect to the instance $\mathcal{I}$ and let $(v, s)$ be a feasible solution for it with objective value $z$. Then define $(\bar{v}, \bar{s})$ as follows: $\bar{v}_{i,2^j} = v_{i,2^j}$ and $\bar{s}_{i,2^j} = \sum_{t \in [2^j, 2^{j + 1})} s_{i,t}$. It is easy to check that $(\bar{v}, \bar{s})$ is a feasible solution for \eqref{lpone.2} with value at least $z$, where the latter uses the fact that $v_{i,t}$ is non-increasing in $t$. Using \lref[Theorem]{thm:lp1-valid} it then follows that the optimum of \eqref{lpone.2} with respect to $(\bar{\pi}, \bar{R})$ is at least as large as the reward obtained by the optimal solution for the stochastic knapsack instance $(\pi, R)$.
Let $(\bar{v}, \bar{s})$ denote an optimal solution of \eqref{lpone.2}. Notice that with the redefined notion of timesteps we can naturally apply Algorithm {\textsf{StocK-Small}}\xspace to the LP solution $(\bar{v}, \bar{s})$. Moreover, \lref[Lemma]{lem:stop-dist} still holds in this setting.
Finally, modify Algorithm {\textsf{StocK-Small}}\xspace by ignoring items with probability $1 - 1/8 = 7/8$ (instead of $3/4$) in \lref[Step]{alg:st:1} (we abuse notation slightly and shall refer to the modified algorithm also as {\textsf{StocK-Small}}\xspace) and notice that \lref[Lemma]{lem:stop-dist} still holds. Consider the strategy $\bar{\mathbb{S}}$ for $\bar{\mathcal{I}}$ obtained from Algorithm {\textsf{StocK-Small}}\xspace. We can obtain a strategy $\mathbb{S}$ for $\mathcal{I}$ as follows: whenever $\bar{\mathbb{S}}$ decides to process item $i$ of $\bar{\mathcal{I}}$ for its $j$th timestep, we decide to continue item $i$ of $\mathcal{I}$ while it has size from $2^j$ to $2^{j + 1} - 1$. \begin{lemma} Strategy $\mathbb{S}$ is a $1/16$-approximation for $\mathcal{I}$. \end{lemma} \begin{proof} Consider an item $i$. Let $\bar{O}$ be the random variable denoting the total size occupied before strategy $\bar{\mathbb{S}}$ starts processing item $i$ and similarly let $O$ denote the total size occupied before strategy $\mathbb{S}$ starts processing item $i$. Since \lref[Lemma]{lem:stop-dist} still holds for the modified algorithm {\textsf{StocK-Small}}\xspace, we can proceed as in \lref[Theorem]{thm:small} and obtain that $\mathbb{E}[\bar{O}] \le B/8$. Due to the definition of $\mathbb{S}$ we can see that $O \le 2 \bar{O}$ and hence $\mathbb{E}[O] \le B/4$. From Markov's inequality we obtain that $\Pr(O \ge B/2) \le 1/2$. Noticing that $i$ is started by $\mathbb{S}$ with probability $1/8$ we get that the probability that $i$ is started and there is at least $B/2$ space left on the knapsack at this point is at least $1/16$. Finally, notice that in this case $\bar{\mathbb{S}}$ and $\mathbb{S}$ obtain the same expected value from item $i$, namely $\sum_j \bar{v}_{i,2^j} \cdot \bar{R}_{i,2^j} \frac{\bar{\pi}_{i,2^j}}{\sum_{j' \geq j} \bar{\pi}_{i,2^{j'}}}$.
Thus $\mathbb{S}$ gets expected value at least $1/16$ times the optimum of \eqref{lpone.2}, which in turn is at least the value of the optimal solution for $\mathcal{I}$, as argued previously. \end{proof} \section{Details from Section~\ref{sec:mab}} \subsection{Details of Phase~I (from Section~\ref{sec:phase-i})} \label{sec:details-phase-i} We begin with some notation that will be useful in the algorithm below. For any state $u \in {\mathcal{S}_i}$ such that the path from $\rho_i$ to $u$ follows the states $u_1 = \rho_i, u_2, \ldots, u_k = u$, let $\pi_u = \prod_{l=1}^{k-1} p_{u_l, u_{l+1}}$. Fix an arm $i$, for which we will perform the decomposition. Let $\{z, w\}$ be a feasible solution to \ref{lp:mab} and set $z^0_{u,t} = z_{u,t}$ and $w^0_{u,t} = w_{u,t}$ for all $u \in {\mathcal{S}_i}$, $t \in [B]$. We will gradually alter the fractional solution as we build the different forests. We note that in a particular iteration with index $j$, all $z^{j-1}, w^{j-1}$ values that are not updated in \lref[Steps]{alg:convex5} and~\ref{alg:convex6} are retained in $z^j, w^j$ respectively. \begin{algorithm}[ht!] \caption{Convex Decomposition of Arm $i$} \begin{algorithmic}[1] \label{alg:convex} \STATE {\bf set} ${\cal C}_i \leftarrow \emptyset$ and {\bf set loop index} $j \leftarrow 1$. \WHILE {$\exists$ a node $u \in {\mathcal{S}_i}$ s.t $\sum_{t} z^{j-1}_{u,t} > 0$} \label{alg:convex1} \STATE {\bf initialize} a new tree $\mathbb{T}(i,j) = \emptyset$. \label{alg:convex1a} \STATE {\bf set} $A \leftarrow \{u \in {\mathcal{S}_i} ~\textsf{s.t}~\sum_{t} z^{j-1}_{u,t} > 0\}$. \label{alg:convex1b} \STATE for all $u \in {\mathcal{S}_i}$, {\bf set} $\mathsf{time}(i,j,u) \leftarrow \infty$, $\mathsf{prob}(i,j,u) \leftarrow 0$, and {\bf set} $\epsilon_u \leftarrow \infty$. \FOR{ every $u \in A$} \STATE {\bf update} $\mathsf{time}(i,j,u)$ to the smallest time $t$ s.t $z^{j-1}_{u,t} > 0$.
\label{alg:convex2} \STATE {\bf update} $\epsilon_u = {z^{j-1}_{u,\mathsf{time}(i,j,u)}}/{\pi_{u}}$ \label{alg:convex2a} \ENDFOR \STATE {\bf let} $\epsilon = \min_{u} \epsilon_u$. \label{alg:convex3} \FOR{ every $u \in A$} \STATE {\bf set} $\mathsf{prob}(i,j,u) = \epsilon \cdot \pi_u$. \label{alg:convex4} \STATE {\bf update} $z^j_{u,\mathsf{time}(i,j,u)} = z^{j-1}_{u, \mathsf{time}(i,j,u)} - \mathsf{prob}(i,j,u)$. \label{alg:convex5} \STATE {\bf update} $w^j_{v, \mathsf{time}(i,j,u)+1} = w^{j-1}_{v, \mathsf{time}(i,j,u)+1} - \mathsf{prob}(i,j,u) \cdot p_{u,v}$ for all $v$ s.t $\mathsf{parent}(v) = u$. \label{alg:convex6} \ENDFOR \STATE {\bf set} ${\cal C}_i \leftarrow {\cal C}_i \cup \mathbb{T}(i,j)$. \label{alg:convex7} \STATE {\bf increment} $j \leftarrow j + 1$. \ENDWHILE \end{algorithmic} \end{algorithm} For brevity of notation, we shall use ``iteration $j$ of \lref[step]{alg:convex1}'' to denote the execution of the entire block (\lref[steps]{alg:convex1a} -- \ref{alg:convex7}) which constructs strategy forest $\mathbb{T}(i,j)$. \begin{lemma} \label{lem:convexstep} Consider an integer $j$ and suppose that $\{z^{j-1}, w^{j-1}\}$ satisfies constraints~\eqref{eq:mablp1}-\eqref{eq:mablp3} of \ref{lp:mab}. Then after iteration $j$ of \lref[Step]{alg:convex1}, the following properties hold: \begin{enumerate} \item[(a)] $\mathbb{T}(i,j)$ (along with the associated $\mathsf{prob}(i,j,.)$ and $\mathsf{time}(i,j,.)$ values) is a valid strategy forest, i.e., satisfies the conditions (i) and (ii) presented in Section \ref{sec:phase-i}. \item[(b)] The residual solution $\{z^j, w^j\}$ satisfies constraints~\eqref{eq:mablp1}-\eqref{eq:mablp3}. \item[(c)] For any time $t$ and state $u \in {\mathcal{S}_i}$, $z^{j-1}_{u,t} - z^{j}_{u,t} = \mathsf{prob}(i,j,u) \mathbf{1}_{\mathsf{time}(i,j,u)=t}$. \end{enumerate} \end{lemma} \begin{proof} We show the properties stated above one by one. 
\noindent {\bf Property (a):} We first show that the $\mathsf{time}$ values satisfy $\mathsf{time}(i,j,u)$ $\geq$ $\mathsf{time}(i,j,\mathsf{parent}(u)) + 1$, i.e. condition (i) of strategy forests. For sake of contradiction, assume that there exists $u \in {\mathcal{S}_i}$ with $v = \mathsf{parent}(u)$ where $\mathsf{time}(i,j,u) \leq \mathsf{time}(i,j,v)$. Define $t_u = \mathsf{time}(i,j,u)$ and $t_v = \mathsf{time}(i,j,\mathsf{parent}(u))$; the way we updated $\mathsf{time}(i,j,u)$ in \lref[step]{alg:convex2} gives that $z^{j-1}_{u,t_u} > 0$. Then, constraint~(\ref{eq:mablp2}) of the LP implies that $\sum_{t' \leq t_u} w^{j-1}_{u,t'} > 0$. In particular, there exists a time $t' \leq t_u \leq t_v$ such that $w^{j-1}_{u,t'} > 0$. But now, constraint~(\ref{eq:mablp1}) enforces that $z^{j-1}_{v, t'-1} = w^{j-1}_{u,t'}/p_{v, u} > 0$ as well. But this contradicts the fact that $t_v$ was the first time s.t $z^{j-1}_{v,t} > 0$. Hence we have $\mathsf{time}(i,j,u) \geq \mathsf{time}(i,j,\mathsf{parent}(u))+1$. As for condition (ii) about $\mathsf{prob}(i,j,.)$, notice that if $\mathsf{time}(i,j,u) \neq \infty$, then $\mathsf{prob}(i,j,u)$ is set to $\epsilon \cdot \pi_u$ in \lref[step]{alg:convex4}. It is now easy to see from the definition of $\pi_u$ (and from the fact that $\mathsf{time}(i,j,u) \neq \infty \Rightarrow \mathsf{time}(i,j,\mathsf{parent}(u)) \neq \infty$) that $\mathsf{prob}(i,j,u) = \mathsf{prob}(i,j,\mathsf{parent}(u)) \cdot p_{\mathsf{parent}(u),u}$. \noindent {\bf Property (b):} Constraint~\eqref{eq:mablp1} of \ref{lp:mab} is clearly satisfied by the new LP solution $\{z^j, w^j\}$ because of the two updates performed in \lref[Steps]{alg:convex5} and~\ref{alg:convex6}: if we decrease the $z$ value of any node at any time, the $w$ of all children are appropriately reduced (for the subsequent timestep). Before showing that the solution $\{z^j, w^j\}$ satisfies constraint~\eqref{eq:mablp2}, we first argue that they remain non-negative. 
By the choice of $\epsilon$ in step~\ref{alg:convex3}, we have $\mathsf{prob}(i,j,u) = \epsilon \pi_u \leq \epsilon_u \pi_u = z^{j-1}_{u,\mathsf{time}(i,j,u)}$ (where $\epsilon_u$ was computed in \lref[Step]{alg:convex2a}); consequently even after the update in step~\ref{alg:convex5}, $z^{j}_{u,\mathsf{time}(i,j,u)} \geq 0$ for all $u$. This and the fact that the constraints~(\ref{eq:mablp1}) are satisfied implies that $\{z^j, w^j\}$ satisfies the non-negativity requirement. We now show that constraint~\eqref{eq:mablp2} is satisfied. For any time $t$ and state $u \notin A$ (where $A$ is the set computed in step~\ref{alg:convex1b} for iteration $j$), clearly it must be that $\sum_{t' \leq t} z^{j-1}_{u,t'} = 0$ by definition of the set $A$; hence just the non-negativity of $w^j$ implies that these constraints are trivially satisfied. Therefore consider some $t \in [B]$ and a state $u \in A$. We know from step~\ref{alg:convex2} that $\mathsf{time}(i,j,u) \neq \infty$. If $t < \mathsf{time}(i,j,u)$, then the way $\mathsf{time}(i,j,u)$ is updated in step~\ref{alg:convex2} implies that $\sum_{t' \leq t} z^j_{u,t'} = \sum_{t' \leq t} z^{j-1}_{u,t'} = 0$, so the constraint is trivially satisfied because $w^j$ is non-negative. If $t \geq \mathsf{time}(i,j,u)$, we claim that the change in the left hand side and right hand side (between the solutions $\{z^{j-1}, w^{j-1}\}$ and $\{z^{j}, w^{j}\}$) of the constraint under consideration is the same, implying that it will be still satisfied by $\{z^j, w^j\}$. To prove this claim, observe that the right hand side has decreased by exactly $z^{j-1}_{u, \mathsf{time}(i,j,u)} - z^{j}_{u, \mathsf{time}(i,j,u)} = \mathsf{prob}(i,j,u)$. But the only value which has been modified in the left hand side is $w^{j-1}_{u, \mathsf{time}(i,j,\mathsf{parent}(u))+1}$, which has gone down by $\mathsf{prob}(i, j, \mathsf{parent}(u)) \cdot p_{\mathsf{parent}(u), u}$.
Because $\mathbb{T}(i,j)$ forms a valid strategy forest, we have $\mathsf{prob}(i,j,u) = \mathsf{prob}(i,j,\mathsf{parent}(u)) \cdot p_{\mathsf{parent}(u), u}$, and thus the claim follows. Finally, constraint~\eqref{eq:mablp3} is also satisfied as the $z$ variables only decrease in value over iterations. \noindent {\bf Property (c):} This is an immediate consequence of \lref[Step]{alg:convex5}. \end{proof} To prove \lref[Lemma]{lem:convexppt}, first notice that since $\{z^0, w^0\}$ satisfies constraints~\eqref{eq:mablp1}-\eqref{eq:mablp3}, we can proceed by induction and infer that the properties in the previous lemma hold for every strategy forest in the decomposition; in particular, each of them is a valid strategy forest. In order to show that the marginals are preserved, observe that in the last iteration $j^*$ of the procedure we have $z^{j^*}_{u,t} = 0$ for all $u, t$. Therefore, adding the last property in the previous lemma over all $j$ gives \begin{align*} z_{u,t} = \sum_{j \ge 1} (z^{j - 1}_{u,t} - z^j_{u,t}) = \sum_{j \ge 1} \mathsf{prob}(i,j,u) \mathbf{1}_{\mathsf{time}(i,j,u) = t} = \sum_{j : \mathsf{time}(i,j,u) = t} \mathsf{prob}(i,j,u). \end{align*} Finally, since some $z^j_{u,t}$ is set to $0$ in each iteration of the above algorithm, the number of strategies for each arm in the decomposition is upper bounded by $B|{\mathcal{S}}|$. This completes the proof of \lref[Lemma]{lem:convexppt}. \subsection{Details of Phase~II (from Section~\ref{sec:phase-ii})} \label{sec:details-phase-ii} \begin{proofof}{Lemma~\ref{lem:gapfill}} Let $\mathsf{time}^t(u)$ denote the time assigned to node $u$ by the end of round $\tau = t$ of the algorithm; $\mathsf{time}^{B + 1}(u)$ is the initial time of $u$. Since the algorithm works backwards in time, our round index will start at $B$ and end up at $1$. To prove property (i) of the statement of the lemma, notice that the algorithm only converts head nodes to non-head nodes and not the other way around.
Moreover, heads which survive the algorithm have the same $\mathsf{time}$ as originally. So it suffices to show that heads which originally did not satisfy property (i)---namely, those with $\mathsf{time}^{B + 1}(h) < 2 \cdot \mathsf{depth}(h)$---do not survive the algorithm; but this is clear from the definition of Step \ref{alg:gap1}. To prove property (ii), fix a time $t$, and consider the execution of \textsf{GapFill} at the end of round $\tau = t$. We claim that the total extent of fractional play at time $t$ does not increase as we continue the execution of the algorithm from round $\tau=t$ to round $1$. To see why, let $C$ be a connected component at the end of round $\tau = t$ and let $h$ denote its head. If $\mathsf{time}^t(h) > t$ then no further {\bf advance} affects $C$ and hence it does not contribute to an increase in the number of plays at time $t$. On the other hand, if $\mathsf{time}^t(h) \le t$, then even if $C$ is advanced in a subsequent round, each node $w$ of $C$ which ends up being played at $t$, i.e., has $\mathsf{time}^1(w) = t$, must have an ancestor $w'$ satisfying $\mathsf{time}^t(w') = t$, by the contiguity of $C$. Thus, \lref[Observation]{obs:treeflow} gives that $\sum_{u \in C : \mathsf{time}^1(u)=t} \mathsf{prob}(u) \le \sum_{u \in C : \mathsf{time}^t(u)=t} \mathsf{prob}(u)$. Applying this for each connected component $C$ proves the claim. Intuitively, any component which advances forward in time is only reducing its load/total fractional play at any fixed time $t$. \begin{figure} \caption{Depiction of a strategy forest $\mathbb{T}(i,j)$ on a timeline, where each triangle is a connected component. In this example, $H = \{h_2, h_5\}$ and $C_{h_2}$ consists of the grey nodes.
From Observation \ref{obs:treeflow} the number of plays at $t$ does not increase as components are moved to the left.} \label{fig:subGapFill1} \label{fig:subGapFill2} \label{fig:gapFill} \end{figure} Now consider the end of round $\tau = t$; we prove that the fractional extent of play at time $t$ is at most 3. Due to \lref[Lemma]{lem:convexppt}, it suffices to prove that $\sum_{u \in U} \mathsf{prob}(u) \le 2$, where $U$ is the set of nodes which caused an increase in the number of plays at time $t$, namely, $U = \{u : \mathsf{time}^{B + 1}(u) > t \textrm{ and } \mathsf{time}^t(u) = t\}$. Notice that a connected component of the original forest can only contribute to this increase if its head $h$ crossed time $t$, that is $\mathsf{time}^{B + 1}(h) > t$ and $\mathsf{time}^t(h) \le t$. However, it may be that this crossing was not directly caused by an {\bf advance} on $h$ (i.e. $h$ advanced till $\mathsf{time}^{B + 1}(\mathsf{parent}(h)) \ge t$), but an {\bf advance} to a head $h'$ in a subsequent round was responsible for $h$ crossing over $t$. But in this case $h$ must be part of the connected component of $h'$ when the latter {\bf advance} happens, and we can use $h'$'s advance to bound the congestion. To make this more formal, let $H$ be the set of heads of the original forest whose {\bf advances} made them cross time $t$, namely, $h \in H$ iff $\mathsf{time}^{B + 1}(h) > t$, $\mathsf{time}^t(h) \le t$ and $\mathsf{time}^{B + 1}(\mathsf{parent}(h)) < t$. Moreover, for $h \in H$ let $C_h$ denote the connected component of $h$ in the beginning of the iteration where an {\bf advance} was executed on $h$, that is, when $v$ was set to $h$ in \lref[Step]{alg:setV}. The above argument shows that these components $C_h$'s contain all the nodes in $U$, hence it suffices to see how they increase the congestion at time $t$. In fact, it is sufficient to focus just on the heads in $H$.
To see this, consider $h \in H$ and notice that no node in $U \cap C_h$ is an ancestor of another. Then \lref[Observation]{obs:treeflow} gives $\sum_{u \in U \cap C_h} \mathsf{prob}(u) \le \mathsf{prob}(h)$, and adding over all $h$ in $H$ gives $\sum_{u \in U} \mathsf{prob}(u) \le \sum_{h \in H} \mathsf{prob}(h)$. To conclude the proof, we upper bound the right hand side of the previous inequality. The idea now is that the play probabilities on the nodes in $H$ cannot be too large since their parents have $\mathsf{time}^{B + 1} < t$ (and each head has a large number of ancestors in $[1,t]$ because it was considered for an advance). More formally, fix $i,j$ and consider a head $h$ in $H \cap \mathbb{T}(i,j)$. From \lref[Step]{alg:gap1} of the algorithm, we obtain that $\mathsf{depth}(h) > (1/2) \mathsf{time}^{B + 1}(h) \ge t/2$. Since $\mathsf{time}^{B + 1}(\mathsf{parent}(h)) < t$, it follows that for every $d \le \lfloor t/2 \rfloor$, $h$ has an ancestor $u \in \mathbb{T}(i,j)$ with $\mathsf{depth}(u) = d$ and $\mathsf{time}^{B + 1}(u) \le t$. Moreover, the definition of $H$ implies that no head in $H \cap \mathbb{T}(i,j)$ can be an ancestor of another. Then again employing \lref[Observation]{obs:treeflow} we obtain \begin{align*} \sum_{h \in H \cap \mathbb{T}(i,j)} \mathsf{prob}(h) \le \sum_{u \in \mathbb{T}(i,j) : \mathsf{depth}(u) = d, \mathsf{time}^{B + 1}(u) \le t} \mathsf{prob}(u) \ \ \ \ \ \ \ (\forall d \le \lfloor t/2 \rfloor). \end{align*} Adding over all $i,j$ and $d \le \lfloor t/2 \rfloor$ leads to the bound $(t/2) \cdot \sum_{h \in H} \mathsf{prob}(h) \le \sum_{u : \mathsf{time}^{B + 1}(u) \le t} \mathsf{prob}(u)$. Finally, using \lref[Lemma]{lem:convexppt} we can upper bound the right hand side by $t$, which gives $\sum_{u \in U} \mathsf{prob}(u) \le \sum_{h \in H} \mathsf{prob}(h) \le 2$ as desired.
\end{proofof} \subsection{Details of Phase~III (from Section~\ref{sec:phase-iii})} \label{sec:details-phase-iii} \begin{proofof}{Lemma~\ref{lem:visitprob}} The proof is quite straightforward. Intuitively, it is because $\textsf{AlgMAB}$ (Algorithm~\ref{alg:roundmab}) simply follows the probabilities according to the transition tree $T_i$ (unless $\mathsf{time}(i,j,u) = \infty$ in which case it abandons the arm). Consider an arm $i$ such that $\sigma(i) = j$, and any state $u \in {\mathcal{S}_i}$. Let $\langle v_1 = \rho_i, v_2, \ldots, v_t = u \rangle$ denote the unique path in the transition tree for arm $i$ from $\rho_i$ to $u$. Then, if $\mathsf{time}(i,j,u) \neq \infty$ the probability that state $u$ is played is exactly the probability of the transitions reaching $u$ (because in \lref[steps]{alg:mabPlay} and~\ref{alg:mabstep5}, the algorithm just keeps playing the states\footnote{We remark that while the plays just follow the transition probabilities, they may not be made contiguously.} and making the transitions, unless $\mathsf{time}(i,j,u) = \infty$). But this is precisely $\prod_{k=1}^{t-1} p_{v_k, v_{k+1}} = \mathsf{prob}(i,j,u)/\mathsf{prob}(i,j,\rho_i)$ (from the properties of each strategy in the convex decomposition). If $\mathsf{time}(i,j,u) = \infty$ however, then the algorithm terminates the arm in \lref[Step]{alg:mabAbandon} without playing $u$, and so the probability of playing $u$ is $0 = \mathsf{prob}(i,j,u)/\mathsf{prob}(i,j,\rho_i)$. This completes the proof. \end{proofof} \section{Proofs from Section~\ref{dsec:mab}} \label{sec:app-dag} \subsection{Layered DAGs capture all Graphs} \label{dsec:layered-enough} We first show that \emph{layered DAGs} can capture all transition graphs, with a blow-up of a factor of $B$ in the state space. For each arm $i$, for each state $u$ in the transition graph ${\mathcal{S}}_i$, create $B$ copies of it indexed by $(u,t)$ for all $1 \leq t \leq B$.
Then for each $u$ and $v$ such that $p_{u,v} > 0$ and for each $1 \leq t < B$, place an arc $(u,t) \to (v,t+1)$. Finally, delete all vertices that are not reachable from the state $(\rho_i, 1)$ where $\rho_i$ is the starting state of arm $i$. There is a clear correspondence between the transitions in ${\mathcal{S}}_i$ and the ones in this layered graph: whenever state $u$ is played at time $t$ and ${\mathcal{S}}_i$ transitions to state $v$, we have the transition from $(u, t)$ to $(v, t + 1)$ in the layered DAG. Henceforth, we shall assume that the layered graph created in this manner is the transition graph for each arm. \section{MABs with Budgeted Exploitation} \label{xsec:mab} As we remarked before, we now explain how to generalize the argument from~\lref[Section]{sec:mab} to the presence of ``exploits''. A strategy in this model needs to choose an arm in each time step and perform one of two actions: either it \emph{pulls} the arm, which makes it transition to another state (this corresponds to \emph{playing} in the previous model), or \emph{exploits} it. If an arm is in state $u$ and is exploited, it fetches reward $r_u$, and cannot be pulled any more. As in the previous case, there is a budget $B$ on the total number of pulls that a strategy can make and an additional budget of $K$ on the total number of exploits allowed. (We remark that the same analysis handles the case when pulling an arm also fetches reward, but for a clearer presentation we do not consider such rewards here.) Our algorithm in~\lref[Section]{sec:mab} can be, for the large part, directly applied in this situation as well; we now explain the small changes that need to be done in the various steps, beginning with the new LP relaxation. The additional variable in the LP, denoted by $x_{u,t}$ (for $u \in {\mathcal{S}_i}, t \in [B]$) corresponds to the probability of exploiting state $u$ at time $t$. 
\begin{alignat}{2} \tag{$\mathsf{LP4}$} \label{xlp:mab} \max \textstyle \sum_{u,t} r_u &\cdot x_{u,t}\\ w_{u,t} &= z_{\mathsf{parent}(u), t-1} \cdot p_{\mathsf{parent}(u),u} & \qquad \forall t \in [2,B],\, u \in {\mathcal{S}} \label{xeq:mablp1}\\ \textstyle \sum_{t' \le t} w_{u,t'} &\geq \sum_{t' \leq t} (z_{u,t'} + x_{u,t'}) & \qquad \forall t \in [1,B], \, u \in {\mathcal{S}} \label{xeq:mablp2}\\ \textstyle \sum_{u \in {\mathcal{S}}} z_{u,t} &\le 1 & \qquad \forall t \in [1,B] \label{xeq:mablp3}\\ \textstyle \sum_{u \in {\mathcal{S}}, t \in [B]} x_{u,t} &\le K & \label{xeq:mablp4}\\ w_{\rho_i, 1} &= 1 & \qquad \forall i \in [1,n] \label{xeq:mablp5} \end{alignat} \subsection{Changes to the Algorithm} {\bf Phase I: Convex Decomposition} \label{xsec:phase-i} This is the step where most of the changes happen, to incorporate the notion of exploitation. For an arm $i$, its strategy forest $\mathsf{x}\mathbb{T}(i,j)$ (the ``\textsf{x}'' to emphasize the ``exploit'') is an assignment of values $\mathsf{time}(i,j,u)$, $\mathsf{pull}(i,j,u)$ and $\mathsf{exploit}(i,j,u)$ to each state $u \in {\mathcal{S}_i}$ such that: \begin{OneLiners} \item[(i)] For $u \in {\mathcal{S}_i}$ and $v = \mathsf{parent}(u)$, it holds that $\mathsf{time}(i,j,u) \geq 1+ \mathsf{time}(i,j,v)$, and \item[(ii)] For $u \in {\mathcal{S}_i}$ and $v = \mathsf{parent}(u)$ s.t $\mathsf{time}(i,j,u) \neq \infty$, one of $\mathsf{pull}(i,j,u)$ or $\mathsf{exploit}(i,j,u)$ is equal to $p_{v,u}\,\mathsf{pull}(i,j,v)$ and the other is $0$; if $\mathsf{time}(i,j,u) = \infty$ then $\mathsf{pull}(i,j,u) = \mathsf{exploit}(i,j,u) = 0$. \end{OneLiners} For any state $u$, the value $\mathsf{time}(i,j,u)$ denotes the time at which arm $i$ is \emph{played} (i.e., pulled or exploited) at state $u$, and $\mathsf{pull}(i,j,u)$ (resp.
$\mathsf{exploit}(i,j,u)$) denotes the probability that the state $u$ is pulled (resp. exploited). With the new definition, if $\mathsf{time}(i,j,u) = \infty$ then this strategy does not play the arm at $u$. If state $u$ satisfies $\mathsf{exploit}(i,j,u) \neq 0$, then strategy $\mathsf{x}\mathbb{T}(i,j)$ \emph{always exploits} $u$ upon reaching it and hence none of its descendants can be reached. For states $u$ which have $\mathsf{time}(i,j,u) \neq \infty$ and have $\mathsf{exploit}(i,j,u) = 0$, this strategy \emph{always pulls} $u$ upon reaching it. In essence, if $\mathsf{time}(i,j,u) \neq \infty$, either $\mathsf{pull}(i,j,u) = \mathsf{pull}(i,j,\rho_i) \cdot \pi_u$, or $\mathsf{exploit}(i,j,u) = \mathsf{pull}(i,j,\rho_i) \cdot \pi_u$. Furthermore, these strategy forests are such that the following are also true. \begin{OneLiners} \item[(i)] $\sum_{j~\textsf{s.t}~\mathsf{time}(i,j,u)=t} \mathsf{pull}(i,j,u) = z_{u,t}$, \item[(ii)] $\sum_{j~\textsf{s.t}~\mathsf{time}(i,j,u)=t} \mathsf{exploit}(i,j,u) = x_{u,t}$. \end{OneLiners} For convenience, let us define $\mathsf{prob}(i,j,u) = \mathsf{pull}(i,j,u) + \mathsf{exploit}(i,j,u)$, which denotes the probability of some play happening at $u$. The algorithm to construct such a decomposition is very similar to the one presented in \lref[Section]{sec:details-phase-i}. The only change is that in \lref[Step]{alg:convex2} of \lref[Algorithm]{alg:convex}, instead of looking at the first time when $z_{u,t} > 0$, we look at the first time when either $z_{u,t} > 0$ or $x_{u,t} > 0$. If $x_{u,t} > 0$, we ignore all of $u$'s descendants in the current forest we plan to peel off. Once we have such a collection, we again appropriately select the largest $\epsilon$ which preserves non-negativity of the $x$'s and $z$'s. Finally, we update the fractional solution to preserve feasibility. 
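The modified peeling step can be sketched as follows; this is our own illustration with a hypothetical data layout, not the paper's pseudocode. It walks the transition tree, picks for each reached state the first time with positive $z$ or $x$ mass, and truncates the subtree below any state whose first active mass is an exploit:

```python
def peel_one_forest(tree, z, x, root):
    """Sketch of one peeling iteration with exploits: time_of[u] records the
    first active time of u, and kind[u] records whether that play is a pull
    or an exploit. tree[u] lists the children of u; z[u] and x[u] are dicts
    mapping times to the remaining fractional mass."""
    time_of, kind = {}, {}
    stack = [root]
    while stack:
        u = stack.pop()
        active = sorted(t for t in set(z[u]) | set(x[u])
                        if z[u].get(t, 0.0) > 0 or x[u].get(t, 0.0) > 0)
        if not active:
            continue  # time(i,j,u) = infinity: this strategy never plays u
        t = active[0]
        time_of[u] = t
        if x[u].get(t, 0.0) > 0:
            kind[u] = 'exploit'  # exploiting ends the arm: ignore u's descendants
        else:
            kind[u] = 'pull'
            stack.extend(tree.get(u, []))
    return time_of, kind
```

In the actual decomposition the peeled masses are then scaled by the largest feasible $\epsilon$ and subtracted from the $z$ and $x$ variables, as described above.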
The same analysis can be used to prove the analogue of \lref[Lemma]{lem:convexstep} for this case, which in turn gives the desired properties for the strategy forests. {\bf Phase II: Eliminating Small Gaps} \label{xsec:phase-ii} This is identical to \lref[Section]{sec:phase-ii}. {\bf Phase III: Scheduling the Arms} \label{xsec:phase-iii} The algorithm is also identical to that in \lref[Section]{sec:phase-iii}. We sample a strategy forest $\mathsf{x}\mathbb{T}(i,j)$ for each arm $i$ and simply play connected components contiguously. Each time we finish playing a connected component, we play the next component that begins earliest in the LP. The only difference is that a play may now be either a \emph{pull} or an \emph{exploit} (which is deterministically determined once we fix a strategy forest); if this play is an exploit, the arm does not proceed to other states and is dropped. Again we let the algorithm run ignoring the pull and exploit budgets, but in the analysis we only collect reward from exploits which happen before either budget is exceeded. The lower bound on the expected reward collected is again very similar to the previous model; the only change is to the statement of \lref[Lemma]{lem:beforetime}, which now becomes the following. \begin{lemma} \label{xlem:beforetime} For arm $i$ and strategy $\mathsf{x}\mathbb{T}(i,j)$, suppose arm $i$ samples strategy $j$ in \lref[step]{alg:mabstep1} of \textsf{AlgMAB} (i.e., $\sigma(i) = j$). Given that the algorithm plays the arm $i$ in state $u$ during this run, the probability that this play happens before time $\mathsf{time}(i,j,u)$ \textbf{and} the number of exploits before this play is smaller than $K$, is at least $11/24$.
\end{lemma} In \lref[Section]{sec:mab}, we showed \lref[Lemma]{lem:beforetime} by showing that \[ \Pr [ {\tau}_u > \mathsf{time}(i,j,u) \mid \mathcal{E}_{iju} ] \leq \textstyle \frac{1}{2}. \] Additionally, suppose we can also show that \begin{equation} \Pr [ \textsf{number of exploits before }~u > (K-1) \mid \mathcal{E}_{iju} ] \leq \textstyle \frac{1}{24}. \label{eq:xpl-bound} \end{equation} Then we would have \[ \Pr [( \textsf{number of exploits before }~u > (K-1) )\vee ({\tau}_u > \mathsf{time}(i,j,u) ) \mid \mathcal{E}_{iju} ] \leq \textstyle 13/24, \] which would imply the lemma. To show \lref[Equation]{eq:xpl-bound} we start with an analog of \lref[Lemma]{lem:visitprob} for bounding arm exploitations: conditioned on $\mathcal{E}_{i,j,u}$ and $\sigma(i') = j'$, the probability that arm $i'$ is exploited at state $u'$ before $u$ is exploited is at most $\mathsf{exploit}(i',j',u')/\mathsf{prob}(i',j',\rho_{i'})$. This holds even when $i' = i$: in this case the probability of arm $i$ being exploited before reaching $u$ is zero, since an arm is abandoned after its first exploit. Since $\sigma(i') = j'$ with probability $\mathsf{prob}(i',j',\rho_{i'})/24$, it follows that the probability of exploiting arm $i'$ in state $u'$ conditioned on $\mathcal{E}_{i,j,u}$ is at most $\sum_{j'} \mathsf{exploit}(i',j',u')/24$. By linearity of expectation, the expected number of exploits before $u$ conditioned on $\mathcal{E}_{i,j,u}$ is at most $\sum_{(i',j',u')} \mathsf{exploit}(i',j',u')/24 = \sum_{u',t} x_{u',t}/24$, which is upper bounded by $K/24$ due to LP feasibility. Then \lref[Equation]{eq:xpl-bound} follows from Markov's inequality. The rest of the argument is identical to that in \lref[Section]{sec:mab}, giving us the following. \begin{theorem} There is a randomized $O(1)$-approximation algorithm for the \ensuremath{\mathsf{MAB}}\xspace problem with an exploration budget of $B$ and an exploitation budget of $K$. \end{theorem} \end{document}
arXiv
Generalized Stokes system in Orlicz spaces A direct proof of the Tonelli's partial regularity result June 2012, 32(6): 2101-2123. doi: 10.3934/dcds.2012.32.2101 Quasi-periodic solutions for derivative nonlinear Schrödinger equation Meina Gao 1, and Jianjun Liu 2, School of Science, Shanghai Second Polytechnic University, Shanghai 201209, China School of Mathematical Sciences, Fudan University, Shanghai 200433 Received February 2011 Revised June 2011 Published February 2012 In this paper, we discuss the existence of time quasi-periodic solutions for the derivative nonlinear Schrödinger equation $$\label{1.1}\mathbf{i} u_t+u_{xx}+\mathbf{i} f(x,u,\bar{u})u_x+g(x,u,\bar{u})=0 $$ subject to Dirichlet boundary conditions. Using an abstract infinite dimensional KAM theorem dealing with unbounded perturbation vector-field and Birkhoff normal form, we will prove that there exist a Cantorian branch of KAM tori and thus many time quasi-periodic solutions for the above equation. Keywords: KAM theory, Derivative nonlinear Schrödinger equation, Normal form.. Mathematics Subject Classification: Primary: 37K55; Secondary: 35Q5. Citation: Meina Gao, Jianjun Liu. Quasi-periodic solutions for derivative nonlinear Schrödinger equation. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 2101-2123. doi: 10.3934/dcds.2012.32.2101 D. Bambusi, Birkhoff normal form for some nonlinear PDEs,, Comm. Math. Phys., 234 (2003), 253. doi: 10.1007/s00220-002-0774-4. Google Scholar D. Bambusi and S. Graffi, Time quasi-periodic unbounded perturbations of Schrödinger operators and KAM methods,, Comm. Math. Phys., 219 (2001), 465. doi: 10.1007/s002200100426. Google Scholar D. Bambusi and B. Grébert, Birkhoff normal form for partial differential equations with tame modulus,, Duke Math. J., 135 (2006), 507. Google Scholar D. Bambusi and S. Paleari, Families of periodic solutions of resonant PDEs, J. Nonlinear Sci., 11 (2001), 69. doi: 10.1007/s003320010010. Google Scholar M. Berti and P. 
Nowhere continuous function

In mathematics, a nowhere continuous function, also called an everywhere discontinuous function, is a function that is not continuous at any point of its domain. If $f$ is a function from real numbers to real numbers, then $f$ is nowhere continuous if for each point $x$ there is some $\varepsilon >0$ such that for every $\delta >0,$ we can find a point $y$ such that $|x-y|<\delta $ and $|f(x)-f(y)|\geq \varepsilon $. Therefore, no matter how close we get to any fixed point, there are even closer points at which the function takes not-nearby values.

More general definitions of this kind of function can be obtained by replacing the absolute value by the distance function in a metric space, or by using the definition of continuity in a topological space.

Examples

Dirichlet function

Main article: Dirichlet function

One example of such a function is the indicator function of the rational numbers, also known as the Dirichlet function. This function is denoted as $\mathbf {1} _{\mathbb {Q} }$ and has domain and codomain both equal to the real numbers. By definition, $\mathbf {1} _{\mathbb {Q} }(x)$ is equal to $1$ if $x$ is a rational number and $0$ otherwise. More generally, if $E$ is any subset of a topological space $X$ such that both $E$ and the complement of $E$ are dense in $X,$ then the real-valued function which takes the value $1$ on $E$ and $0$ on the complement of $E$ will be nowhere continuous. Functions of this type were originally investigated by Peter Gustav Lejeune Dirichlet.[1]

Non-trivial additive functions

See also: Cauchy's functional equation

A function $f:\mathbb {R} \to \mathbb {R} $ is called an additive function if it satisfies Cauchy's functional equation: $f(x+y)=f(x)+f(y)\quad {\text{ for all }}x,y\in \mathbb {R} .$ For example, every map of the form $x\mapsto cx,$ where $c\in \mathbb {R} $ is some constant, is additive (in fact, it is linear and continuous).
Furthermore, every linear map $L:\mathbb {R} \to \mathbb {R} $ is of this form (by taking $c:=L(1)$). Although every linear map is additive, not all additive maps are linear. An additive map $f:\mathbb {R} \to \mathbb {R} $ is linear if and only if there exists a point at which it is continuous, in which case it is continuous everywhere. Consequently, every non-linear additive function $\mathbb {R} \to \mathbb {R} $ is discontinuous at every point of its domain. Nevertheless, the restriction of any additive function $f:\mathbb {R} \to \mathbb {R} $ to any real scalar multiple of the rational numbers $\mathbb {Q} $ is continuous; explicitly, this means that for every real $r\in \mathbb {R} ,$ the restriction $f{\big \vert }_{r\mathbb {Q} }:r\,\mathbb {Q} \to \mathbb {R} $ to the set $r\,\mathbb {Q} :=\{rq:q\in \mathbb {Q} \}$ is a continuous function. Thus if $f:\mathbb {R} \to \mathbb {R} $ is a non-linear additive function then for every point $x\in \mathbb {R} ,$ $f$ is discontinuous at $x$ but $x$ is also contained in some dense subset $D\subseteq \mathbb {R} $ on which $f$'s restriction $f\vert _{D}:D\to \mathbb {R} $ is continuous (specifically, take $D:=x\,\mathbb {Q} $ if $x\neq 0,$ and take $D:=\mathbb {Q} $ if $x=0$).

Discontinuous linear maps

See also: Discontinuous linear functional and Continuous linear map

A linear map between two topological vector spaces, such as normed spaces for example, is continuous (everywhere) if and only if there exists a point at which it is continuous, in which case it is even uniformly continuous. Consequently, every linear map is either continuous everywhere or else continuous nowhere. Every linear functional is a linear map, and on every infinite-dimensional normed space there exists some discontinuous linear functional.

Other functions

The Conway base 13 function is discontinuous at every point.
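The density phenomenon behind the Dirichlet-type examples above can be checked in exact arithmetic. A minimal sketch (the two-coordinate representation $x = a + b{\sqrt 2}$ with rational $a, b$ is a toy model introduced here, not part of the article): since $\sqrt 2$ is irrational, such an $x$ is rational exactly when $b = 0$, and both the points with $b = 0$ and those with $b \neq 0$ are dense in the reals, so the indicator jumps between $0$ and $1$ on every interval.

```python
from fractions import Fraction

def dirichlet(a: Fraction, b: Fraction) -> int:
    """Indicator of the rationals evaluated on x = a + b*sqrt(2).

    sqrt(2) is irrational, so x is rational exactly when b == 0.  Numbers
    with b == 0 and numbers with b != 0 are each dense in the reals, so on
    every interval the function takes both values 0 and 1: it is nowhere
    continuous.
    """
    return 1 if b == 0 else 0

print(dirichlet(Fraction(1, 2), Fraction(0)))  # 1: 1/2 is rational
print(dirichlet(Fraction(0), Fraction(1)))     # 0: sqrt(2) is irrational
```

The restriction of this representation to $b = 0$ is just the constant $1$, mirroring the fact that the Dirichlet function is continuous when restricted to the rationals alone.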
Hyperreal characterisation

A real function $f$ is nowhere continuous if its natural hyperreal extension has the property that every $x$ is infinitely close to a $y$ such that the difference $f(x)-f(y)$ is appreciable (that is, not infinitesimal).

See also

• Blumberg theorem – even if a real function $f:\mathbb {R} \to \mathbb {R} $ is nowhere continuous, there is a dense subset $D$ of $\mathbb {R} $ such that the restriction of $f$ to $D$ is continuous.
• Thomae's function (also known as the popcorn function) – a function that is continuous at all irrational numbers and discontinuous at all rational numbers.
• Weierstrass function – a function continuous everywhere (inside its domain) and differentiable nowhere.

References

1. Lejeune Dirichlet, Peter Gustav (1829). "Sur la convergence des séries trigonométriques qui servent à représenter une fonction arbitraire entre des limites données". Journal für die reine und angewandte Mathematik. 4: 157–169.

External links

• "Dirichlet-function", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Dirichlet Function, from MathWorld
• The Modified Dirichlet Function by George Beck, The Wolfram Demonstrations Project.
\begin{document} \title{Efficient Representation and Approximation of Model Predictive Control Laws via Deep Learning} \author{Benjamin~Karg, Sergio~Lucia,~\IEEEmembership{Member,~IEEE} \thanks{B. Karg and S. Lucia are with the Chair of Internet of Things for Smart Buildings, TU Berlin, and Einstein Center Digital Future, Einsteinufer 17, 10587 Berlin, Germany, e-mail: [email protected], [email protected].}} \markboth{IEEE Transactions on Cybernetics}{Karg \MakeLowercase{\textit{and}} Lucia: Efficient Representation and Approximation of Model Predictive Control Laws via Deep Learning} \onecolumn © 2020 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. \twocolumn \maketitle \begin{abstract} We show that artificial neural networks with rectifier units as activation functions can exactly represent the piecewise affine function that results from the formulation of model predictive control of linear time-invariant systems. The choice of deep neural networks is particularly interesting as they can represent exponentially many more affine regions compared to networks with only one hidden layer. We provide theoretical bounds on the minimum number of hidden layers and neurons per layer that a neural network should have to exactly represent a given model predictive control law. The proposed approach has a strong potential as an approximation method of predictive control laws, leading to better approximation quality and significantly smaller memory requirements than previous approaches, as we illustrate via simulation examples. We also suggest different alternatives to correct or quantify the approximation error.
Since the online evaluation of neural networks is extremely simple, the approximated controllers can be deployed on low-power embedded devices with small storage capacity, enabling the implementation of advanced decision-making strategies for complex cyber-physical systems with limited computing capabilities. \end{abstract} \begin{IEEEkeywords} Predictive control, neural networks, machine learning. \end{IEEEkeywords} \IEEEpeerreviewmaketitle \section{Introduction} Model predictive control (MPC) is a popular control strategy that computes control inputs by solving a numerical optimization problem. A mathematical model is used to predict the future behavior of the system, and an optimal sequence of control inputs is computed by solving an optimization problem that minimizes a given objective function subject to constraints. The main reasons for its success are its ability to systematically handle multiple-input multiple-output systems, nonlinearities and constraints. The main challenge of MPC is that it requires the solution of an optimization problem at each sampling time of the controller. For this reason, traditional applications included those related to slow systems such as chemical processes~\cite{qin2003}, \cite{rawlings2009}. During the past two decades, a large research effort has been devoted to broadening the range of MPC applications, leading to various specific MPC algorithms ranging from event-triggered robust control~\cite{liu2018aperiodic} to hierarchical distributed systems~\cite{hans2019hierarchical}. To enable advanced decision-making for complex cyber-physical systems \cite{raman2014}, extending MPC to fast embedded systems with limited computing capabilities is an important challenge. Two different approaches have been followed to achieve this goal.
The first approach includes the development of fast solvers and tailored implementations \cite{mattingley2012cvxgen} that can solve the required optimization problems in real time for fast systems. Different variations of Nesterov's fast gradient method (see e.g. \cite{richter2012}, \cite{giselsson2013}, \cite{koegel2011}) and of the alternating direction method of multipliers (ADMM) \cite{boyd2011distributed} have been very successful for embedded optimization and model predictive control. Different versions of these algorithms have been used to obtain MPC implementations on low-cost microcontrollers \cite{zometa2013}, \cite{lucia2016_ARC} or high-performance FPGAs \cite{jerez2014}, \cite{lucia2018_TII}. The second approach to extend the application of MPC to fast and embedded systems is usually called explicit MPC. The MPC problem for linear time-invariant systems is a parametric quadratic program whose solution is a piecewise affine function defined on polytopes and only depends on the current state of the system \cite{bemporad2002}. Explicit MPC exploits this idea by precomputing and storing the piecewise affine function that completely defines the MPC feedback law. The online evaluation of the explicit MPC law reduces to finding the polytopic region in which the system is currently located and applying the corresponding affine law. The main drawback of explicit MPC is that the number of regions on which the control law is defined grows exponentially with the prediction horizon and the number of constraints. The inherent growth of the memory footprint and of the complexity of the point location problem limits the application to small systems and small prediction horizons, especially in the case of embedded systems with limited storage capabilities and computational power.
To counteract the massive memory requirements, some approaches try to simplify the representation of the control law by eliminating redundant regions \cite{geyer2008} or by using different number representations \cite{ingole2017}. Other approaches try to approximate the exact explicit MPC solution to further reduce the memory requirements of the approach (see a review in \cite{alessio2009}). Approximate explicit MPC schemes include the use of simplicial partitions \cite{Bemporad2011}, neural networks \cite{parisini1995}, radial basis functions \cite{cseko2015}, the lattice representation \cite{wen2009analytical} or using a smaller number of regions to describe the MPC law \cite{holaza2013}. To reduce the complexity of the point location problem, binary search trees (BST) \cite{tondel2003evaluation} introduce a tree structure where the nodes represent unique hyperplanes. At each node it is checked on which side of the hyperplane the state lies, until a leaf node is reached. At the leaf node, a unique feedback law is identified and evaluated. This method renders the online computation time logarithmic in the number of regions, but precomputation times can be prohibitive or intractable for larger problems \cite{bayat2012flexible}. Modifications of BSTs include approximations of the exact solution via hypercubic regions \cite{johansen2003approximate}, truncated BSTs combined with direct search to restrict the depth of BSTs \cite{bayat2011combining}, computation of arbitrary hyperplanes which balance the tree and minimize its depth \cite{fuchs2010optimized}, and merging a BST with the lattice representation \cite{bayat2012flexible}. Motivated by new advances in the theoretical description of the representation capabilities of deep neural networks \cite{silver2016}, \cite{safran2017}, the goal of this work is to provide a scheme for an approximated explicit MPC controller with significantly lower memory and computational requirements compared to existing approaches.
Deep neural networks (with several hidden layers) can represent exponentially many more linear regions than shallow networks (with only one hidden layer). This attribute has been exploited in recent works for complex control tasks such as mixed-integer MPC \cite{karg2018} and robust nonlinear MPC \cite{lucia2018deep}. While many control approaches have used neural networks to capture unknown or nonlinear dynamics of the model~\cite{liu2018neural}, \cite{ding2018neural}, in this work neural networks are used to directly approximate the optimal control law. This work differs from \cite{chen2018approximating}, which is based on similar ideas, by presenting bounds on the necessary size of a deep network to represent an explicit MPC law exactly, as well as by presenting a comprehensive comparison with other state-of-the-art approximate explicit MPC methods. Also, statistical verification techniques are provided that can be carried out with high-fidelity models, even if the controller was designed with simpler models. The main contributions of the paper are: \begin{itemize} \item the derivation of explicit bounds for the required size (width and depth) that a deep network should have to exactly represent a given explicit MPC solution. \item the presentation of an approach to approximate explicit MPC based on deep learning which achieves better accuracy with lower memory requirements when compared to other approximation techniques. \item statistical verification techniques to assess the validity of the obtained approximate controllers. \item an embedded implementation of the resulting controllers. \end{itemize} The remainder of the paper is organized as follows. Section \ref{sec:background} introduces background information about model predictive control and neural networks.
Section \ref{sec:deep} presents explicit bounds for a deep network to be able to represent exactly an MPC law and serves as a motivation for the approximation of explicit MPC laws presented in Section \ref{sec:approximate}, where different techniques to deal with the approximation error are also presented. Section \ref{sec:results} illustrates the potential of the approach with two simulation examples and the paper is concluded in Section \ref{sec:conclusions}. \section{Background and Motivation}\label{sec:background} \subsection{Notation} We denote by $\mathbb{R}$, $\mathbb{R}^{n}$ and $\mathbb{R}^{n \times m}$ the real numbers, $n$-dimensional real vectors and $n \times m$ dimensional real matrices, respectively. The interior of a set is denoted by $\text{int}(\cdot)$, its cardinality by $|\cdot|$ and $\left \lfloor \cdot \right \rfloor$ denotes the \emph{floor} operation, i.e. the rounding to the nearest lower integer. The probability of an event is denoted by $P(\cdot)$ and the composition of two functions $f$ and $g$ by $g\circ f(\cdot) = g(f(\cdot))$. \subsection{Explicit MPC} Model predictive control (MPC) is an optimal control scheme that uses a system model to predict the future evolution of a system. We consider discrete linear time-invariant (LTI) systems: \begin{align}\label{eq:LTI} x_{k+1} = Ax_k + Bu_k, \end{align} where $x \in \mathbb{R}^{n_x}$ is the state vector, $u \in \mathbb{R}^{n_u}$ is the control input, $A \in \mathbb{R}^{n_x \times n_x}$ is the system matrix, $B \in \mathbb{R}^{n_x \times n_u}$ is the input matrix and the pair $(A,B)$ is controllable. 
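As an illustration of the dynamics \eqref{eq:LTI}, a short simulation sketch (the double-integrator matrices below are illustrative placeholders, not taken from any example in this paper):

```python
import numpy as np

# Toy double-integrator instance of x_{k+1} = A x_k + B u_k
# (illustrative matrices; any controllable pair (A, B) fits the setting).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

def simulate(x0, u_seq):
    """Roll the LTI dynamics forward over the input sequence u_seq."""
    xs = [np.asarray(x0, dtype=float)]
    for u in u_seq:
        xs.append(A @ xs[-1] + B @ np.atleast_1d(float(u)))
    return xs

traj = simulate([0.0, 0.0], [1.0, 1.0, 0.0])  # open-loop rollout over N = 3 steps
```

In the MPC setting below, the input sequence fed to such a rollout is the decision variable of the optimization problem rather than a fixed signal.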
Using a standard quadratic cost function, the following constrained finite time optimal control problem with a horizon of $N$ steps should be solved at each sampling time to obtain the MPC feedback law: \begin{subequations}\label{eq:FTOCP} \begin{align} &\underset{\tilde u}{\text{minimize}} && x_N^T P x_N + \sum_{k=0}^{N-1}{x_k^T Q x_k + u_k^T R u_k} \span \span \span \\ &\text{subject to} &&x_{k+1} = Ax_k + Bu_k, \\ & &&C_x x_k \leq c_x,\, C_f x_N \leq c_f, \label{eq:state_terminal_cons}\\ & &&C_u u_k \leq c_u, \label{eq:input_cons}\\ & &&x_0 = x_{\text{init}}, \\ & && \forall \,\, k = 0,\dots, N-1, \end{align} \end{subequations} where $\tilde u = [u_0, \dots, u_{N-1}]^T$ is a vector that contains the sequence of control inputs and $P \in \mathbb{R}^{n_x \times n_x}$, $Q \in \mathbb{R}^{n_x \times n_x}$ and $R \in \mathbb{R}^{n_u \times n_u}$ are the weighting matrices. The weighting matrices are chosen such that $P \succeq 0$ and $Q \succeq 0$ are positive semidefinite, and $R \succ 0$ is positive definite. The state, terminal and input constraints are bounded polytopic sets $\mathcal X$, $\mathcal X_f$ and $\mathcal U$ defined by the matrices $C_x\in \mathbb R^{n_{\text{cx}}\times n_x}$, $C_f\in \mathbb R^{n_{\text{cf}}\times n_x}$, $C_u\in \mathbb R^{n_{\text{cu}}\times n_u}$ and the vectors $c_x\in \mathbb R^{n_{\text{cx}}}$, $c_f\in \mathbb R^{n_{\text{cf}}}$, $c_u\in \mathbb R^{n_{\text{cu}}}$. The terminal cost defined by $P$ as well as the terminal set $\mathcal X_f$ are usually chosen in such a way that stability of the closed-loop system and recursive feasibility of the optimization problem are guaranteed~\cite{mayne2000}. The set of initial states $x_{\text{init}}$ for which~\eqref{eq:FTOCP} has a feasible solution for a given prediction horizon $N$ is called the feasibility region and is denoted by $\mathcal X_N$.
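To make the condensation step concrete: for an unconstrained instance of \eqref{eq:FTOCP}, the predicted states can be stacked as $X = S_x x_0 + S_u \tilde u$, which yields a quadratic cost in $\tilde u$ alone. The following sketch (hypothetical double-integrator data; state and input constraints omitted for brevity, so this is only the unconstrained core of the problem) builds these matrices and recovers the unconstrained minimizer:

```python
import numpy as np

# Hypothetical double-integrator data (not from the paper); constraints omitted.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
Q = np.eye(2); P = np.eye(2); R = np.array([[0.1]])
N, nx, nu = 3, 2, 1

# Stack predictions: X = Sx x0 + Su u_tilde with X = [x_1; ...; x_N].
Sx = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, N + 1)])
Su = np.zeros((N * nx, N * nu))
for k in range(1, N + 1):
    for j in range(k):  # u_j influences x_k through A^(k-1-j) B
        Su[(k - 1) * nx:k * nx, j * nu:(j + 1) * nu] = \
            np.linalg.matrix_power(A, k - 1 - j) @ B

Qbar = np.kron(np.eye(N), Q); Qbar[-nx:, -nx:] = P   # stage + terminal weights
Rbar = np.kron(np.eye(N), R)

# Condensed cost J = u'Fu + x0'Gu + const.
F = Su.T @ Qbar @ Su + Rbar
G = 2.0 * Sx.T @ Qbar @ Su

def cost(x0, u):
    X = Sx @ x0 + Su @ u
    return float(X @ Qbar @ X + u @ Rbar @ u + x0 @ Q @ x0)

x0 = np.array([1.0, 0.0])
u_star = np.linalg.solve(F, -0.5 * G.T @ x0)  # unconstrained minimizer
```

With the polytopic constraints included, the same $F$ and $G$ define the multi-parametric quadratic program whose parametric solution is discussed next.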
The optimization problem \eqref{eq:FTOCP} can be reformulated as a multi-parametric problem \cite{bemporad2002} that only depends on the current system state $x_{\text{init}}$: \begin{subequations}\label{eq:condensed} \begin{align} &\underset{\tilde u}{\text{minimize}}&& \tilde u^T F \tilde u + x_{\text{init}}^T G \tilde u + x_{\text{init}}^T H x_{\text{init}} \span \\ &\text{subject to} && C_c \tilde u \leq T x_{\text{init}} + c_c, \end{align} \end{subequations} where $F \in \mathbb{R}^{N n_u \times N n_u}$, $G \in \mathbb{R}^{n_x \times N n_u}$, $H \in \mathbb{R}^{n_x \times n_x}$, $C_c \in \mathbb{R}^{N n_{\text{ineq}} \times N n_u}$, $T \in \mathbb{R}^{N n_{\text{ineq}} \times n_x}$, $c_c \in \mathbb{R}^{N n_{\text{ineq}}}$ and $n_{\text{ineq}}$ is the total number of inequalities in~\eqref{eq:FTOCP}. The solution of the multi-parametric quadratic programming problem~\eqref{eq:condensed} is a piecewise affine (PWA) function of the form \cite{bemporad2002}: \begin{align} \mathcal{K}(x_{\text{init}}) = \begin{cases} K_1 x_{\text{init}}+g_1 & \text{if} \quad x_{\text{init}} \in \mathcal{R}_1, \\ & \vdots \\ K_{n_{\text{r}}} x_{\text{init}}+g_{n_{\text{r}}} & \text{if} \quad x_{\text{init}} \in \mathcal{R}_{n_{\text{r}}}, \end{cases} \label{eq:exp_mpc} \end{align} with $n_{\text{r}}$ regions, $K_i \in \mathbb{R}^{Nn_u \times n_x}$ and $g_i \in \mathbb{R}^{Nn_u}$. Each region $\mathcal R_i$ is described by a polyhedron \begin{align} \mathcal{R}_i = \{x \in \mathbb{R}^{n_x} \mid Z_i x \leq z_i\} \quad \forall i = 1,\dots,n_r, \end{align} where $Z_i \in \mathbb{R}^{c_i \times n_x}$, $z_i \in \mathbb{R}^{c_i}$ describe the $c_i$ halfspaces $a_{i,j} x_{\text{init}} \leq b_{i,j}$ of the $i$-th region with $j = 1,\dots,c_i$, $a_{i,j} \in \mathbb{R}^{1 \times n_x}$ and $b_{i,j} \in \mathbb{R}$. 
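The online evaluation of \eqref{eq:exp_mpc} is a point-location problem followed by one affine map. A minimal sequential-search sketch (the 1-D saturation-style law below is a hypothetical example, not the solution of \eqref{eq:condensed}):

```python
import numpy as np

# Regions R_i = {x : Z_i x <= z_i} with feedback u = K_i x + g_i.
# Hypothetical 1-D saturation-style law for illustration only.
regions = [
    (np.array([[1.0]]),         np.array([-1.0]),       # x <= -1
     np.array([[0.0]]),         np.array([-1.0])),      # u = -1
    (np.array([[1.0], [-1.0]]), np.array([1.0, 1.0]),   # -1 <= x <= 1
     np.array([[-1.0]]),        np.array([0.0])),       # u = -x
    (np.array([[-1.0]]),        np.array([-1.0]),       # x >= 1
     np.array([[0.0]]),         np.array([1.0])),       # u = +1
]

def pwa_eval(x):
    """Sequential search over the partition; a binary search tree would make
    this logarithmic in the number of regions (cf. the introduction)."""
    for Z, z, K, g in regions:
        if np.all(Z @ x <= z + 1e-9):
            return K @ x + g
    raise ValueError("x lies outside the feasible partition")
```

The storage estimate $\Gamma_{\mathcal{K}}$ below counts exactly the hyperplanes $(Z_i, z_i)$ and feedback laws $(K_i, g_i)$ that such an evaluator must keep in memory.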
The formulation \eqref{eq:exp_mpc} is defined on the bounded polytopic partition $\mathcal{R}_\Omega = \cup_{i=1}^{n_{\text{r}}} \mathcal{R}_i$ with $\text{int}(\mathcal{R}_i) \cap \text{int}(\mathcal{R}_j) = \emptyset$ for all $i \neq j$. Most hyperplanes are shared by neighbouring regions and the feedback law can be identical for two or more regions. Hence, the memory needed to store the explicit MPC controller~\eqref{eq:exp_mpc} can be approximated as \begin{align}\label{eq:memory_empc} \Gamma_{\mathcal{K}} = \alpha_{\text{bit}} \left( \left( n_{\text{h}} \left( n_x + 1 \right) \right) + n_{\text{f}} \left(n_x n_u + n_u \right) \right), \end{align} where $n_{\text{h}}$ is the number of unique hyperplanes, $n_{\text{f}}$ is the number of unique feedback laws and $\alpha_{\text{bit}}$ is the memory necessary to store a real number. Since for the actual implementation of the explicit MPC law only the input of the first time step is needed, only the first $n_u$ rows of $K_j$ and $g_j$ for $j=1,\dots,n_{\text{f}}$ have to be stored which equals $n_x n_u+n_u$ numbers per unique feedback law. One main drawback of the explicit MPC formulation is that the number of regions for an exact representation can grow exponentially with respect to the horizon and number of constraints~\cite{bemporad2002}, which leads to large memory requirements and might render the application of the method intractable. For the simple example of the inverted pendulum on a cart, which is presented in detail in Section~\ref{sec:results}, with $n_x=4$ states, $n_u=1$ input and box constraints on the control input and the states, the explicit solution consists of $n_{\text{r}} = 91$ regions for $N=3$, $n_{\text{r}} = 191$ regions for $N=4$ and $n_{\text{r}} = 323$ regions for $N=5$. For $N=10$ as many as $n_{\text{r}} = 1638$ regions are obtained. \subsection{Artificial Neural Networks} This subsection shortly recaps the fundamental concepts of artificial neural networks. 
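In code, such a feed-forward ReLU network amounts to a loop of affine maps and element-wise maxima; a sketch with toy dimensions and random (hypothetical) parameters, not a trained controller:

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, M, L, n_u = 2, 4, 3, 1  # toy sizes, chosen only for illustration

# Hidden layers f_l(xi) = W_l xi + b_l with ReLU, plus a linear output layer.
Ws = ([rng.standard_normal((M, n_x))]
      + [rng.standard_normal((M, M)) for _ in range(L - 1)]
      + [rng.standard_normal((n_u, M))])
bs = [rng.standard_normal(M) for _ in range(L)] + [rng.standard_normal(n_u)]

def network(x):
    xi = np.asarray(x, dtype=float)
    for W, b in zip(Ws[:-1], bs[:-1]):
        xi = np.maximum(0.0, W @ xi + b)  # ReLU: g_l(f_l) = max(0, f_l)
    return Ws[-1] @ xi + bs[-1]           # affine output layer f_{L+1}
```

This online evaluation involves only $L+1$ matrix-vector products, which is what makes the representation attractive for low-power embedded devices.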
A feed-forward neural network is defined as a sequence of layers of neurons which determines a function $\mathcal{N}:\mathbb{R}^{n_x} \rightarrow \mathbb{R}^{n_u}$ of the form \begin{equation}\label{eq:neural_network} \begin{split} \mathcal{N}(x;\theta,M,L) = \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\\ \quad \bigg \{ \begin{array}{lll} f_{L+1} \circ g_L \circ f_L \circ \dots \circ g_1 \circ f_1(x) & \text{for} & L \geq 2, \\ f_{L+1} \circ g_1 \circ f_1(x), & \text{for} & L = 1, \end{array} \end{split} \end{equation} where the input of the network is $x \in \mathbb{R}^{n_x}$ and the output of the network is $ u \in \mathbb{R}^{n_u}$. $M$ is the number of neurons in each hidden layer and $L$ is the number of hidden layers. If $L\geq2$, $\mathcal N$ is described as a \emph{deep} neural network and if $L=1$ as a \emph{shallow} neural network. Each hidden layer consists of an affine function: \begin{align}\label{eq:affine_function} f_l(\xi_{l-1}) = W_l\xi_{l-1}+b_l, \end{align} where $\xi_{l-1} \in \mathbb{R}^M$ is the output of the previous layer with $\xi_0 = x$. The second element of the neural network is a nonlinear activation function $g_l$. In this paper, exclusively rectifier linear units (ReLU) are considered as activation function, which compute the element-wise maximum between zero and the affine function of the current layer $l$: \begin{align}\label{eq:ReLU} g_{l}(f_{l}) = \max(0,f_{l}). 
\end{align} The parameter $\theta = \{\theta_1, \dots, \theta_{L+1}\}$ contains all the weights and biases of the affine functions of each layer \begin{align} \theta_l = \{W_l,b_l\} \quad \forall l = 1, \dots, L+1, \end{align} where the weights are \begin{align} W_l \in \begin{cases} \mathbb{R}^{M \times n_x} & \text{if} \quad l=1, \\ \mathbb{R}^{M \times M} & \text{if} \quad l=2,\dots,L, \\ \mathbb{R}^{n_u \times M} & \text{if} \quad l=L+1, \end{cases} \end{align} and the biases are \begin{align} b_l \in \begin{cases} \mathbb{R}^M & \text{if} \quad l=1,\dots,L, \\ \mathbb{R}^{n_u} & \text{if} \quad l = L+1. \end{cases} \end{align} \subsection{Motivation} Implementing the explicit model predictive controller defined by \eqref{eq:exp_mpc} requires storing the unique sets of hyperplanes and feedback laws that define the regions and the affine controllers in each region~\eqref{eq:memory_empc}, which was shown before to be possibly prohibitive due to the exponential growth of the number of regions. The main motivation of this work is to find an efficient representation and approximation of the MPC control law, which can significantly reduce the memory requirements for the representation of the exact controller as well as for achieving a high-quality approximation that outperforms other approximate explicit MPC techniques. The basic idea that motivates this work is described in the following Lemma. \begin{lem}{\cite{montufar2014}}\label{lemma:expo} Every neural network $\mathcal N(x;\theta, M, L)$ with input $x\in \mathbb R^{n_x}$ defined as in \eqref{eq:neural_network} with ReLUs as activation functions and $M \geq n_x$ represents a piecewise affine function. In addition, a lower bound on the maximal number of affine regions that the neural network represents is given by the following expression: \begin{align*} \left( \prod \limits_{l=1}^{L-1} \left \lfloor \frac{M}{n_x}\right \rfloor^{n_x} \right) \sum \limits_{j=0}^{n_x} \binom{M}{j}.
\end{align*} \end{lem} \begin{proof}[Proof of Lemma~\ref{lemma:expo}] The neural network $\mathcal N(x;\theta, M, L)$ is a piecewise affine function because it only contains compositions of affine transformations with a piecewise affine function (ReLUs). For the derivation of the maximal number of regions, see \cite{montufar2014}. \end{proof} Lemma~\ref{lemma:expo} gives clear insight into why deep networks, as is often observed in practice, approximate complex functions better than shallow networks. In particular, Lemma~\ref{lemma:expo} implies that the number of affine regions that a neural network can represent grows exponentially with the number of layers $L$ as long as the width of the network $M$ is not smaller than the number of inputs $n_x$. The bound of Lemma~\ref{lemma:expo} can be slightly improved if $M\geq 3n_x$, as recently shown in \cite{serra2018}. At the same time, the number of parameters contained in $\theta$ that are necessary to fully describe the neural network $\mathcal N(x;\theta, M, L)$ is determined by the dimensions of the weights and biases at each layer. Assuming that storing each number requires $\alpha_{\text{bit}}$ bits, the total amount of memory necessary to store the neural network $\mathcal N(x;\theta, M, L)$ can be computed as: \begin{align}\label{eq:mem_deep} \Gamma_{\mathcal{N}} =\alpha_{\text{bit}}(&(n_x+1)M + (L-1)(M+1)M\notag \\& + (M+1)n_u). \end{align} Since $\Gamma_{\mathcal{N}}$ only grows linearly with respect to the number of layers $L$, deep ReLU networks can represent exponentially many more linear regions than shallow ones for a fixed amount of memory. This fact can be clearly seen in Fig.~\ref{fig:regions_weights}. \begin{figure} \caption{Number of regions with respect to the number of weights a neural network can represent.
The parameters for this plot were chosen as $n_x = 2$, $M = 10$, $L = 1, \dots, 50$ and $n_u = 4$.}
\label{fig:regions_weights}
\end{figure}
We believe that this observation, while somewhat obvious, is a very powerful result with important implications in control theory and constitutes the main motivation for this paper.
\section{Deep learning-based explicit MPC}\label{sec:deep}
This section shows how to design a deep neural network that can exactly represent an explicit MPC feedback law~\eqref{eq:exp_mpc} of the form $\mathcal{K}(x):[0,1]^{n_x} \rightarrow {\mathbb{R}^+}^{n_u}$ by only mapping the state $x$ to the first optimal control input $u_0^*$. Considering only the control input of the first time step is sufficient, because after its application a new control input trajectory $\tilde{u}^*$ is computed in the MPC setting. We make use of two lemmas to derive specific bounds for a deep neural network to be able to exactly represent any explicit MPC law $\mathcal{K}(x)$. The following lemma from \cite{kripfganz1987} is used.
\begin{lem}\label{lemma:decomp}
Every scalar PWA function $f(x): \mathbb{R}^{n_x} \rightarrow \mathbb{R}$ can be written as the difference of two convex PWA functions
\begin{align}
f(x) = \gamma(x) - \eta(x),
\end{align}
where $\gamma(x): \mathbb{R}^{n_x} \rightarrow \mathbb{R}$ has $r_\gamma$ regions and $\eta(x): \mathbb{R}^{n_x} \rightarrow \mathbb{R}$ has $r_\eta$ regions.
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lemma:decomp}]
See \cite{kripfganz1987} or \cite{hempel2013}.
\end{proof}
The following lemma, recently presented in \cite{hanin2017}, gives specific bounds for the structure that a deep neural network should have to be able to exactly represent a convex piecewise affine function.
\begin{lem}\label{lemma:maximum}
A convex piecewise affine function $f:[0,1]^{n_x}\rightarrow \mathbb{R}^+$ defined as the point-wise maximum of $N$ affine functions:
\begin{align*}
f(x) = \max_{i=1,\dots, N}f_i(x),
\end{align*}
can be exactly represented by a deep ReLU network with width $M = n_x + 1$ and depth $N$.
\end{lem}
\begin{proof}[Proof of Lemma~\ref{lemma:maximum}]
See Theorem 2 from \cite{hanin2017}.
\end{proof}
One of the main contributions of this paper is given in the following theorem, which states that any explicit MPC law of the form~\eqref{eq:exp_mpc} with $\mathcal{K}(x):[0,1]^{n_x} \rightarrow {\mathbb{R}^+}^{n_u}$ can be represented by a deep ReLU neural network with a predetermined size.
\begin{thm} \label{thm:main_theorem}
There always exist parameters $\theta_{\gamma,i}$ and $\theta_{\eta,i}$ for $2n_u$ deep ReLU neural networks with depth $r_{\gamma,i}$ and $r_{\eta,i}$ for $i = 1,\dots,n_u$ and width $M = n_x + 1$, such that the vector of neural networks defined by
\begin{align} \label{eq:theorem}
\begin{split}
&\begin{bmatrix} \mathcal{N}(x;\theta_{\gamma,1},M,r_{\gamma,1}) - \mathcal{N}(x;\theta_{\eta,1},M,r_{\eta,1}) \\ \vdots \\ \mathcal{N}(x ;\theta_{\gamma,n_u},M,r_{\gamma,n_u}) - \mathcal{N}(x;\theta_{\eta,n_u},M,r_{\eta,n_u}) \\ \end{bmatrix}
\end{split}
\end{align}
can exactly represent an explicit MPC law $\mathcal{K}(x): [0,1]^{n_x} \rightarrow {\mathbb{R}^+}^{n_u}$.
\end{thm}
\begin{proof}[Proof of Theorem~\ref{thm:main_theorem}]
Every explicit MPC law $\mathcal{K}(x): [0,1]^{n_x} \rightarrow {\mathbb{R}^+}^{n_u}$ can be split into one explicit MPC law per output dimension:
\begin{align}
\mathcal{K}_{i}(x): [0,1]^{n_x} \rightarrow \mathbb{R}^+ \quad \forall i = 1,\dots,n_u.
\end{align}
Applying Lemma~\ref{lemma:decomp} to all $n_u$ MPC laws, each one of them can be decomposed into two convex scalar PWA functions:
\begin{align} \label{eq:proof_decomposed_nonvector}
\mathcal{K}_{i}(x) = \gamma_i(x) - \eta_i(x) \quad \forall i = 1,\dots,n_u,
\end{align}
where each $\gamma_i(x)$ and each $\eta_i(x)$ are composed of $r_{\gamma,i}$ and $r_{\eta,i}$ affine regions, respectively. The explicit MPC law $\mathcal{K}(x): [0,1]^{n_x} \rightarrow {\mathbb{R}^+}^{n_u}$ can thus be vectorized as
\begin{align} \label{eq:proof_decomp}
\mathcal{K}(x) = \begin{bmatrix} \gamma_1(x) - \eta_1(x) \\ \vdots \\ \gamma_{n_u}(x) - \eta_{n_u}(x) \\ \end{bmatrix}.
\end{align}
According to Lemma~\ref{lemma:maximum}, it is always possible to find parameters $\theta_{\gamma,i}$, $\theta_{\eta,i}$ for deep ReLU networks with width $M=n_x+1$, depth not larger than $r_{\gamma,i}$, $r_{\eta,i}$ that can exactly represent the scalar convex functions $\gamma_i(x)$ and $\eta_i(x)$. This holds because any convex piecewise affine function with $N$ regions can be described as the point-wise maximum of $N$ scalar affine functions. This means that each component of the transformed explicit MPC law can be written as:
\begin{align} \label{eq:proof_NN}
\begin{split}
\gamma_{i}(x) - \eta_{i}(x) &=\mathcal{N}(x;\theta_{\gamma,i},M,r_{\gamma,i}) - \mathcal{N}(x;\theta_{\eta,i},M,r_{\eta,i}), \\
\end{split}
\end{align}
for all $i = 1,\dots,n_u$. Substituting \eqref{eq:proof_NN} in \eqref{eq:proof_decomp} results in \eqref{eq:theorem}.
\end{proof}
Theorem~\ref{thm:main_theorem} requires that the piecewise affine function maps the unit hypercube to the space of positive real numbers. Any explicit MPC law~\eqref{eq:FTOCP} can be written in this form, as long as the invertible affine transformations defined in Assumption~\ref{ass:transform} exist. This result is formalized in Corollary~\ref{cor:cor}.
\begin{ass} \label{ass:transform}
There exist two invertible affine transformations $\mathcal{A}_x: \mathcal{R}_\Omega \rightarrow [0,1]^{n_x}$ and $\mathcal{A}_u: \mathcal{U} \rightarrow {\mathbb{R}^+}^{n_u}$ for an explicit control law~\eqref{eq:exp_mpc} $\mathcal{K}_{\text{orig}}: \mathcal{R}_\Omega \rightarrow \mathcal{U}$ such that
\begin{align} \label{eq:exp_mpc_trans}
\mathcal{K}_{\text{orig}}(x) = \mathcal{A}_u^{-1} \circ \mathcal{K}(\hat{x}),
\end{align}
where $\hat{x} = \mathcal{A}_x \circ x$. The affine transformations $\mathcal{A}_x$ and $\mathcal{A}_u$ always exist when $\mathcal{R}_{\Omega}$ and $\mathcal{U}$ are compact sets, as is standard in control applications.
\end{ass}
\begin{cor}\label{cor:cor}
If, for a given explicit MPC solution~\eqref{eq:exp_mpc} $\mathcal{K}_{\text{orig}}: \mathcal{R}_\Omega \rightarrow \mathcal{U}$, there exist two invertible affine transformations such that Assumption~\ref{ass:transform} holds, then Theorem~\ref{thm:main_theorem} can be applied to the transformed MPC solution $\mathcal{K}(\hat{x}): [0,1]^{n_x} \rightarrow {\mathbb{R}^+}^{n_u}$~\eqref{eq:exp_mpc_trans}. Hence, such an explicit MPC solution of the form~\eqref{eq:exp_mpc} can be exactly represented by two invertible affine transformations and $2n_u$ deep ReLU networks with width and depth as defined in Theorem~\ref{thm:main_theorem}.
\end{cor}
The proof presented in \cite{hempel2013} for the decomposition of a PWA function into the difference of two PWA functions is constructive, which means that Theorem~\ref{thm:main_theorem} gives explicit bounds for the construction of neural networks that can exactly represent any explicit MPC of the form~\eqref{eq:exp_mpc} with $\mathcal{K}(x):[0,1]^{n_x} \rightarrow {\mathbb{R}^+}^{n_u}$, considering only the first step of the optimal control input sequence.
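The construction underlying Lemma~\ref{lemma:maximum} and Theorem~\ref{thm:main_theorem} rests on the fact that the point-wise maximum of affine functions can be evaluated exactly with ReLU operations, via the identity $\max(a,b) = a + \max(b-a,0)$. The following sketch (plain Python with NumPy; the affine pieces are arbitrary illustrative choices, not taken from the paper) checks this numerically:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def max_of_affine(x, slopes, offsets):
    """Evaluate max_i (slopes[i] @ x + offsets[i]) using only affine
    maps and ReLUs, via the identity max(a, b) = a + relu(b - a)."""
    val = slopes[0] @ x + offsets[0]
    for a, b in zip(slopes[1:], offsets[1:]):
        cand = a @ x + b
        val = val + relu(cand - val)  # one ReLU per additional affine piece
    return val

rng = np.random.default_rng(0)
n_x, N = 2, 5                         # input dimension, number of affine pieces
slopes = rng.standard_normal((N, n_x))
offsets = rng.standard_normal(N)

for _ in range(100):
    x = rng.uniform(0.0, 1.0, n_x)    # x in [0, 1]^{n_x}, as in Lemma 3
    exact = np.max(slopes @ x + offsets)
    assert abs(max_of_affine(x, slopes, offsets) - exact) < 1e-9
```

Chaining the identity $N-1$ times evaluates the maximum of $N$ affine pieces, which is the mechanism that lets a depth-$N$ ReLU network represent a convex PWA function with $N$ pieces exactly.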
Another advantage of the proposed approach is that if the explicit MPC law is represented as a set of neural networks, its online application does not require determining the current region $\mathcal{R}_i$ and only needs the evaluation of the neural networks. This evaluation is a straightforward composition of affine functions and simple nonlinearities, which facilitates the implementation of the proposed controller on embedded systems. We illustrate Theorem~\ref{thm:main_theorem} with a small example of an oscillator with the discrete system matrices
\begin{equation*}
\begin{aligned} &A= \begin{bmatrix} 0.5403 & 0.8415 \\ 0.8415 & 0.5403 \\ \end{bmatrix}, &B= \begin{bmatrix} -0.4597 \\ 0.8415 \\ \end{bmatrix}. \end{aligned}
\end{equation*}
We chose the tuning parameters for \eqref{eq:FTOCP} as $P=0$, $R=1$, $Q=2I$ and the horizon as $N=1$. The state constraints are given by $\lvert x_i \rvert \leq 1 \, \text{for} \, i=1,2$ and input constraints by $\lvert u \rvert \leq 1$. We used the toolbox MPT3 \cite{MPT3} to compute the explicit MPC controller, which has 5 regions and is illustrated in the left plot of Fig.~\ref{fig:decomp}.
\begin{figure*}
\caption{Left: exact explicit MPC law of the oscillator example with 5 regions. Middle: decomposition into the convex function $\gamma_{\text{orig}}(x)$ and the concave function $-\eta_{\text{orig}}(x)$. Right: representation obtained with the two trained neural networks.}
\label{fig:decomp}
\end{figure*}
By applying two invertible affine transformations \eqref{eq:exp_mpc_trans}, the algorithm given in \cite{hempel2013} is used to decompose the explicit MPC controller into the convex function $\gamma_{\text{orig}}(x) = \mathcal{A}_u^{-1} \circ \gamma(\hat{x})$ and the concave function $-\eta_{\text{orig}}(x) = -\mathcal{A}_u^{-1} \circ \eta(\hat{x})$ with $\hat{x} = \mathcal{A}_x \circ x$, depicted in the middle plots of Fig.~\ref{fig:decomp}. Both functions consist of $r_\gamma=r_\eta=3$ regions. According to Theorem~\ref{thm:main_theorem}, two neural networks $\mathcal{N}(\hat{x};\theta_\gamma,3,3)$ and $\mathcal{N}(\hat{x};\theta_\eta,3,3)$ with width $M = n_x + 1 = 3$ and depth $r_\gamma=r_\eta=3$ are used to represent the two convex functions.
The parameter values of the networks $\theta_{\gamma}$ and $\theta_{\eta}$ are computed as the minimizers of the mean squared error defined by:
\begin{align}
\theta_\gamma = \underset{\theta_\gamma}{\text{argmin}}\sum_{i = 1}^{n_{\text{tr}}}||\mathcal{N}(\hat{x}_i;\theta_{\gamma}, M, r_\gamma)- \gamma(\hat{x}_i) ||_2^2,
\end{align}
based on $n_{\text{tr}} = 1000$ randomly chosen sampling points for the functions $\gamma(\hat{x})$ (and analogously for $\eta(\hat{x})$). The learned representation of the neural networks $\gamma_{\text{orig}}(x) - \eta_{\text{orig}}(x) = \mathcal{A}_u^{-1} \circ (\mathcal{N}(\hat{x};\theta_{\gamma},M,r_{\gamma}) - \mathcal{N}(\hat{x};\theta_{\eta},M,r_{\eta}))$ is shown in the right plot of Fig.~\ref{fig:decomp}, which is the same function as the original explicit MPC controller. The training procedure is considered finished when the maximal error $e_{\text{prox}} =\max_{\hat{x}}{|\mathcal{K}(\hat{x}) - (\gamma(\hat{x}) - \eta(\hat{x}))|}$ is less than $0.001$, which we consider to be an exact representation of the transformed explicit MPC law. The study of the sample complexity of random sampling points $n_{\text{tr}}$ that are necessary to obtain a given error $e_{\text{prox}}$ is an interesting research topic, but it is outside the scope of this paper.
\section{Approximate explicit MPC based on deep learning}\label{sec:approximate}
The previous sections outline two main connections between deep learning and explicit MPC. The first one is that deep neural networks can exactly represent the explicit MPC law, and not only approximate it arbitrarily well for an increasing number of neurons, as is known from the universal approximation theorem \cite{barron1993universal}. The second connection is that, as shown in Lemma~\ref{lemma:expo}, the number of linear regions that deep neural networks can represent grows exponentially with the number of layers.
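The contrast between the exponential region bound of Lemma~\ref{lemma:expo} and the linear memory growth of \eqref{eq:mem_deep} can be reproduced directly; the sketch below uses plain Python, the parameter values of Fig.~\ref{fig:regions_weights}, and $\alpha_{\text{bit}}=32$ as an illustrative choice:

```python
from math import comb, floor

def region_lower_bound(n_x, M, L):
    """Lower bound of Lemma 1 on the number of affine regions of a
    ReLU network with input dimension n_x, width M >= n_x, depth L."""
    assert M >= n_x
    return floor(M / n_x) ** (n_x * (L - 1)) * sum(comb(L, j) for j in range(n_x + 1))

def memory_bits(n_x, M, L, n_u, alpha_bit=32):
    """Memory footprint Gamma_N of eq. (mem_deep) in bits."""
    return alpha_bit * ((n_x + 1) * M + (L - 1) * (M + 1) * M + (M + 1) * n_u)

n_x, M, n_u = 2, 10, 4
regions = [region_lower_bound(n_x, M, L) for L in range(1, 51)]
memory = [memory_bits(n_x, M, L, n_u) for L in range(1, 51)]

# Memory grows by a constant increment per extra layer (linear) ...
assert all(m2 - m1 == 32 * (M + 1) * M for m1, m2 in zip(memory, memory[1:]))
# ... while the region bound grows at least by a factor floor(M/n_x)^{n_x} = 25.
assert all(r2 >= 25 * r1 for r1, r2 in zip(regions, regions[1:]))
```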
While knowing the structure of a network that can exactly represent a given explicit MPC function may be useful in practice, we believe that the use of deep networks to achieve efficient approximations is the most promising idea. We propose different strategies to deal with the approximation error: a feasibility recovery approach based on control invariant sets and a statistical verification technique to compute safe sets. Other works have also studied the stability guarantees for approximations of explicit MPC. In~\cite{kvasnica2011stabilizing}, the explicit law is approximated by a polynomial that lies within a stability tube around the exact MPC law, which guarantees stability. By using wavelets to approximate the MPC law and barycentric interpolation, \cite{summers2009multiscale} guarantees stability by showing that the cost function of the approximate controller is a Lyapunov function. If a neural network is used as an approximation, giving deterministic stability guarantees is challenging because of the stochastic learning procedure. In~\cite{hertneck2018learning}, a robust MPC scheme is defined, which accounts for the approximation error of the learning-based approach. If the assumed error bounds in the robust MPC formulation can be validated a-posteriori by the learned representation, probabilistic statements about stability can be made. This work focuses on constraint satisfaction, recursive feasibility as well as improved approximation quality, while stability guarantees are outside its scope.
\subsection{Training of the deep learning-based approach}
To obtain the proposed approximate explicit MPC law, training data needs to be generated by solving~\eqref{eq:condensed} for $n_{\text{tr}}$ different points.
By randomly choosing $n_{\text{tr,init}}$ initial values and solving~\eqref{eq:condensed} $n_{\text{tr,steps}}$ times in a closed-loop fashion~\eqref{eq:cl_exp} for each initial value, $n_{\text{tr}} = n_{\text{tr,init}} \cdot n_{\text{tr,steps}}$ input samples $x_{\text{tr},i}$ are obtained. Since only the first step $u_0^*$ of the corresponding optimal control input sequence $\tilde{u}_{\text{tr},i}^*$ is applied to the system in the MPC setting before a new optimal control input sequence is computed, we add the pair $(x_{\text{tr},i},u_{\text{tr},i})$ with $u_{\text{tr},i} = u_0^*$ to the training data set $\mathcal{B}$. The neural network with a chosen width $M \geq n_x$ and a number of layers $L$ is trained with the generated data to find the network parameters $\theta^*$ which minimize the mean squared error over all $n_{\text{tr}}$ training samples:
\begin{align}\label{eq:training}
\underset{\theta}{\text{minimize}}\,\,\,\frac{1}{n_{\text{tr}}}\sum_{i=1}^{n_{\text{tr}}} ||\mathcal{N}(x_{\text{tr},i};\theta, M, L)-u_{\text{tr},i}||^2.
\end{align}
Adam~\cite{kingma2014}, a variant of stochastic gradient descent, is used to solve~\eqref{eq:training} with Keras/Tensorflow \cite{chollet2015keras}, \cite{tensorflow2015-whitepaper}. The resulting deep network $\mathcal{N}(x;\theta^*,M,L)$ can be used within a feasibility recovery or a verification strategy to control the effect of the approximation error. These steps are explained in the following subsections.
\subsection{Feasibility recovery}
Since an exact representation of the explicit MPC law is generally not achieved by training, the output of the network is not guaranteed to be a feasible solution of~\eqref{eq:condensed}. In order to guarantee constraint satisfaction as well as recursive feasibility of the problem, the following strategy based on projection is proposed. The same strategy has been very recently proposed in~\cite{chen2018approximating}.
It is assumed that a convex polytopic control invariant set $\mathcal C_{\text{inv}} \subseteq \mathcal X$ is available, defined by the property that for all $x \in \mathcal C_{\text{inv}}$ there exists $u \in \mathcal U$ such that $Ax + Bu \in \mathcal C_{\text{inv}}$. The polytopic control invariant set can be described by a set of linear inequalities as $\mathcal C_{\text{inv}} = \{ x\in \mathcal X \,|\, C_{\text{inv}}x\leq c_{\text{inv}}\}$. To recover feasibility of the output generated by the neural network, an orthogonal projection onto a convex set is performed \cite{boyd2004} such that the input constraints are satisfied and the next state lies within the control invariant set:
\begin{subequations}\label{eq:projection}
\begin{align}
&\underset{\hat{u}}{\text{minimize}} & &\lVert \mathcal{N}(x;\theta,M,L)-\hat{u} \rVert_2^2 \\
&\text{subject to} & &C_{\text{inv}} (A x + B \hat u) \leq c_{\text{inv}}, \label{eq:project_states}\\
& & & C_u \hat u \leq c_u,
\end{align}
\end{subequations}
where $x \in \mathcal{C}_{\text{inv}}$ is the current state and $\hat{u}^*$ is the optimal and feasible control input to be applied. Solving~\eqref{eq:projection} directly ensures that the input applied to the system satisfies the input constraints and also that the next state satisfies the state constraints, if~\eqref{eq:projection} is feasible. This in turn means that any consequent state will also belong to $\mathcal C_{\text{inv}}$ because of~\eqref{eq:project_states} and therefore problem~\eqref{eq:projection} remains feasible at all times. This also ensures that input and state constraints of the closed-loop are satisfied at all times.
\begin{rem}\label{rem:box_constraints}
In the typical case where only box input constraints are present, solving~\eqref{eq:projection} reduces to a saturation operation and no control invariant set is necessary. In the case of state constraints, a control invariant set should be computed.
In the linear case, it is possible to compute such sets even for high dimensional systems \cite{Mirko2017}. The feasibility recovery requires solving the QP \eqref{eq:projection} with $n_u$ variables and $n_{\text{inv}}+n_{\text{cu}}$ constraints, which is often significantly smaller than the original QP defined in~\eqref{eq:condensed} with $Nn_u$ variables and $N(n_{\text{cx}}+n_{\text{cu}})+n_{\text{cf}}$ constraints. The number of half-spaces $n_{\text{inv}}$ that define a polytopic control invariant set can be reduced if required \cite{Blanco2010} at the cost of increased conservativeness.
\end{rem}
\begin{rem}
The generation of training points can be simultaneously used for the computation of a control invariant set. For example, if all the vertices of the exact explicit MPC solution are included as training points, taking the convex hull of all of them will generate a control invariant set.
\end{rem}
\subsection{Statistical verification}
The feasibility recovery strategy described in the previous subsection requires the computation of the control invariant set~\cite{Mirko2017}. Moreover, applying this strategy corresponds to solving an additional optimization problem~\eqref{eq:projection} in the control loop when the approximate controller provides an infeasible control input. Furthermore, it is often the case that the model~\eqref{eq:LTI} used for the design of the controller is just an approximation of the real system, which renders the computed control invariant set invalid. An actual deployment of the controller requires in that case extensive testing on detailed simulators when they are available~\cite{haesaert2015data},~\cite{fan2017d}. Motivated by this fact, we propose the use of data-driven approaches that enable the a-posteriori statistical verification of the closed-loop performance based on trajectories $\tilde{x} = [x_0,\dots,x_{k_{\text{end}}}]^T$ as explored in~\cite{quendlin_2018}.
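In the common special case of Remark~\ref{rem:box_constraints} (box input constraints only, no state constraints), the projection \eqref{eq:projection} reduces to an elementwise saturation of the network output, which can be sketched as follows (NumPy; the bounds and the network output are illustrative placeholders):

```python
import numpy as np

def recover_feasible_input(u_nn, u_min, u_max):
    """Feasibility recovery for box input constraints: the orthogonal
    projection of the network output onto {u | u_min <= u <= u_max}
    is simply an elementwise clip (cf. the remark on box constraints)."""
    return np.clip(u_nn, u_min, u_max)

u_min, u_max = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
u_nn = np.array([1.3, -0.4])          # hypothetical (infeasible) network output
u_safe = recover_feasible_input(u_nn, u_min, u_max)
assert np.all(u_safe >= u_min) and np.all(u_safe <= u_max)
assert np.allclose(u_safe, [1.0, -0.4])
```

For general polytopic state and input constraints, \eqref{eq:projection} is instead a small QP in the $n_u$ input variables and would be solved with a QP solver.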
The closed-loop dynamics are given by:
\begin{align}\label{eq:cl_dnn}
x_{k+1} = A x_k + B \mathcal{N}(x_k;\theta,M,L)
\end{align}
for the approximate solution~\eqref{eq:neural_network} and by:
\begin{align}\label{eq:cl_exp}
x_{k+1} = A x_k + B \mathcal{K}(x_k)
\end{align}
for the explicit MPC controller~\eqref{eq:exp_mpc}. The data-driven verification can be divided into three steps: data generation, computation of safe sets and validation of safe sets. The notation of the data sets generated in the verification process is summarized in Table~\ref{tab:description_data_sets} and explained at the corresponding points in this Section. In this work, we refer to a safe set as the set of initial conditions of a system from which the approximate controller can be applied with a controlled risk of state and input constraint violation.
\begin{table}
\caption{Summary of the accents, subscripts and superscripts used for the notation in the verification for an exemplary data set $\mathcal{A}$.}
\begin{center}
\begin{tabular}{cc}
Notation & Explanation \\ \midrule
$\tilde{\mathcal{A}}$ & Closed-loop trajectories $\tilde{x}$ \\ \midrule
$\mathcal{A}$ & Initial values $x_0$ of corresponding trajectories in $\tilde{\mathcal{A}}$\\ \midrule
$\tilde{\mathcal{A}}^+$ & Closed-loop trajectories for which no violations occurred \\ \midrule
$\tilde{\mathcal{A}}^-$ & Closed-loop trajectories for which violations occurred \\ \midrule
$\tilde{\mathcal{A}}_{\text{exp}}$ & Closed-loop trajectories generated via exact MPC~\eqref{eq:cl_exp} \\ \midrule
$\tilde{\mathcal{A}}_{\text{dnn}}$ & Closed-loop trajectories generated via NN controller~\eqref{eq:cl_dnn} \\ \bottomrule
\end{tabular}
\label{tab:description_data_sets}
\end{center}
\end{table}
\subsubsection{Data generation}
For the verification procedure, different data sets are necessary, which are all independent of the data used for training the neural networks in~\eqref{eq:training}.
The data sets needed for verification are $\tilde{\mathcal{D}} \in \{ \tilde{\mathcal{G}}_{\text{dnn}}, \tilde{\mathcal{T}}_{\text{dnn}}, \tilde{\mathcal{T}}_{\text{exp}}, \tilde{\mathcal{V}}_{\text{dnn}} \}$, where the \emph{\textasciitilde} denotes that they contain closed-loop trajectories $\tilde{x}$. The set $\tilde{\mathcal{G}}_{\text{dnn}}$ is used to compute the safe set with two different methods. The sets $\tilde{\mathcal{T}}_{\text{dnn}}$ and $\tilde{\mathcal{T}}_{\text{exp}}$ are used to compare the size of the safe sets for the approximate and the exact controller. The initial values of the corresponding trajectories in $\tilde{\mathcal{T}}_{\text{dnn}}$ and $\tilde{\mathcal{T}}_{\text{exp}}$ are identical. The validation set $\tilde{\mathcal{V}}_{\text{dnn}}$ is used to compute the probability with which a trajectory starting from an initial value within a computed safe set does not violate the constraints. Sets denoted with a \emph{dnn} in the subscript contain trajectories obtained via closed-loop simulation with the approximate controller~\eqref{eq:cl_dnn} whereas the subscript \emph{exp} indicates trajectories obtained with the exact MPC~\eqref{eq:cl_exp}. The data sets are evaluated by requirements formulated using metric temporal logic (MTL) \cite{baier2008principles}:
\begin{align}
\rho(\tilde{x}) = \square_{\interval{0}{k_{\text{end}}}}(C_x x_k \leq c_x \land C_u u_k \leq c_u), \label{eq:log_requirement}
\end{align}
where $\land$ is the \emph{logical and} operator and $C_x$, $c_x$, $C_u$ and $c_u$ describe the state and input constraints as defined in~\eqref{eq:FTOCP}. This requirement states that $C_x x_k$ and $C_u u_k$ must \emph{always} be smaller than or equal to $c_x$ and $c_u$, respectively, between time steps $0$ and $k_{\text{end}}$. If this is true, we obtain $\rho(\tilde{x}) = +1$ and $\rho(\tilde{x}) = -1$ otherwise.
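Evaluating \eqref{eq:log_requirement} on a stored trajectory amounts to a conjunction of the linear constraint checks over all time steps; a minimal sketch (NumPy; the box constraints and trajectories are illustrative placeholders, not data from the examples of this paper):

```python
import numpy as np

def rho(x_traj, u_traj, C_x, c_x, C_u, c_u):
    """Evaluate the 'always' requirement: +1 if C_x x_k <= c_x and
    C_u u_k <= c_u hold for every stored time step, -1 otherwise."""
    ok_x = all(np.all(C_x @ x <= c_x) for x in x_traj)
    ok_u = all(np.all(C_u @ u <= c_u) for u in u_traj)
    return +1 if (ok_x and ok_u) else -1

# Illustrative box constraints |x_i| <= 1 and |u| <= 1 in half-space form.
C_x, c_x = np.vstack([np.eye(2), -np.eye(2)]), np.ones(4)
C_u, c_u = np.array([[1.0], [-1.0]]), np.ones(2)

good = [np.array([0.5, -0.2]), np.array([0.1, 0.0])]
bad  = [np.array([0.5, -0.2]), np.array([1.4, 0.0])]   # violates |x_1| <= 1
u_tr = [np.array([0.3]), np.array([-0.1])]

assert rho(good, u_tr, C_x, c_x, C_u, c_u) == +1
assert rho(bad,  u_tr, C_x, c_x, C_u, c_u) == -1
```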
By replacing $\square$ with $\lozenge$ in \eqref{eq:log_requirement} the meaning would change from \emph{always} to \emph{eventually}. More complex requirements can be included using MTL~\cite{baier2008principles},~\cite{quendlin_2018}. The requirement~\eqref{eq:log_requirement} is used to evaluate the previously mentioned sets of trajectories $\tilde{\mathcal{D}} \in \{ \tilde{\mathcal{G}}_{\text{dnn}}, \tilde{\mathcal{T}}_{\text{dnn}}, \tilde{\mathcal{T}}_{\text{exp}}, \tilde{\mathcal{V}}_{\text{dnn}} \}$. Thus, each data set is split into one containing all valid trajectories (marked with \emph{$+$} in the exponent) and into one containing all invalid trajectories (marked with \emph{$-$} in the exponent), for instance $\tilde{\mathcal{D}}^+ = \{\tilde{x} \, | \, \tilde{x} \in \tilde{\mathcal{D}} \land \rho(\tilde{x}) = +1 \}$. The set $\mathcal{D} \in \{ \mathcal{G}_{\text{dnn}}, \mathcal{T}_{\text{dnn}}, \mathcal{T}_{\text{exp}}, \mathcal{V}_{\text{dnn}} \}$ contains the initial value of every trajectory in the corresponding sets $\tilde{\mathcal{D}}$.
\subsubsection{Safe sets}
Two approaches are applied to obtain an explicit description of the safe set from closed-loop data containing factual negatives and positives. After deriving a probabilistic safe set $\mathcal{S}$, we define all $x \in \mathcal{S}$ as probabilistic positives, and all $x \not\in \mathcal{S}$ as probabilistic negatives. The first approach uses an $m$-dimensional ellipsoid given by $x^T E x = 1$ with $E \in \mathbb{R}^{m \times m}$ and $E = E^T$.
With the following convex optimization problem, it is possible to find the ellipsoidal safe set of maximal size (in the sense of minimizing $\Tr(E)$) such that no initial values of invalid trajectories are contained in it:
\begin{subequations}\label{eq:ellipsoid}
\begin{align}
&\underset{E}{\text{minimize}} && \Tr(E) \span \span \span \\
&\text{subject to} && E \succeq 0, \\
& && x^T E x \geq (1+\epsilon), \,\, \forall x \in \mathcal{G}_{\text{dnn}}^-
\end{align}
\end{subequations}
where $\epsilon \geq 0$ is a tuning parameter to increase robustness, which is desirable due to the finite number of points in $\mathcal{G}_{\text{dnn}}^-$ used for the computation of the ellipsoid. The resulting ellipsoidal set is then described by
\begin{align}\label{eq:ellipsoidal_safe_set}
\mathcal{S}_{\text{ell}} = \left\{ \, x \, \middle\vert \, x^T E x \leq 1\right\}.
\end{align}
In this work, the value of $\epsilon$ was tuned via trial and error such that $x^T E x > 1$ for all $x \in V^-_{\text{dnn}}$, which is equivalent to the absence of false positives in the validation set. The second approach relies on support vector machines (SVMs) for classification~\cite{cortes1995support} to derive a less restrictive safe set. The SVM learning problem is given by:
\begin{subequations}\label{eq:SVM}
\begin{align}
&\underset{w,v,\zeta}{\text{minimize}} && \frac{1}{2}w^Tw + C \sum_{m=1}^{|\mathcal{G}_{\text{dnn}}|}{\zeta_m} \span \span \span \\
&\text{subject to} && y_m(w^T \phi(x_m)+v) \geq 1- \zeta_m, \\
& && \zeta_m \geq 0, \\
& && x_m \in \mathcal{G}_{\text{dnn}},\\
& && \forall m = 1,\dots,|\mathcal{G}_{\text{dnn}}|,
\end{align}
\end{subequations}
where $w$ is the weight vector, $v$ is the bias and $\phi(\cdot)$ defines a kernel $\kappa(x_i,x_j) = \phi(x_i)^T\phi(x_j)$ and $\zeta_m$ are slack variables to relax the problem. The tuning parameter $C>0$ is a penalty term to weigh the importance of misclassification errors and $y_m \in \{-1,+1\}$ are the class labels.
The resulting relaxed safe set depends only on the $m_{\text{cr}}$ training points $x_i$ for which the constraints in~\eqref{eq:SVM} are active. These $x_i$ are called support vectors and define the set description:
\begin{align}\label{eq:SVM_safe_set}
\mathcal{S}_{\text{SVM}} = \Big\{x \, \Big| \, \sum_{i=1}^{m_{\text{cr}}} \alpha_i y_i \kappa(x_i,x)+v \geq 1 \Big\},
\end{align}
where $\alpha_i$ are the dual variables of~\eqref{eq:SVM} associated with the support vectors.
\begin{figure}
\caption{Exemplary visualization of the safe sets obtained from data containing factual positives (circles) and factual negatives (crosses) for a system with two states $x_1$ and $x_2$. The ellipsoidal robust safe set contains only factual positives, but the less conservative safe set computed by a SVM contains factual negatives which are considered as false positives.}
\label{fig:safe_sets}
\end{figure}
The two types of safe sets are illustrated exemplarily in Fig.~\ref{fig:safe_sets}. The circles depict initial values of trajectories $x = \tilde{x}(0)$ without constraint violations ($\rho(\tilde{x})=+1$, factual positives) whereas crosses represent those of invalid trajectories ($\rho(\tilde{x})=-1$, factual negatives). The ellipsoidal set (green) is more conservative, but it does not contain any initial values that lead to constraint violations. The larger set (blue) computed by SVMs allows a controlled trade-off between the size of the safe set and the proneness towards misclassification. Two different types of classification errors can be distinguished taking the view of the probabilistic safe sets. False negatives are initial values from which trajectories are falsely classified by a safe set as leading to constraint violations whereas false positives are initial values from which trajectories are considered to be safe but in fact lead to constraint violations. False positives are the worst case since they might lead to actual constraint violations of the closed-loop system.
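The classification picture of Fig.~\ref{fig:safe_sets} can be sketched numerically for the ellipsoidal set \eqref{eq:ellipsoidal_safe_set}; in the sketch below, $E$ is fixed by hand instead of being computed via \eqref{eq:ellipsoid}, and the labeled initial values are illustrative:

```python
import numpy as np

def in_ellipsoidal_safe_set(x, E):
    """Membership test for S_ell = {x | x^T E x <= 1}."""
    return float(x @ E @ x) <= 1.0

# Illustrative E (hand-picked, not computed via the SDP of eq. (ellipsoid)).
E = np.diag([4.0, 4.0])             # safe set: disc of radius 1/2

# Labeled initial values: +1 = factually safe trajectory, -1 = violation.
points = [(np.array([0.1, 0.1]), +1),   # inside, safe    -> true positive
          (np.array([0.9, 0.0]), -1),   # outside, unsafe -> true negative
          (np.array([0.6, 0.0]), +1)]   # outside, safe   -> false negative

false_pos = sum(1 for x, lab in points
                if in_ellipsoidal_safe_set(x, E) and lab == -1)
false_neg = sum(1 for x, lab in points
                if not in_ellipsoidal_safe_set(x, E) and lab == +1)
assert false_pos == 0 and false_neg == 1
```

The sketch reflects the conservativeness of the ellipsoidal set: no false positives occur, at the price of discarding some factually safe initial values as false negatives.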
In order to assess the conservativeness of a computed safe set $\mathcal{S} \in \{ \mathcal{S}_{\text{ell}}, \mathcal{S}_{\text{SVM}} \}$, its size is compared to the size of the safe set that is obtained with the exact controller. The initial values in the test data set $\mathcal{T}_{\text{dnn}}$ are evaluated via~\eqref{eq:ellipsoidal_safe_set} or \eqref{eq:SVM_safe_set} to obtain the set $\mathcal{S}^+_{\mathcal{T}_{\text{dnn}}} = \{ x \, | \, x \in \mathcal{S} \land x \in \mathcal{T}_{\text{dnn}} \}$, containing all initial values which are part of the safe set. We define the volume of the computed safe set in relation to the exact one as:
\begin{align}
m_{\text{vol}} = \frac{|\mathcal{S}^+_{\mathcal{T}_{\text{dnn}}} \cap \mathcal{T}_{\text{exp}}^+|}{|\mathcal{T}_{\text{exp}}^+|}, \label{eq:m_vol}
\end{align}
where the numerator gives the number of initial values in the test set which lead to feasible trajectories for the approximate and the exact solution and the denominator gives the number of all initial values which lead to feasible trajectories with the exact solution.
\subsubsection{Validation}
Since the computation of the safe sets is based on a finite number of data points, we analyze the approximation quality of the computed safe sets by computing the proportion of false positives as well as by applying methods from statistical learning theory (STL)~\cite{luxburg2011statistical}. The proportion of false positives for classifying the initial values in $\mathcal{V}_{\text{dnn}}$ is given by:
\begin{align}
m_{\text{fp}} = \frac{|\mathcal{S}^+_{\mathcal{V}_{\text{dnn}}} \cap \mathcal{V}_{\text{dnn}}^-|}{|\mathcal{S}^+_{\mathcal{V}_{\text{dnn}}}|}, \label{eq:m_fp}
\end{align}
where $\mathcal{S}^+_{\mathcal{V}_{\text{dnn}}} = \{ x \, | \, x \in \mathcal{S} \land x \in \mathcal{V}_{\text{dnn}} \}$. To make probabilistic statements about the safe set with a certain confidence, STL is used.
While~\cite{hertneck2018learning} recently used STL to define an upper bound on the approximation error of an approximate controller to make statements about the closed-loop behavior, in this work STL is used to directly validate the closed-loop performance of the approximate solution. This means that the validation strategy is valid regardless of the (potentially incorrect) model that is used to compute the MPC solution, provided that a simulator of the real system exists. An indicator function $I(x)$ is introduced to assign to all initial values a risk via the temporal logic requirement defined in~\eqref{eq:log_requirement}:
\begin{align}\label{eq:indicator}
I(x) = \begin{cases} 1 & \text{if} \, \, \rho(\tilde{x}) = +1,\\ 0 & \text{if} \, \, \rho(\tilde{x}) = -1, \end{cases}
\end{align}
with $x = \tilde{x}(0)$. The expected value of \eqref{eq:indicator} for all $x \in \mathcal{S}_{\mathcal{V}_{\text{dnn}}}^+$ describes the empirical risk $r_{\text{emp}}$ that a trajectory starting from initial value $x \in \mathcal{S}$ leads to constraint satisfaction over the whole time period $[0, k_{\text{end}}]$:
\begin{align}\label{eq:empirical_risk}
r_{\text{emp}} = \frac{1}{n_{\text{s}}}\sum_{i=1}^{n_{\text{s}}}{I(x_i)}
\end{align}
where $n_{\text{s}} = |\mathcal{S}_{\mathcal{V}_{\text{dnn}}}^+|$. The empirical risk $r_{\text{emp}}$ is based on a data set and is the best approximation of the true risk $r_{\text{true}}$, which is based on all $x \in \mathcal{S}$. Hoeffding's inequality \cite{hoeffding1994probability} provides an upper bound on the probability that the empirical risk $r_{\text{emp}}$ deviates more than $\delta$ from the true risk $r_{\text{true}}$:
\begin{align}\label{eq:hoeffding}
P \left( \left| r_{\text{emp}} - r_{\text{true}} \right| \geq \delta\right) \leq 2 \exp(-2n_{\text{s}}\delta^2).
\end{align}
Thus, for all $x \in \mathcal{S}$
\begin{align}\label{eq:safety_and_confidence}
P(I(x)=1) = r_{\text{true}} \geq r_{\text{emp}} - \delta,
\end{align}
with confidence level $h_{\delta} = 1 - 2 \exp(-2n_{\text{s}}\delta^2)$. The meaning of~\eqref{eq:safety_and_confidence} is that a trajectory starting from an initial value within the safe set will not violate the constraints with a probability greater than or equal to $r_{\text{emp}} - \delta$.
\subsection{Alternative approximation methods}
The proposed deep learning-based approximate explicit MPC approach is compared to other approximation approaches. The first alternative approximates the explicit controller using multi-variate polynomials of the form $\mathcal{P}:\mathbb{R}^{n_x} \rightarrow \mathbb{R}^{n_u}$ with degree $p$:
\begin{align}\label{eq:approx_poly}
\mathcal{P}(x;\alpha,p) = \begin{bmatrix} \overset{p}{\underset{i_{1}=0}{\sum}} \dots \overset{p}{\underset{i_{n_x}=0}{\sum}} a_{1,m} \prod_{j=1}^{n_x} x_j^{i_{j}} \\ \vdots \\ \overset{p}{\underset{i_{1}=0}{\sum}} \dots \overset{p}{\underset{i_{{n_x}}=0}{\sum}} a_{n_u,m} \prod_{j=1}^{n_x} x_j^{i_{j}} \\ \end{bmatrix}
\end{align}
where the index $m = \sum_{j=1}^{n_x} i_j$ and $\alpha_i = \{a_{i,1},\dots,a_{i,(p+1)^{n_x}} \}$ for $i=1,\dots,n_u$ contains all coefficients. The coefficients of the polynomials are computed by solving
\begin{align}\label{eq:training_poly}
\underset{\alpha}{\text{minimize}}\,\,\,\frac{1}{n_{\text{tr}}}\sum_{i=1}^{n_{\text{tr}}} ||\mathcal{P}(x_{\text{tr},i};\alpha,p)-u_{\text{tr},i}||^2
\end{align}
where $u_{\text{tr},i}$ is the exact optimal control input obtained solving~\eqref{eq:condensed} for each training point $x_{\text{tr},i}$. The memory footprint of a multi-variate polynomial is given by
\begin{equation}
\Gamma_{\mathcal{P}} = \alpha_{\text{bit}} n_u (p+1)^{n_x}.
\end{equation}
The second method is similar to the approach in \cite{holaza2013}.
We use the partition of an explicit MPC description with a shorter horizon $N \leq N_{\text{max}}$ (and therefore fewer regions) and adapt the parameters $\lambda = \{\lambda_1,\dots,\lambda_{n_{\text{r}}}\}$, where $\lambda_i = \{K_i,g_i\}$, by solving the following optimization problem: \begin{align}\label{eq:training_optimized} \underset{\lambda}{\text{minimize}}\,\,\,\frac{1}{n_{\text{tr}}}\sum_{i=1}^{n_{\text{tr}}} ||\mathcal{L}_N(x_{\text{tr},i};\lambda)-u_{\text{tr},i}||^2. \end{align} We denote the optimized descriptions $\mathcal{L}:\mathbb{R}^{n_x} \rightarrow \mathbb{R}^{n_u}$ as \begin{align}\label{eq:approx_smaller_horizon} \mathcal{L}_N(x;\lambda) = \begin{cases} K_1 x+g_1 & \text{if} \quad x \in \mathcal{R}_1, \\ & \vdots \\ K_{n_{\text{r}}} x+g_{n_{\text{r}}} & \text{if} \quad x \in \mathcal{R}_{n_{\text{r}}}. \end{cases} \end{align} The memory footprint of the optimized explicit MPC can be estimated as done for the standard explicit MPC~\eqref{eq:memory_empc}. The explicit MPC description and the approximation methods are summarized in Table~\ref{tab:parameters}. We introduce a new abbreviation for explicit MPC laws, $\mathcal K_N(x)$, where $N$ stands for the horizon of the primary problem \eqref{eq:FTOCP} from which they are derived. \begin{table} \caption{Summary of algorithms used, including the exact explicit solution $\mathcal K_N$ and the approximation methods.} \begin{center} \begin{tabular}{lcc} Method & Param. & Explanation \\ \midrule $\mathcal{K}_N(x)$ & $N$ & prediction horizon \\ \hline \multirow{3}{*}{$\mathcal{N}(x;\theta,M,L)$} & $\theta$ & aff. trans. $\{W_l,b_l\}$ $\forall$ layers \\ & $M$ & neurons per hidden layer \\ & $L$ & number of hidden layers\\ \hline \multirow{2}{*}{$\mathcal{P}(x;\alpha,p)$} & $p$ & degree of the polynomial \\ & $\alpha$ & coefficients $a_i$ for all terms\\ \hline \multirow{2}{*}{$\mathcal{L}_N(x;\lambda)$} & $N$ & prediction horizon \\ & $\lambda$ & aff. trans. 
$\{K_i,g_i\}$ $\forall$ regions \\ \bottomrule \end{tabular} \label{tab:parameters} \end{center} \end{table} \section{Simulation results}\label{sec:results} The potential of the proposed approach is illustrated with a simulation example modified from \cite{wang2010} and the classic example of the inverted pendulum on a cart. The goal in both cases is to steer the system to the origin. The approximate methods via polynomials $\mathcal{P}(x;\alpha,p)$~\eqref{eq:approx_poly}, optimized explicit MPC with reduced horizon $\mathcal{L}(x;\lambda)$~\eqref{eq:approx_smaller_horizon} and neural networks $\mathcal{N}(x;\theta,M,L)$~\eqref{eq:neural_network} are compared with respect to their performance and their memory footprint~\eqref{eq:memory_empc} to the exact explicit MPC solution $\mathcal{K}(x)$~\eqref{eq:exp_mpc}. The exact explicit MPC controller is considered the benchmark for the chosen performance index, the average settling time (AST). The AST is defined as the time necessary to steer all states to the origin. A state is considered to be at the origin when $\lvert x_i \rvert \leq \SI{1e-2}{}$. The relative AST (rAST) is the performance measure with respect to the exact solution with the longest horizon $N_{\text{max}}$. In the following, the dependency of the controllers on $x$ and on the parameters is dropped for the sake of brevity. Additionally, neural networks $\mathcal{N}(x;\theta,M,L)$ will be referred to by $\mathcal{N}_{M,L}$ and polynomials $\mathcal{P}(x;\alpha,p)$ by $\mathcal{P}_p$. \begin{rem} We investigate in this section the performance of shallow ($L=1$) and deep ($L\geq2$) neural networks. For very deep networks ($L \gg 10$), the vanishing gradient problem can occur in the training phase, which jeopardizes the approximation accuracy. To counteract the effect, measures like highway layers~\cite{srivastava2015highway} can be taken. 
In this work, applying countermeasures is not necessary, because the ReLU networks used are less prone to vanishing gradients and the deepest network considered does not exceed $L=10$ layers. \end{rem} \subsection{Case-studies} Two examples are introduced to investigate the proposed approach. The control tasks are solved many times from different initial conditions. The trajectories generated with the corresponding explicit MPC solutions $\mathcal{K}_7$ and $\mathcal{K}_{10}$ were used to train the different approximation approaches \eqref{eq:training}, \eqref{eq:training_poly} and \eqref{eq:training_optimized}. Since both case-studies include box input constraints, a simple saturation was used to guarantee satisfaction of the input constraints for the approximate controller, as proposed in Remark~\ref{rem:box_constraints}. It is assumed that all states of the systems can be measured. \subsubsection{Oscillating Masses (OM)} The first example represents two horizontally oscillating masses interconnected via a spring, where each one is connected via a spring to a wall, as shown in Fig.~\ref{fig:oscillating_masses}. Both masses can only move horizontally and have a mass of \SI{1}{\kilogram}; each spring has a constant of \SI{1}{\newton\per\metre}. The states of each mass are its position, limited to $\lvert s \rvert \leq \SI{4}{\metre}$, and its speed $v$, limited to $\lvert v \rvert \leq \SI{10}{\metre\per\second}$. A force limited by $\lvert u \rvert \leq \SI{.5}{\newton}$ can be applied to the right mass. 
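The closed-loop evaluation used for both case studies follows the same simple pattern for every controller: apply the saturated control law (cf. Remark~\ref{rem:box_constraints}), propagate the discrete-time dynamics, and check the settling criterion $\lvert x_i \rvert \leq 10^{-2}$. The following Python sketch illustrates this pattern with placeholder dynamics and policy; it is not the exact simulation code used for the reported results:

```python
import numpy as np

def saturate(u, u_max):
    # element-wise clipping, guaranteeing the box input constraints
    return np.clip(u, -u_max, u_max)

def rollout(A, B, policy, x0, u_max, steps):
    # simulate x_{k+1} = A x_k + B sat(policy(x_k))
    x = np.asarray(x0, dtype=float)
    traj = [x]
    for _ in range(steps):
        u = saturate(policy(x), u_max)
        x = A @ x + B @ u
        traj.append(x)
    return traj

def settling_time(traj, tol=1e-2):
    # AST criterion: first step after which all states stay within tol of the origin
    for k in range(len(traj)):
        if all(np.all(np.abs(x) <= tol) for x in traj[k:]):
            return k
    return None
```

For the case studies, `A`, `B` and `policy` would be the system matrices of the respective example and the trained approximate controller $\mathcal{N}_{M,L}$.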
\begin{figure} \caption{Chain of masses connected via springs.} \label{fig:oscillating_masses} \end{figure} The state vector is given by $x = [s_1, v_1, s_2, v_2]^T$ and the system matrices are discretized with first-order hold and a sampling time of \SI{0.5}{\second}, resulting in: \begin{equation*} \begin{aligned} &A= \begin{bmatrix} 0.763 & 0.460 & 0.115 & 0.020 \\ -0.899 & 0.763 & 0.420 & 0.115 \\ 0.115 & 0.020 & 0.763 & 0.460 \\ 0.420 & 0.115 & -0.899 & 0.763 \\ \end{bmatrix}, &B= \begin{bmatrix} 0.014 \\ 0.063 \\ 0.221 \\ 0.367 \\ \end{bmatrix}. \end{aligned} \end{equation*} The benchmark horizon was $N_{\text{max}}=7$, corresponding to 2317 regions. The exact explicit controller $\mathcal{K}_7$ was used to generate 25952 training samples. \subsubsection{Inverted pendulum on cart (IP)} The second example is the inverted pendulum on a cart, illustrated in Fig.~\ref{fig:inverted_pendulum_on_cart}. The goal is to keep the pole upright and the cart in the central position. The states are the angle of the pole $\Phi$, its angular speed $\dot{\Phi}$, the position of the cart $s$ and the speed of the cart $\dot{s}$. The states $x = [\Phi,s,\dot{\Phi},\dot{s}]^T$ are constrained element-wise to $|x| \leq [1, 1.5, 0.35, 1.0]^T$. The force $|u| \leq \SI{1}{\newton}$ is directly applied to the cart. \begin{figure} \caption{Inverted pendulum on a cart.} \label{fig:inverted_pendulum_on_cart} \end{figure} Euler-discretization with a sampling time of \SI{0.1}{\second} was used to obtain the discrete system dynamics given by: \begin{equation*} \begin{aligned} &A= \begin{bmatrix} 1 & 0.1 & 0 & 0 \\ 0 & 0.9818 & 0.2673 & 0 \\ 0 & 0 & 1 & 0.1 \\ 0 & -0.0455 & 3.1182 & 1 \\ \end{bmatrix}, &B= \begin{bmatrix} 0 \\ 0.1818 \\ 0 \\ 0.4546 \\ \end{bmatrix}. \end{aligned} \end{equation*} For this example, the explicit benchmark solution was computed with horizon $N_{\text{max}}=10$, resulting in a PWA function consisting of 1638 polyhedral regions. 
88341 samples were generated to train the approximated controllers. \subsection{Performance} We investigated both examples OM and IP by simulating closed-loop trajectories starting from randomly chosen initial values within the feasible state space. For each initial value, the exact MPC controller $\mathcal{K}_{(\cdot)}$ and the approximate methods were applied. The controller $\mathcal{K}_{(\cdot)}$ provided the benchmark performance. For both case-studies, the evaluation led to similar results, as can be seen in Table~\ref{tab:AST_and_memory}. The proposed deep neural networks $\mathcal{N}_{6,6}$ and $\mathcal{N}_{10,6}$ only use $\SI{0.23}{\percent}$ and $\SI{1.07}{\percent}$ of the memory of the optimal solutions $\mathcal{K}_7$ and $\mathcal{K}_{10}$ while reaching an average AST that is only $\SI{1.5}{\percent}$ and $\SI{3.8}{\percent}$ longer than that of the exact solutions, respectively. The deep neural network clearly achieves the best trade-off between performance and memory requirements. It is interesting to see that the shallow networks $\mathcal{N}_{43,1}$ and $\mathcal{N}_{120,1}$, despite having slightly larger memory footprints than the deep networks, achieve considerably worse performance. A naive polynomial approximation of the explicit MPC does not lead to good results, as the performance that can be achieved with no more than 2 $\SI{}{\kilo\byte}$ is significantly worse than that of the other approximation methods. Even if the optimized explicit MPC with the finest partition $\mathcal{L}_6$ and $\mathcal{L}_7$ is compared to the benchmark, the proposed deep neural network performs slightly better for OM and clearly better for IP while having a much smaller memory footprint. 
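The memory footprints of the networks reported in Table~\ref{tab:AST_and_memory} can be reproduced by counting the weights and biases of a fully connected ReLU network and assuming 64-bit floating-point storage per parameter ($\alpha_{\text{bit}} = 64$). The helper functions below use our own naming and are an illustrative sketch, not library code:

```python
def relu_net_params(n_x, M, L, n_u):
    # number of weights and biases of a fully connected network
    # with L hidden layers of M neurons each
    sizes = [n_x] + [M] * L + [n_u]
    return sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))

def memory_kB(n_params, alpha_bit=64):
    # memory footprint in kB, assuming alpha_bit bits per parameter
    return n_params * alpha_bit / 8 / 1024
```

For $n_x = 4$ and $n_u = 1$, this yields 247 parameters ($\approx \SI{1.93}{\kilo\byte}$) for $\mathcal{N}_{6,6}$ and 611 parameters ($\approx \SI{4.77}{\kilo\byte}$) for $\mathcal{N}_{10,6}$, matching the values in the table.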
\begin{figure} \caption{Position of the first mass (top plot) and control inputs (bottom plot) for different control strategies for one exemplary closed-loop simulation of the oscillating masses.} \label{fig:results} \end{figure} Fig.~\ref{fig:results} and Fig.~\ref{fig:performance_inverted_pendulum_on_cart} show an example of the closed-loop trajectories obtained for each type of controller for the two examples. It can be clearly seen that the polynomial approximation (degree 3) cannot properly approximate the explicit controller. The best results, which are almost identical to the exact explicit controller $\mathcal K_7$ and $\mathcal K_{10}$, are obtained by the deep neural networks $\mathcal{N}_{6,6}$ and $\mathcal{N}_{10,6}$. \begin{figure} \caption{Position of the pendulum (top plot) and control input (bottom plot) for different control strategies for one exemplary closed-loop simulation of the inverted pendulum on a cart.} \label{fig:performance_inverted_pendulum_on_cart} \end{figure} \begin{table} \centering \caption{Comparison of the relative average settling time (rAST) for 10000 simulation runs and memory footprint $\Gamma$ for different controllers for the oscillating masses (OM) and inverted pendulum on cart (IP) example.} \begin{tabular}{ccccccc} &&&&&&\\ OM & $\mathcal{K}_7$ & $\mathcal{L}_6$ & $\mathcal{L}_3$ & $\mathcal{P}_3$ & $\mathcal{N}_{6,6}$ & $\mathcal{N}_{43,1}$ \\ \midrule rAST [-]& 1 & 1.020 & 1.113 & 1.407 & 1.015 & 1.125\\ $\Gamma \,[\SI{}{\kilo\byte}]$ & 691.9 & 431.8 & 38.3 & 2.00 & 1.93 & 2.02 \\ \bottomrule &&&&&&\\ &&&&&&\\ IP & $\mathcal{K}_{10}$ & $\mathcal{L}_7$ & $\mathcal{L}_6$ & $\mathcal{P}_3$ & $\mathcal{N}_{10,6}$ & $\mathcal{N}_{120,1}$ \\ \midrule rAST [-]& 1 & 1.897 & 2.273 & 2.276 & 1.038 & 1.060\\ $\Gamma \,[\SI{}{\kilo\byte}]$ & 444.7 & 191.5 & 137.1 & 2.00 & 4.77 & 5.63 \\ \bottomrule \end{tabular} \label{tab:AST_and_memory} \end{table} \subsection{Statistical verification} To derive the set in which the application 
of the proposed controller is safe, explicit descriptions of safe sets are computed via~\eqref{eq:ellipsoid}~and~\eqref{eq:SVM}. The data sets containing the initial values of the trajectories and their cardinalities are $|\mathcal{G}_{\text{dnn}}| = 20000$, $|\mathcal{T}_{\text{exp}}| = |\mathcal{T}_{\text{dnn}}| = 10000$ and $|\mathcal{V}_{\text{dnn}}| = 40000$. In the case of an exact approximation, $\mathcal{N}_{6,6}$ and $\mathcal{N}_{10,6}$ would be equivalent to $\mathcal{K}_7$ and $\mathcal{K}_{10}$, respectively. This would mean that all initial values $x \in \mathcal{D}^+$ are part of the safe set. Since $\mathcal{N}_{6,6}$ and $\mathcal{N}_{10,6}$ are only approximations, the test sets $\mathcal{T}_{\text{dnn}}^+$ and $\mathcal{T}_{\text{exp}}^+$ are directly compared to derive a first naive measure of the approximation quality. The ratio of the cardinalities of the two sets: \begin{align} m_{\text{dir}} = \frac{|\mathcal{T}_{\text{dnn}}^+|}{|\mathcal{T}_{\text{exp}}^+|}, \label{eq:m_dir} \end{align} is \SI{97.7}{\percent} for OM and \SI{95.1}{\percent} for IP, as pointed out in the rows of Table~\ref{tab:safe_set_and_approximation} denoted by \emph{direct}. \begin{table} \centering \caption{Comparison of approximate neural network controllers and exact explicit MPC and their safe sets for the examples oscillating masses (OM) and inverted pendulum on cart (IP). } \begin{tabular}{ccccc} OM & vol. [\SI{}{\percent}] & false pos. [\SI{}{\percent}] & safety [\SI{}{\percent}] & confidence [\SI{}{\percent}] \\ \midrule direct & 97.7 & - & - & - \\ ellipsoidal & 70.5 & 0 & 98.5 & >99.9 \\ SVM & 98.3 & 1.6 & 96.9 & >99.9 \\ \bottomrule &&&& \\ &&&& \\ IP & vol. [\SI{}{\percent}] & false pos. 
[\SI{}{\percent}] & safety [\SI{}{\percent}] & confidence [\SI{}{\percent}] \\ \midrule direct & 95.1 & - & - & - \\ ellipsoidal & 53.3 & 0 & 97.0 & >99.9 \\ SVM & 83.1 & 1.8 & 95.1 & >99.9 \\ \bottomrule \end{tabular} \label{tab:safe_set_and_approximation} \end{table} The optimization problem \eqref{eq:ellipsoid} is solved to obtain explicit formulations of an ellipsoidal safe set $\mathcal{S}_{\text{ell}}$. The estimated volume~\eqref{eq:m_vol} is \SI{70.5}{\percent} for OM and \SI{53.3}{\percent} for IP. This restricts the usage of the approximated controller to a significantly smaller volume, but the absence of false positives on the validation set ($m_{\text{fp}} = 0$)~\eqref{eq:m_fp} indicates a certain robustness, limited in this case only by the use of a finite number of data points. By applying~\eqref{eq:safety_and_confidence}, for both examples it can be said with confidence $>\SI{99.9}{\percent}$ that a trajectory starting from $x \in \mathcal{S}_{\text{ell}}$ will not violate the constraints with a probability $\geq\SI{97.0}{\percent}$. The results considering $\mathcal{S}_{\text{ell}}$ are given in the rows of Table~\ref{tab:safe_set_and_approximation} named \emph{ellipsoidal}. To overcome the conservativeness of $\mathcal{S}_{\text{ell}}$ with respect to the covered volume $m_{\text{vol}}$, a relaxed safe set $\mathcal{S}_{\text{SVM}}$ is computed via~\eqref{eq:SVM}. Since the data is not linearly separable, the radial basis function (RBF) is chosen as the kernel: \begin{align} \kappa(x, x_i) = \exp\left( -\nu \, \| x-x_i \|^2\right),\label{eq:rbf} \end{align} where $\nu$ is a tuning parameter defining the width of the kernel function. The volumes of the SVM safe sets are $\SI{98.3}{\percent}$ for OM and $\SI{83.1}{\percent}$ for IP, while less than $\SI{2}{\percent}$ of the validation set is classified as false positives. 
This proportion of false positives leads to a slightly reduced safety of $\SI{96.9}{\percent}$ for OM and $\SI{95.1}{\percent}$ for IP with confidence $>\SI{99.9}{\percent}$. The results are summarized in the rows of Table~\ref{tab:safe_set_and_approximation} labelled \emph{SVM}. This shows that the proposed controller can be applied within the safe sets with a small risk of constraint violation and with high confidence. Especially if the application can be allowed in $\mathcal{S}_{\text{SVM}}$, the volume of the safe set is similar to that of the explicit MPC solution while providing comparable performance. For safety-critical applications, fallback strategies should be available for use when the approximate controller cannot achieve the desired performance. \subsection{Embedded implementation} The embedded implementation of the proposed approximate neural network controllers is straightforward, since evaluating neural networks consists only of multiplications, additions and the evaluation of simple nonlinearities, which are in this case rectified linear units. Both networks were deployed on a low-cost \SI{32}{\bit} microcontroller (SAMD21 Cortex-M0+) with \SI{32}{\kilo\byte} of RAM and \SI{48}{\mega\hertz} clock speed. The evaluation time of the $\mathcal{N}_{6,6}$-controller for the oscillating masses example was \SI{1.6}{\milli\second} and the code required \SI{23.2}{\kilo\byte} of memory. The code of the $\mathcal{N}_{10,6}$-controller for the inverted pendulum had a slightly larger memory footprint of \SI{24.6}{\kilo\byte} and the evaluation time was \SI{3.8}{\milli\second}. The code was automatically generated with the open-source toolbox \emph{edgeAI} \cite{karg2018}. \subsection{Binary Search Trees} The use of binary search trees (BSTs) can reduce the memory requirements and especially the evaluation time of the explicit MPC solutions~\cite{tondel2003evaluation} compared to a standard explicit MPC implementation. 
However, we have not included the corresponding results for BSTs for the given examples, since computing them for $N \geq 3$ with the toolbox MPT3~\cite{MPT3} was intractable. For shorter horizons, the BST led to a memory footprint reduction of around \SI{25}{\percent}. For instance, for the inverted pendulum on a cart with horizon $N=2$, the BST led to a reduction of \SI{21.9}{\percent}, but it was not possible to solve problems with longer horizons. \section{Conclusions and future work}\label{sec:conclusions} We have shown that explicit MPC formulations can be exactly represented by deep neural networks with rectifier units as activation functions and included explicit bounds on the dimensions of the required neural networks. The choice of deep networks is especially interesting for the representation of explicit MPC laws, as the number of regions that deep networks can represent grows exponentially with their depth. This notion was exploited to propose an approximation method for explicit MPC solutions. Stochastic verification techniques have been used to ensure constraint satisfaction if the neural network is used to approximate, and not to exactly represent, the explicit MPC law within a safe set. Simulation results show that the proposed deep learning-based explicit MPC achieves better performance than other approximate explicit MPC methods with significantly smaller memory requirements. This significant reduction of the memory footprint enabled the deployment of the proposed deep neural network controllers on a low-power embedded device with constrained resources. Future work includes the design of stability-guaranteeing formulations and the computation of safe sets which allow predicting the probability and the magnitude of constraint violations via Gaussian process regression. 
\ifCLASSOPTIONcaptionsoff \fi \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/benji_cv.JPG}}]{Benjamin Karg} was born in Burglengenfeld, Germany, in 1992. He received the B.Eng. degree in mechanical engineering from Ostbayerische Technische Hochschule Regensburg, Regensburg, Bavaria, Germany, in 2015, and his M.Sc. degree in systems engineering and engineering cybernetics from Otto-von-Guericke Universität, Magdeburg, Saxony-Anhalt, Germany, in 2017. He currently works as a research assistant at the laboratory "Internet of Things for Smart Buildings", Technische Universität Berlin, Germany, to pursue his PhD. He is also member of the Einstein Center for Digital Future. His research is focused on control engineering, artificial intelligence and edge computing for IoT-enabled cyber-physical systems. \end{IEEEbiography} \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{figures/sergio_cv.JPG}}]{Sergio Lucia} (M’16) received the M.Sc. degree in electrical engineering from the University of Zaragoza, Zaragoza, Spain, in 2010, and the Dr. Ing. degree in optimization and automatic control from the Technical University of Dortmund, Dortmund, Germany, in 2014. He joined the Otto-von-Guericke Universität Magdeburg and visited the Massachusetts Institute of Technology as a Postdoctoral Fellow. Since May 2017, he has been an Assistant Professor and Chair with the Laboratory of “Internet of Things for Smart Buildings”, Technische Universität Berlin, Berlin, Germany, and with Einstein Center Digital Future, Berlin. His research interests include decision-making under uncertainty, distributed control, as well as the interplay between machine learning techniques and control theory. Dr. Lucia is currently Associate Editor of the Journal of Process Control. \end{IEEEbiography} \end{document}
Smallest 17 Primes in Arithmetic Progression

Theorem

The smallest $17$ primes in arithmetic progression are:

$3\,430\,751\,869 + 87\,297\,210 n$ for $n = 0, 1, \ldots, 16$.

Proof

First we note that:

$3\,430\,751\,869 - 87\,297\,210 = 3\,343\,454\,659 = 17\,203 \times 194\,353$

and so this arithmetic progression of primes does not extend to $n < 0$.

Then:

$3\,430\,751\,869 + 0 \times 87\,297\,210 = 3\,430\,751\,869$, which is prime

$\vdots$

$3\,430\,751\,869 + 10 \times 87\,297\,210 = 4\,303\,723\,969$, which is prime

$\vdots$

and similarly each term for $n = 0, 1, \ldots, 16$ is prime.

But note that:

$3\,430\,751\,869 + 17 \times 87\,297\,210 = 4\,914\,804\,439 = 41 \times 97 \times 1\,235\,807$

and so is not prime.

This theorem requires a proof: it remains to be shown that there is no smaller such arithmetic progression of $17$ primes.

Historical Note

David Wells, in his Curious and Interesting Numbers of $1986$, reported that this was the $2$nd longest known arithmetic progression of prime numbers. It was discovered in $1977$ by Sol Weintraub. Since that time, many longer ones have been found.

Sources

Oct. 1977: Sol Weintraub: Seventeen Primes in Arithmetic Progression (Math. Comp. Vol. 31, No. 140: 1030) www.jstor.org/stable/2006135
1981: Richard K. Guy: Unsolved Problems in Number Theory
1986: David Wells: Curious and Interesting Numbers: $3,430,751,869$

Retrieved from "https://proofwiki.org/w/index.php?title=Smallest_17_Primes_in_Arithmetic_Progression&oldid=385011"
This page was last modified on 28 December 2018.
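The primality claims above can be checked computationally. The following Python sketch uses a deterministic Miller-Rabin test; the chosen witnesses are known to be sufficient for all numbers below $3.4 \times 10^{14}$, which comfortably covers the terms of this progression:

```python
def is_prime(n):
    # deterministic Miller-Rabin for n < 3.4e14 with witnesses 2..17
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

# the 17 terms of the progression
terms = [3430751869 + 87297210 * n for n in range(17)]
```

All $17$ terms are prime, while the terms for $n = -1$ and $n = 17$ factor as stated above, confirming that the progression cannot be extended in either direction.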